TV Re-runs, Watson and My Blog

When I was a wee lad … back in the 60s … I used to rush home from elementary school to watch the re-runs on TV.  This was long before middle school and girls.  HOMEWORK, SCHMOMEWORK !!!  … I just had to see those re-runs before anything else.  My favorites were I Love Lucy, Batman, Leave It To Beaver and The Munsters.  I also watched The Patty Duke Show (big time school boy crush) but my male ego prevents me from admitting I liked it.  Did you know the invention of the re-run is credited to Desi Arnaz?  The man was a genius even though Batman was always my favorite.  Still is.  I had my priorities straight even back then.

I am reminded of this because I have that same Batman-like re-run giddiness as I think about the upcoming re-runs of Jeopardy! currently scheduled to air September 12th – 14th.

You’ve probably figured out why I am so excited, but in case you’ve been living in a cave, not reading this blog, or both … IBM Watson competed (and won) on Jeopardy! in February against the two most accomplished Grand Champions in the history of the game show (Ken Jennings and Brad Rutter).  Watson (DeepQA) is the world’s most advanced question answering machine that uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, IBM was able to demonstrate a major breakthrough in computing.

Unlike traditional structured data, human natural language is full of ambiguity … it is nuanced and filled with contextual references.  Subtle meaning, irony, riddles, acronyms, idioms, abbreviations and other language complexities all present unique computing challenges not found with structured data.  This is precisely why IBM chose Jeopardy! as a way to showcase the Watson breakthrough.

Appropriately, I’ve decided that this posting should be a re-run of my own Watson and content analysis related postings.  So in the spirit of Desi, Lucy, Batman and Patty Duke … here we go:

  1. This is my favorite post of the bunch.  It explains how the same technology used to play Jeopardy! can give you better business insight today.  “What is Content Analytics?, Alex”
  2. I originally wrote this a few weeks before the first match was aired to explain some of the more interesting aspects of Watson.  10 Things You Need to Know About the Technology Behind Watson
  3. I wrote this posting just before the three day match was aired live (in February) and updated it with comments each day.  Humans vs. Watson (Programmed by Humans): Who Has The Advantage?
  4. Watson will be a big part of the future of Enterprise Content Management and I wrote this one in support of a keynote I delivered at the AIIM Conference.   Watson and The Future of ECM  (my slides from the same keynote are posted here).
  5. This was my most recent posting.  It covers another major IBM Research advancement in the same content analysis technology space.  TAKMI and Watson were recognized as part of IBM’s Centennial as two of the top 100 innovations of the last 100 years.  IBM at 100: TAKMI, Bringing Order to Unstructured Data
  6. I wrote a similar IBM Centennial posting about IBM Research and Watson.  IBM at 100: A Computer Called Watson
  7. This was my first Watson related post.  It introduced Watson and was posted before the first match was aired.  Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Desi Arnaz may have been a genius when it came to TV re-runs but the gang at IBM Research has made a compelling statement about the future of computing.  Jeopardy! shows what is possible and my blog postings show how this can be applied already.  The comments from your peers on these postings are interesting to read as well.

Don’t miss either re-broadcast.  Find out where and when Jeopardy! will be aired in your area.  After the TV re-broadcast, I will be doing some events including customer and public presentations.

On the web …

  • I will be presenting IBM Watson and the Future of Enterprise Content Management on September 21, 2011 (replay here).
  • I will be speaking on Content Analytics in a free upcoming AIIM UK webinar on September 30, 2011 (replay here).

Or in person …

You might also want to check out the new Smarter Planet interview with Manoj Saxena (IBM Watson Solutions General Manager).

As always, your comments and thoughts are welcome here.

IBM at 100: TAKMI, Bringing Order to Unstructured Data

As most of you know … I have been periodically posting some of the really fascinating top 100 innovations of the past 100 years as part of IBM’s Centennial celebration.

This one is special to me as it represents what is possible for the future of ECM.  I wasn’t around for tabulating machines and punch cards but have long been fascinated by the technology developments in the management and use of content.  As impressive as Watson is … it is only the most recent step in a long journey IBM has been pursuing to help computers better understand natural language and unstructured information.

As most of you probably don’t know … this journey started over 50 years ago in 1957 when IBM published the first research on this subject, entitled A Statistical Approach to Mechanized Encoding and Searching of Literary Information.  Finally … something in this industry older than I am!

Unstructured Information Management Architecture (UIMA)

Another key breakthrough by IBM in this area was the invention of UIMA.  Now an Apache Open Source project and OASIS standard, UIMA is an open, industrial-strength platform for unstructured information analysis and search.  It is the only open standard for text based processing and applications.  I plan to write more on UIMA in a future blog but I mention it here because it was an important step forward for the industry, Watson and TAKMI (now known as IBM Content Analytics).
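UIMA itself is a Java framework, so treat the following as nothing more than a toy sketch of the idea behind it … written in Python with names I made up, not the real UIMA API.  The point is simply that independent annotators each add stand-off annotations to a shared document object as it flows through a pipeline.

```python
# Illustrative sketch only -- NOT the actual UIMA API (which is Java-based).
# It shows the core UIMA idea: independent annotators add stand-off
# annotations to a shared document object as it moves through a pipeline.
import re
from dataclasses import dataclass, field

@dataclass
class Annotation:
    begin: int
    end: int
    type: str

@dataclass
class Document:
    text: str
    annotations: list = field(default_factory=list)

def token_annotator(doc: Document) -> Document:
    """Marks each whitespace-delimited token."""
    for m in re.finditer(r"\S+", doc.text):
        doc.annotations.append(Annotation(m.start(), m.end(), "Token"))
    return doc

def year_annotator(doc: Document) -> Document:
    """Marks simple year mentions such as '1957'."""
    for m in re.finditer(r"\b(19|20)\d{2}\b", doc.text):
        doc.annotations.append(Annotation(m.start(), m.end(), "Year"))
    return doc

def run_pipeline(text: str, annotators) -> Document:
    doc = Document(text)
    for annotate in annotators:
        doc = annotate(doc)
    return doc

doc = run_pipeline("IBM published the first research in 1957.",
                   [token_annotator, year_annotator])
print([(a.type, doc.text[a.begin:a.end]) for a in doc.annotations])
```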

TAKMI

In 1997, IBM researchers at the company’s Tokyo Research Laboratory pioneered a prototype for a powerful new tool capable of analyzing text. The system, known as TAKMI (for Text Analysis and Knowledge Mining), was a watershed development: for the first time, researchers could efficiently capture and utilize the wealth of buried knowledge residing in enormous volumes of text. The lead researcher was Tetsuya Nasukawa.

Over the past 100 years, IBM has had a lot of pretty important inventions but this one takes the cake for me.  Nasukawa-san once said,

“I didn’t invent TAKMI to do something humans could do, better.  I wanted TAKMI to do something that humans could not do.”

In other words, he wanted to invent something humans couldn’t see or do on their own … and isn’t that the whole point and value of technology anyway?

By 1997, text was searchable, if you knew what to look for. But the challenge was to understand what was inside these growing information volumes and know how to take advantage of the massive textual content that you could not read through and digest.

The development of TAKMI quietly set the stage for the coming transformation in business intelligence. Prior to 1997, the field of analytics dealt strictly with numerical and other “structured” data—the type of tagged information that is housed in fixed fields within databases, spreadsheets and other data collections, and that can be analyzed by standard statistical data mining methods.

The technological clout of TAKMI lay in its ability to read “unstructured” data—the data and metadata found in the words, grammar and other textual elements comprising everything from books, journals, text messages and emails, to health records and audio and video files. Analysts today estimate that 80 to 90 percent of any organization’s data is unstructured. And with the rising use of interactive web technologies, such as blogs and social media platforms, churning out ever-expanding volumes of content, that data is growing at a rate of 40 to 60 percent per year.

The key to this success was natural language processing (NLP) technology. Most data mining researchers at the time treated English text as a bag of words, extracting words from character strings based on white space. However, since Japanese text does not use white space as a word separator, IBM researchers in Tokyo applied NLP to extract words, analyze their grammatical features, and identify relationships among words. Such in-depth analysis led to better results in text mining. That’s why this leading-edge text mining technology originated in Japan.
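To make the white-space point concrete, here is a toy sketch of my own (not TAKMI’s code, and wildly simplified): naive white-space splitting does something reasonable for English but returns an entire Japanese phrase as a single “word,” so a dictionary-driven longest-match pass is needed instead.

```python
# Toy illustration of the tokenization problem described above --
# my own simplification, not TAKMI's implementation.

def whitespace_tokenize(text):
    return text.split()

# English: white-space splitting roughly works.
print(whitespace_tokenize("text mining finds patterns"))
# ['text', 'mining', 'finds', 'patterns']

# Japanese has no spaces, so the same approach yields one giant "word".
print(whitespace_tokenize("テキストマイニング"))  # ['テキストマイニング']

# A dictionary-based longest-match pass (hypothetical mini-lexicon) instead:
DICTIONARY = {"テキスト", "マイニング"}

def longest_match_tokenize(text, lexicon, max_len=10):
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in lexicon or length == 1:
                tokens.append(candidate)
                i += length
                break
    return tokens

print(longest_match_tokenize("テキストマイニング", DICTIONARY))
# ['テキスト', 'マイニング']
```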

The complete article on TAKMI can be found at http://www.ibm.com/ibm100/us/en/icons/takmi/

Fast forward to today.  IBM has since commercialized TAKMI as IBM Content Analytics (ICA), a platform to derive rapid insight.  It can transform raw information into business insight quickly without building models or deploying complex systems, enabling knowledge workers to derive insight in hours or days … not weeks or months.  It helps address industry specific problems such as healthcare treatment effectiveness, fraud detection, product defect detection, public safety concerns, customer satisfaction and churn, crime and terrorism prevention and more.

I’d like to personally congratulate Nasukawa-san and the entire team behind TAKMI (and ICA) for such an amazing achievement … and for making the list.  Selected team members who contributed to TAKMI are Tetsuya Nasukawa, Kohichi Takeda, Hideo Watanabe, Shiho Ogino, Akiko Murakami, Hiroshi Kanayama, Hironori Takeuchi, Issei Yoshida, Yuta Tsuboi and Daisuke Takuma.

It’s a shining example of the best form of innovation … the kind that enables us to do something not previously possible.  Being recognized alongside other achievements like the UPC code, the floppy disk, magnetic stripe technology, laser eye surgery, the scanning tunneling microscope, fractal geometry and human genome mapping is quite an honor.

This type of enabling innovation is the future of Enterprise Content Management.  It will be fun and exciting to see if TAKMI (Content Analytics) has the same kind of impact on computing as the UPC code has had on retail shopping … or as laser eye surgery has had on vision care.

What do you think?  As always, leave your thoughts and comments here.

Other similar postings:

Watson and The Future of ECM

“What is Content Analytics?, Alex”

10 Things You Need to Know About the Technology Behind Watson

Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy! 

It’s a Bird … It’s a Plane … It’s ACM! (Advanced Case Management)

ECM and BPM evildoers beware!  The days of creeping requirements … endless application rollout delays … one-size-fits-all user experiences … and blaming IT for all of it are over!

Advanced Case Management is here to save us.  Long before this superhero capability arrived from a smarter planet, we’ve had to use a bevy of workflow and BPM technologies to address the needs of case-centric processes.  In most cases, this has not worked well.  That’s because case-centric processes are different.

Traditional BPM processes tend to be straight-through and transactional with the objective of completing the process in the most efficient way and at the lowest possible cost and risk.

Case centric processes are not straight-through.  They are ad-hoc, collaborative and involve exceptions … sometimes, lots of exceptions.  In certain cases, these processes are so ad-hoc or collaborative that it is not realistic or possible to map them.  That’s because the objective is to make the best decision (within the context of the case) and the path to the right decision may not be known.  Speed and cost are always important but take a backseat to achieving the best outcome … which usually involves customers, partners, employees or even citizens / patients.  You get the idea.

Why should you care?  Most C-level surveys these days list Reinventing Customer Relationships as a top priority.  The same goals are seen again and again:

  • Get closer to customers (top theme)
  • Better understand what our customers need
  • Deliver unprecedented customer service

From a technology perspective … this means we need new tools to build those solutions that enable us to get closer, better understand and deliver optimal service to our customers.  Most customer oriented processes are case centric involving human interactions.  They tend not to be straight-through.

The traditional BPM model which depends on (1) process modeling, (2) process automation and (3) process optimization works fine for the straight-through processes … not so much for case management.

As such, a big gap exists today to build solutions that drive better case outcomes.  To close this gap, new tools that bring people, process and information together in the context of a case are needed when:

  • Processes are collaborative and ad-hoc
  • Activities are event-driven
  • Work is knowledge intensive
  • Content is essential for decision making
  • Outcomes are goal-oriented
  • The judgment of people impacts how the goal is achieved
  • Process is often not predetermined

The discipline of case management is deeply rooted in industries like healthcare, public sector and the legal profession.  Case management concepts are being applied across all industries – and though organizations describe case management differently – they consistently describe the lack of tools needed for their knowledge workers to get their jobs done.  Some organizations may describe their challenges as complaint / dispute management, investigations, interventions, claims processing or other forms of business functions that have a common pattern or problem but not a straight-through process.  Cases also typically involve invoices, contracts, employees, vendors, customers, projects, change requests, exceptions, incidents, audits, electronic discovery and more.

Faster than a speeding bullet!

Yesterday’s BPM development tools simply don’t work for case management applications.  By the time you build the application, too much time has passed, requirements have changed and IT usually gets the blame.  Time-to-value suffers.  I have nothing against BPM application development tools.  I just wouldn’t use a screwdriver to hammer a nail … and neither should you.  Case management solutions require a new kind of development environment and tools.  We need tools that are easy to use and allow a business user (not just IT) to very quickly build a solution.  They should be able to address the comprehensive nature of all case assets and provide a 360 degree view of a case.  They should leverage templates for a fast-start and represent industry best practices.  In the end, they need to significantly shorten time-to-value relative to other approaches.

More powerful than a locomotive!

Since the objective is to empower case based decision making, we need user experiences that are more robust and flexible than those of the past.  We need those experiences to be role-based and personalized so the end-user gets exactly the information they need to progress the case.  The user experience needs to be flexible and extensible … not to mention configurable, to meet unique business, case or user requirements.  The user experience should provide deep contextual data for case work and eliminate disjointed jumping between applications.  It must bring people, process and information together to drive case progression and optimal outcomes.  That way, a single case worker has all the information they need to improve case outcomes.

Able to leap tall buildings in a single bound!

Proactively advising case workers of best practices, historical outcomes, fraud indicators and other relevant insight is also needed.  Leveraging analytics to detect and surface trends, patterns and deviations contributes to better and more consistent outcomes.  In other words, we need powerful analytics for better case outcomes.  Comprehensive reporting and analysis gives case managers visibility across all information types to assess and act quickly.  Real-time dashboards help understand issues before they become a problem.  Unique content analytics can discover deeper case insight.  Bottom line … case managers need insight in order to impact results.

Anatomy of a superhero

Before being rocketed to Earth as some new problem-solving superhero technology … a combination of capabilities is needed to address the needs of case management solutions.  Under the cape and tights of any case management superhero technology, you will find six core capabilities in a seamlessly integrated environment (a rough sketch of how they fit together follows the list):

1 – Content.  By placing the case model in the content repository, information and other artifacts associated with cases are not only selected and viewed but also managed in the context of the case over its lifecycle.  These include collaborations, process steps and other associated case elements.

2 – Process.  Cases may follow static processes that are prescribed for certain business situations.  They may also follow more dynamic paths based on changes to information associated with a case.  Straight through, transactional processes can be called as can more collaborative processes.

3 – Analytics.  Analytics help case workers make the right decisions in cases such as fraudulent insurance claims, social benefit coverage, eligibility for welfare programs and more. Analytics help detect patterns within or across cases, or simply improve overall case handling to optimize case outcomes.

4 – Rules.  Many decisions in a case depend on set values, e.g. interest rates for loans based on credit rating, approval authority for transaction amounts, etc. By separating rules from process, case handling becomes much more agile, as rules can change in lockstep with market changes.

5 – Collaboration.  Finding the right subject matter expert is often critical to making the ad-hoc decisions required to bring a case to an optimal closure. Collaboration in the form of instant messaging, presence awareness and team rooms enables an organization and its case workers to work together to drive outcomes.

6 – Social Software.  Dynamic To Do Lists that are role based help case workers establish conversations and actions that must take place to close cases and link to information about the people that can help.  Users can brainstorm on appropriate solutions and actions and create wikis linked to particular case types to assist colleagues in their case work.
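To make the “seamlessly integrated” point a little more concrete, here is a minimal, purely hypothetical sketch of a case object that ties the six capabilities together.  Every name and field below is my own invention for illustration … it is not IBM’s case model or any product API.

```python
# Hypothetical sketch of a case object touching all six capabilities.
# Names and fields are invented for illustration, not IBM's case model.
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    case_type: str                                      # e.g. "insurance_claim"
    documents: list = field(default_factory=list)       # 1 - Content
    tasks: list = field(default_factory=list)           # 2 - Process steps
    insights: dict = field(default_factory=dict)        # 3 - Analytics output
    collaborators: list = field(default_factory=list)   # 5 - Collaboration
    notes: list = field(default_factory=list)           # 6 - Social / to-dos

# 4 - Rules are kept separate from the process so they can change on their own.
def approval_rule(amount: float) -> str:
    return "manager_approval" if amount > 10_000 else "auto_approve"

claim = Case("C-1001", "insurance_claim")
claim.documents.append("police_report.pdf")
claim.tasks.append(approval_rule(15_000))        # the rule decides the next step
claim.collaborators.append("fraud_sme@example.com")
print(claim.tasks)                               # ['manager_approval']
```

The point of the sketch is simply that the rule lives outside the process definition, so it can change without redeploying the case application.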

If you can’t do those six things … seamlessly … you aren’t very super … or advanced … and you certainly can’t meet the demands of case management solutions.

Advanced Case Management is now saving the world one case and solution at a time.

So “up, up and away” to better case management solutions and outcomes.  As always leave me your thoughts and comments here.

IBM … 100 Years Later

Nearly all the companies our grandparents admired have disappeared.  Of the top 25 industrial corporations in the United States in 1900, only two remained on that list at the start of the 1960s.  And of the top 25 companies on the Fortune 500 in 1961, only six remain there today.  Some of the leaders of those companies that vanished were dealt a hand of bad luck.  Others made poor choices. But the demise of most came about because they were unable simultaneously to manage their business of the day and to build their business of tomorrow.

IBM was founded in 1911 as the Computing Tabulating Recording Corporation through a merger of four companies: the Tabulating Machine Company, the International Time Recording Company, the Computing Scale Corporation, and the Bundy Manufacturing Company.  CTR adopted the name International Business Machines in 1924.  The distinctive culture and product branding has given IBM the nickname Big Blue.

As you read this, IBM begins its 101st year.  As I look back at the last century, the path that led us to this remarkable anniversary has been both rich and diverse.  The innovations IBM has contributed include products ranging from cheese slicers to calculators to punch cards – all the way up to game-changing systems like Watson.

But what stands out to me is what has remained unchanged.  IBM has always been a company of brilliant problem-solvers.  IBMers use technology to solve business problems.  We invent it, we apply it to complex challenges, and we redefine industries along the way.

This has led to some truly game-changing innovation.  Just look at industries like retail, air travel, and government.  Where would we be without UPC codes, credit cards and ATM machines, SABRE, or Social Security?  Visit the IBM Centennial site to see profiles on 100 years of innovation.

We haven’t always been right though … remember OS/2, the PCjr and Prodigy?

100 years later, we’re still tackling the world’s most pressing problems.  It’s incredibly exciting to think about the ways we can apply today’s innovation – new information based systems leveraging analytics to create new solutions, like Watson – to fulfill the promise of a Smarter Planet through smarter traffic, water, energy, and healthcare.  This promise of the future … is incredibly exciting and I look forward to helping IBM pave the way for continued innovation.

Watch the IBM Centennial film “Wild Ducks” or read the book.  IBM officially released a book last week celebrating the Centennial, “Making the World Work Better: The Ideas that Shaped a Century and a Company”.  The book consists of three original essays by leading journalists. They explore how IBM has pioneered the science of information, helped reinvent the modern corporation and changed the way the world actually works.

As for me … I’ve been with IBM since the 2006 acquisition of FileNet and am proud to be associated with such an innovative and remarkable company.

IBM at 100: SAGE, The First National Air Defense Network

This week was a reminder of how technology can aid in our nation’s defense as we struck a major blow against terrorism.  Most people don’t realize how many ways IBM has contributed to our nation’s defense over the years.  Here is just one example from 1949.

When the Soviet Union detonated their first atomic bomb on August 29, 1949, the United States government concluded that it needed a real-time, state-of-the-art air defense system.  It turned to Massachusetts Institute of Technology (MIT), which in turn recruited companies and other organizations to design what would be an online system covering all of North America using many technologies, a number of which did not exist yet.  Could it be done?  It had to be done.  Such a system had to observe, evaluate and communicate incoming threats much the way a modern air traffic control system monitors flights of aircraft.

This marked the beginning of SAGE (Semi-Automatic Ground Environment), the national air defense system implemented by the United States to warn of and intercept airborne attacks during the Cold War.  The heart of this digital system—the AN/FSQ-7 computer—was developed, built and maintained by IBM.  SAGE was the largest computer project in the world during the 1950s and took IBM squarely into the new world of computing.  Between 1952 and 1955, it generated 80 percent of IBM’s revenues from computers, and by 1958, more than 7000 IBMers were involved in the project.  SAGE spun off a large number of technological innovations that IBM incorporated into other computer products.

IBM’s John McPherson led the early conversations with MIT, and senior management quickly realized that this could be one of the largest data processing opportunities since winning the Social Security bid in the mid-1930s.  Thomas Watson, Jr., then lobbying his father and other senior executives to move into the computer market quickly, recalled in his memoirs that he wanted to “pull out all the stops” to be a central player in the project.  “I worked harder to win that contract than I worked for any other sale in my life.”  So did a lot of other IBMers: engineers designing components, then the computer; sales staff pricing the equipment and negotiating contracts; senior management persuading MIT that IBM was the company to work with; other employees collaborating with scores of companies, academics and military personnel to get the project up and running; and yet others who installed, ran and maintained the IBM systems for SAGE for a quarter century.

The online features of the system demonstrated that a new world of computing was possible—and that, in the 1950s, IBM knew the most about this kind of data processing.  As the ability to develop reliable online systems became a reality, other government agencies and private companies began talking to IBM about possible online systems for them.  Some of those projects transpired in parallel, such as the development of the Semi-Automated Business Research Environment (Sabre), American Airlines’ online reservation system, also built using IBM staff located in Poughkeepsie, New York.

In 1952, MIT selected IBM to build the computer to be the heart of SAGE. MIT’s project leader, Jay W. Forrester, reported later that the company was chosen because “in the IBM organization we observed a much higher degree of purposefulness, integration and ‘esprit de corps’ than in other firms,” and because of “evidence of much closer ties between research, factory and field maintenance at IBM.”  The technical skills to do the job were also there, thanks to prior experience building advanced electronics for the military.

IBM quickly ramped up, assigning about 300 full-time IBMers to the project by the end of 1953. Work was centered in IBM’s Poughkeepsie and Kingston, NY facilities and in Cambridge, Massachusetts, home of MIT.  New memory systems were needed; MITRE and the Systems Development Corporation (part of RAND Corporation) wrote software, and other vendors supplied components.  In June 1956, IBM delivered the prototype of the computer to be used in SAGE.  The press release called it an “electronic brain.”  It could automatically calculate the most effective use of missiles and aircraft to fend off attack, while providing the military commander with a view of an air battle. Although this seems routine in today’s world, it was an enormous leap forward in computing.  When fully deployed in 1963, SAGE included 23 centers, each with its own AN/FSQ-7 system, which really consisted of two machines (one for backup), both operating in coordination.  Ultimately, 54 systems were installed, all collaborating with each other. The SAGE system remained in service until January 1984, when it was replaced with a next-generation air defense network.

Its innovative technological contributions to IBM and the IT industry as a whole were significant.  These included magnetic-core memories, which worked faster and held more data than earlier technologies; a real-time operating system (a first); highly disciplined programming methods; overlapping computing and I/O operations; real-time transmission of data over telephone lines; use of CRT terminals and light pens (a first); redundancy and backup methods and components; and the highest reliability of computer systems (uptime) of the day.  It was the first geographically distributed, online, real-time application of digital computers in the world.  Because many of the technological innovations spun off from this project were ported over to new IBM computers in the second half of the 1950s by the same engineers who had worked on SAGE, the company was quickly able to build on lessons learned in how to design, manufacture and maintain complex systems.

Fascinating to be sure … the full article can be accessed at http://www.ibm.com/ibm100/us/en/icons/sage/

IBM at 100: The 1401 Mainframe

In my continuing series of IBM at 100, I turn to our data processing heritage with the IBM 1401 Data Processing System (which was long before my time).

While the IBM 1401 Data Processing System wasn’t a great leap in power or speed, that was never the point. “It was a utilitarian device, but one that users had an irrational affection for,” wrote Paul E. Ceruzzi in his book, A History of Modern Computing.

There were several keys to the popularity of the 1401 system. It was one of the first computers to run completely on transistors—not vacuum tubes—and that made it smaller and more durable. It rented for US$2500 per month, and was touted as the first affordable general-purpose computer. It was also the easiest machine to program at the time. The system’s software, wrote Dag Spicer, senior curator at the Computer History Museum, “was a big improvement in usability.”

This more accessible computer unleashed pent-up demand for data processing. IBM was shocked to receive 5200 orders for the 1401 computer in just the first five weeks after introducing it—more than was predicted for the entire life of the machine. Soon, business functions at companies that had been immune to automation were taken over by computers. By the mid-1960s, more than 10,000 1401 systems were installed, making it by far the best-selling computer to date.

More importantly, it marked a new generation of computing architecture, causing business executives and government officials to think differently about computing. A computer didn’t have to be a monolithic machine for the elite. It could fit comfortably in a medium-size company or lab. In the world’s top corporations, different departments could have their own computers.

A computer could even wind up operating on an army truck in the middle of a forest. “There was not a very good grasp or visualization of the potential impact of computers—certainly as we know them today—until the 1401 came along,” said Chuck Branscomb, who led the 1401 design team. The 1401 system made enterprises of all sizes believe a computer was useful, and even essential.

By the late 1950s, computers had experienced tremendous changes. Clients drove a desire for speed. Vacuum-tube electronics replaced the electro-mechanical mechanisms of the tabulating machines that dominated information processing in the first half of the century. First came the experimental ENIAC, then Remington Rand’s Univac and the IBM 701, all built on electronics. Magnetic tape and then the first disk drives changed ideas about the accessibility of information. Grace Hopper’s compiler and John Backus’s FORTRAN programming language gave computer experts new ways to instruct machines to do ever more clever and complex tasks. Systems that arose out of those coalescing developments were a monumental leap in computing capabilities.

Still, the machines touched few lives directly. Installed and working computers numbered barely more than 1000. The world, in fact, was ready for a more accessible computer.

The first glimpse of that next generation of computing turned up in an unexpected place: France. “In the mid-1950s, IBM got a wake-up call,” said Branscomb, who ran one of IBM’s lines of accounting machines at the time. French computer upstart Machines Bull came out with its Gamma computers, small and fast compared to goliaths like the IBM 700 series. “It was a competitive threat,” Branscomb recalled.

Bull made IBM and others realize that entities with smaller budgets wanted computers. IBM scrambled together resources to try to make a competing machine. “It was 1957 and IBM had no new machine in development,” Branscomb said. “It was a real problem.”

During June and July 1957, IBM engineers and planners gathered in Germany to propose several accounting machine designs.  The anticipated product of this seven-week conference was known thereafter as the Worldwide Accounting Machine (WWAM), although no particular design was decided upon.

In September 1957, Branscomb was assigned to run the WWAM project. In March 1958, after Thomas Watson, Jr. expressed dissatisfaction with the WWAM project in Europe, the Endicott proposal for a stored-program WWAM was given formal approval as the company’s approach to meeting the need for an electronic accounting machine. The newly assigned project culminated in the announcement of the 1401 Data Processing System (although, for a time it carried the acronym SPACE).

The IBM 1401 Data Processing System—comprising a variety of card and tape models with a range of core memory sizes, and configured for stand-alone use and peripheral service for larger computers—was announced in October 1959.

Branscomb’s group set a target rental cost of US$2500 per month, well below a 700 series machine, and hit it. They also decided the computer had to be simple to operate. “We knew it was time for a dramatic change, a discontinuity,” Branscomb added. And indeed it was. The 1401 system extended computing to a new level of organization and user, driving information technology deeper into everyday life.

The full article can be accessed at http://www.ibm.com/ibm100/us/en/icons/mainframe/

Watson and The Future of ECM

In the past, I have whipped out my ECM powered crystal ball to pontificate about the future of Enterprise Content Management.  These are always fun to write and share (see Top 10 ECM Pet Peeve Predictions for 2011  and Crystal Ball Gazing … Enterprise Content Management 2020).  This one is a little different though …  on the eve of the AIIM International Conference and Expo at info360, I find myself wondering … what are we going to do with all this new social content … all of these content based conversations in all of their various forms?

We’ve seen the rise of the Systems of Engagement concept and a number of new systems that enable social business.  We’re adopting new ways to work together leveraging technologies like collaborative content, wikis, communities, RSS and much more.  All of this new content being generated is text based and expressed in natural language.  I suggest you read AIIM’s report Systems of Engagement and the Future of Enterprise IT: A Sea Change in Enterprise for a perspective on the management aspects of the future of ECM.  It lays out how organizations must think about information management, control, and governance in order to deal with social technologies.

Social business is not just inside the firewall though.  Blogs, wikis and social network conversations are giving consumers and businesses a voice and power they’ve never had before … again based in text and expressed in natural language.  This is a big deal.  770 million people worldwide visited a social networking site last year (according to a comScore report titled Social Networking Phenomenon) … and amazingly, over 500 billion impressions annually are being made about products and services (according to a new book Empowered written by Josh Bernoff and Ted Schadler).

But what is buried in these text based natural language conversations?  There is an amazing amount of information trapped inside.  With all these conversations happening between colleagues, customers and partners … what can we learn from our customers about product quality, customer experience, price, value, service and more?  What can we learn from our internal conversations as well?  What is locked in these threads and related documents about strategy, projects, issues, risks and business outcomes?

We have to find out!  We have to put this information to work for us.

But guess what?  The old tools don’t work.  Data analysis is a powerful thing but don’t expect today’s business intelligence tools to understand language and threaded conversations.  When you analyze data … a 5 is always a 5.  You don’t have to understand what a 5 is or figure out what it means.  You just have to calculate it against other numeric indicators and metrics.

Content … and all of the related conversations aren’t numeric.  You must start by understanding what it all means, which is why understanding natural language is key.  Historically, computers have failed at this.  New tools and techniques are needed because content is a whole different challenge.  A very big challenge.  Think about it … a “5” represents a value, the same value, every single time.  There is no ambiguity.  In natural language, the word “premiere” could be a noun, verb or adjective.  It could be a title of a person, an action or the first night of a theatre play.  Natural language is full of ambiguity … it is nuanced and filled with contextual references.  Subtle meaning, irony, riddles, acronyms, idioms, abbreviations and other language complexities all present unique computing challenges not found with structured data.  This is precisely why IBM chose Jeopardy! as a way to showcase the Watson breakthrough.
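As a toy illustration of that contrast (my own hand-written example, not how Watson does it): a number compares directly, while a word like “premiere” only resolves to a single sense once its context is examined.

```python
# Toy contrast between structured data and natural language -- illustrative only.

# Structured data: a 5 is a 5; comparison needs no interpretation.
assert 5 == 5

# Natural language: the same token maps to several candidate senses,
# and only context narrows them down.  (Hand-written rules for illustration.)
SENSES = {"premiere": ["noun: first performance",
                       "verb: to present for the first time",
                       "adjective/title: leading, foremost"]}

def disambiguate(word, previous_word):
    candidates = SENSES.get(word, ["unknown"])
    if previous_word in {"the", "a"}:        # "the premiere" -> noun
        return candidates[0]
    if previous_word in {"will", "to"}:      # "will premiere" -> verb
        return candidates[1]
    return candidates                        # otherwise still ambiguous

print(disambiguate("premiere", "the"))       # noun: first performance
print(disambiguate("premiere", "will"))      # verb: to present for the first time
```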

IBM Watson (DeepQA) is the world’s most advanced question answering machine that uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, Watson represents the future of content and data management, analytics, and systems design.  IBM Watson leverages core content analysis, along with a number of other advanced technologies, to arrive at a single, precise answer within a very short period of time.  The business applications for this technology are limitless starting with clinical healthcare, customer care, government intelligence and beyond.

You can read some of my other blog postings on Watson (see “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!) … or better yet … if you want to know how Watson actually works, hear it live at my AIIM / info360 main stage session IBM Watson and the Impact on ECM this coming Wednesday 3/23 at 9:30 am.

BLOG UPDATE:  Here is a link to the slides used at the AIIM / info360 keynote.

Back to my crystal ball … my prediction is that natural language based computing and related analysis is the next big wave of computing and will shape the future of ECM.  Watson is an enabling breakthrough and is the start of something big.  With all this new information, we’ll want to understand what is being said, and why, in all of these conversations.  Most of all, we’ll want to leverage this newfound insight for business advantage.  One compelling and obvious example is being able to answer age-old customer questions like “Are our customers happy with us?”  “How happy?”  “Are they so happy we should try to sell them something else?” … or … “Are our customers unhappy?”  “Are they so unhappy we should offer them something to prevent churn?”  Understanding the customer trends and emerging opportunities across a large set of text based conversations (letters, calls, emails, web postings and more) is now possible.
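As a hedged sketch of the kind of aggregation this implies (the records, labels and numbers below are invented, and a real content analytics system would derive the sentiment and topic fields from the raw text rather than have them handed in):

```python
# Hypothetical sketch: once each customer communication has been classified,
# "Are our customers happy?" reduces to counting and trending by topic.
from collections import Counter

communications = [
    {"channel": "email", "topic": "billing", "sentiment": "negative"},
    {"channel": "call",  "topic": "billing", "sentiment": "negative"},
    {"channel": "web",   "topic": "support", "sentiment": "positive"},
    {"channel": "email", "topic": "support", "sentiment": "positive"},
    {"channel": "web",   "topic": "billing", "sentiment": "neutral"},
]

by_topic = {}
for c in communications:
    by_topic.setdefault(c["topic"], Counter())[c["sentiment"]] += 1

for topic, counts in by_topic.items():
    total = sum(counts.values())
    print(f"{topic}: {counts['negative'] / total:.0%} negative "
          f"({total} conversations)")
# billing: 67% negative (3 conversations)  <- a churn-risk signal worth acting on
```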

Who wouldn’t want to understand their customers, partners, constituents and employees better?  Beyond this, Watson will be applied to industries like healthcare to help doctors more effectively diagnose diseases, and this is just the beginning.  Organizations everywhere will want to unlock the insights trapped in their enterprise content and leverage all of these conversations … in ways we haven’t even thought of yet … but I’ll save that for the next time I use my ECM crystal ball.

As always … leave me your thoughts and ideas here and hope to see you Wednesday at The AIIM International Conference and Expo at info360 http://www.aiimexpo.com/.

IBM at 100: UPC … The Transformation of Retail

In my continuing series of IBM at 100 achievements … this is one of my favorites of all the ones I plan to republish here. The humble Universal Product Code (UPC), also known as the bar code, along with the related deployment of scanners, fundamentally changed many of the practices of retailers and all organizations that buy and move things, from large industrial equipment to pencils purchased in stationery stores. These two technologies led to the use of in-store information processing systems in almost every industry around the world, applied to millions of types of goods and items. UPC is planet Earth’s most pervasive inventory tracking tool.

N. Joseph Woodland, later an IBMer but then working at Drexel Institute of Technology, applied for the first patent on bar code technology on October 20, 1949, and along with Bernard Silver, received the patent on October 7, 1952. And there it sat for more than two decades. In those days there was no way to read the codes, until the laser became a practical tool. About 1970 at IBM Research Triangle Park, George Laurer went to work on how to scan labels and to develop a digitally readable code. Soon a team formed to address the issue, including Woodland. Their first try was a bull’s-eye bar code; nobody was happy with it because it took up too much space on a carton.

Meanwhile, the grocery industry in post-war America was adapting to the boom in suburban supermarkets–seeking to automate checkout at stores to increase speed, drive down the cost of hiring so many checkout clerks and systematize in-store inventory management. Beginning in the 1960s, various industry task forces went to work defining requirements and technical specifications. In time the industry issued a request to computer companies to submit proposals.

IBM’s team had also reworked its design, moving to the now familiar rows of bars, each containing multiple copies of data. Woodland, who had helped create the original bull’s-eye design, later worked on the bar code and wrote IBM’s response to the industry’s proposal. Another group of IBMers at the Rochester, Minnesota Laboratory built a prototype scanner using optics and lasers. In 1973, the grocery industry’s task force settled on a standard that very closely paralleled IBM’s approach. The industry wanted a standard that all grocers and their suppliers could use.

IBM was well positioned and became one of the earliest suppliers of scanning equipment to the supermarket world. On October 11, 1973, IBM became one of the earliest vendors to market with a system, called the IBM 3660. In time it became a workhorse in the industry. It included a point-of-sale terminal (digital cash register) and checkout scanner that could read the UPC symbol. The grocery industry compelled its suppliers of products in boxes and cans to start using the code, and IBM helped suppliers acquire the technology to work with the UPC.

On June 26, 1974, the first swipe was done at a Marsh’s supermarket in Troy, Ohio, which the industry had designated as a test facility. The first product swiped was a pack of Wrigley’s Juicy Fruit chewing gum, now on display at the Smithsonian’s National Museum of American History in Washington, D.C. Soon, grocery stores began adopting the new scanners, while customers were slowly educated on their accuracy in quoting prices.

If there had been any doubts about the new system’s prospects, they were gone by the end of the 1970s. The costs of checking out customers went down; the accuracy of transactions went up; checkouts sped up by some 40 percent; and in-store inventory systems dramatically improved management of goods on hand, on order or in need of replenishment. And that was just the beginning. An immediate byproduct was the ability of stores to start tracking the buying habits of customers in general and, later, down to the individual, scanning bar coded coupons and frequent shopper cards. In the four years between 1976 and 1980, the number of grocery stores using this technology jumped from 104 to 2,207, and they were spreading to other countries.

In the 1980s, IBM and its competitors introduced the new technology to other industries (including variations of the American standard bar codes that were adopted in Western Europe). And IBM Raleigh kept improving the technology. In December 1980, IBM introduced the 3687 scanner that used holographic technologies—one of the first commercial applications of this technology. In October 1987, the IBM 7636 Bar Code Scanner was introduced–and as a result, throughout the 1980s factories adopted the IBM bar code to track in-process inventory. Libraries used it to do the same with books. In the 1990s, hand-held scanners made it easier to apply bar codes to things beyond cartons and cans and to scan them, eventually using wireless technology. Meanwhile innovation expanded in the ability of a bar code to hold more information.

These technologies make it possible for all kinds of organizations, schools, universities and companies in all industries to leverage the power of computers to manage their inventories. In many countries, almost every item now purchased in a retail store has a UPC printed on it, and is scanned. UPC led to the retirement of the manual and electro-mechanical cash registers which, as a technology, had been around since the 1880s. By the early 2000s, bar code technologies had become a $17 billion business, scanned billions of times each day.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/upc/

Humans vs. Watson (Programmed by Humans): Who Has The Advantage?

DAY 3 UPDATE:  If you are a technology person, you had to be impressed.  We all know who won by now so I won’t belabor it.  Ken Jennings played better and made a game of it … at least for a while.  He seemed to anticipate the buzz a little bit better and got on a roll.

You may have noticed that Watson struggled in certain categories last night.  “Actors Who Direct” gave very short clues (or questions) like “The Great Debaters” for which the correct answer was “Who is Denzel Washington”.  For Watson, the longer the question, the better.  If it takes a longer time for Alex to read the question, Watson has more time to consider candidate answers, evidence scores and confidence rankings.  This is another reason why Watson does better in certain categories.  In an attempt to remain competitive in this situation, Watson has multiple ways to process clues or questions.  There is what is called the “short path” (to an answer).  This is used for shorter questions when Watson has less time to decide whether to buzz in or not.  Watson is more inconsistent when it has to answer faster.  As seen last night, he either chose not to answer or Ken and Brad beat him to it.

In the end, the margin of victory was decisive for Watson.  In total, $1.25 million was donated to charity and Ken and Brad took home parting gifts of $150,000 and $100,000 respectively … pretty good for all involved.  The real winners are science and technology.   This is a major advance in computing that could revolutionize the way we interact with computers … especially with questions and answers.  The commercial applications seem endless.

DAY 2 UPDATE:  Last night was compelling to watch.  I was at the Washington, DC viewing event with several hundred customers, partners and IBMers.  The atmosphere in the briefing center was electric.  When the game started with Watson taking command, the room erupted in cheers.  After Watson got on a roll, and steamrolled Brad and Ken for most of Double Jeopardy, the room began to grow silent in awe of what was happening. 

Erik Mueller (IBM Research) was our featured speaker.  He was bombarded … before, during and after the match with questions like “How does he know what to bet?”  “How does Watson process text?”  “How would this be used in medical research?”  “What books were in Watson’s knowledge base?”  “Can Watson hear?” “Does he have to press a button like the human contestants?” and many more.

I was there as a subject matter expert and even though the spotlight was rightfully on Erik, I did get to answer a question on how some of Watson’s technology is being used today.  I explained how IBM Content Analytics is used and how it helps power Watson’s natural language prowess.

When Watson incorrectly answered “What is Toronto????” in Final Jeopardy, the room audibly gasped (myself included).  As everyone seemed to hold their breath, I looked at Erik and he was smiling like a Cheshire cat … brimming with confidence.  The room cheered and applauded when Watson’s small bet was revealed … a seeming acknowledgement of its technological brilliance.  Applause for a wrong answer!

Afterwards, there were many ideas on how Watson could be applied.  My favorite was from a legal industry colleague who had a number of suggestions for how Watson could optimize document review and analysis that is currently a problem for judges and litigators.

Yesterday (below) I said the humans have a slight advantage.  And while Watson has built an impressive lead, I still feel that way.  Many of yesterday’s categories played to Watson’s fact based strengths.  It could go the other way tonight and Brad and Ken could get right back into the match.  The second game will air tonight in its entirety and the scores from both games will be combined to determine the $1 million prize winner.  Watson is entering tonight with a lead of more than $25,000.  IBM is donating all prize winnings to charity and Ken Jennings and Brad Rutter are donating 50% of their winnings to charity.

DAY 1 POST:  After Day 1, Watson is tied with Brad Rutter at $5,000 going into Double Jeopardy – which is pretty impressive.  Ken Jennings has yet to catch his stride.  Brad and Ken seemed a little shell shocked at first, but Brad rebounded right when Watson was faltering towards the end of the first round.  This got me to thinking I should go into a little more detail about who really has the advantage … Watson or the humans? 

If you watched it last night, you may have observed that Watson does very well with factual questions.  He did very well in the Beatles song category – they were mostly facts with contextual references to lyrics.  Answers that involve multiple facts, all of which are required for the correct response but are unlikely to be found in the same place, are much harder for Watson.  This is why Watson missed the Harry Potter question involving Lord Voldemort.  Watson also switched categories frequently, which is part of his game strategy.  You may have also noticed that Watson can’t see or hear.  He answered a question wrong even though Ken gave the same wrong answer seconds before.  More on this later in the post.

Here goes … my take on who has the advantage …

Question Understanding :  Advantage Humans

Humans:  Seemingly Effortless.  Almost instantly knows what is being asked, what is important and how it applies – very naturally gets focus, references, hints, puns, implications, etc.

Watson:  Hugely Challenging.  Has to be programmed to analyze enormous numbers of possibilities to get just a hint of the relevant meaning.  Very difficult due to variability, implicit context, ambiguity of structure and meaning in language.

Language Understanding:  Advantage Humans

Humans:  Seemingly Effortless.  Powerful, general, deep and fast in understanding language – reading, experiencing, summarizing, storing knowledge in natural language.  This information is written for human consumption so reading and understanding what it says is natural for humans.

Watson:  Hugely Challenging.  Answers need to be determined and justified in natural language sources like news articles, reference texts, plays, novels, etc.  Watson must be carefully programmed and automatically trained to deeply analyze even just tiny subsets of language effectively.  Very different from web search, must find a precise answer and understand enough of what it read to know if and why a possible answer may be correct.

Self‐Knowledge (Confidence):  Advantage Humans

Humans:  Seemingly Effortless.  Most often, and almost instantly, humans know if they know the answer.

Watson:  Hugely Challenging.  Thousands of algorithms run in parallel to find and analyze thousands of written texts for many different types of evidence.  The results are combined, scored and weighted for their relative importance – how much they justify a candidate answer.  All of this has to happen in about 3 seconds to compute a confidence and decide whether or not to ring in before it is too late.
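A minimal sketch of that weighting-and-thresholding idea follows (the scores, weights and threshold are invented placeholders, not Watson’s actual model):

```python
# Illustrative only: combining evidence scores into a confidence and
# deciding whether to ring in.  All numbers are made up.

def confidence(evidence_scores, weights):
    """Weighted combination of per-algorithm evidence scores (each 0..1)."""
    weighted = sum(s * w for s, w in zip(evidence_scores, weights))
    return weighted / sum(weights)

def should_buzz(conf, threshold=0.75, elapsed_seconds=2.4, budget_seconds=3.0):
    """Ring in only if confident enough and still inside the time budget."""
    return conf >= threshold and elapsed_seconds <= budget_seconds

scores  = [0.9, 0.6, 0.8, 0.4]    # e.g. passage match, answer-type check, ...
weights = [3.0, 1.0, 2.0, 0.5]    # assumed relative importance of each scorer
conf = confidence(scores, weights)
print(round(conf, 2), should_buzz(conf))   # 0.78 True
```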

Breadth of Knowledge:  Advantage Humans

Humans:  Limited by self-contained memory.  Estimates of human memory capacity (thousands of terabytes or more) are all much higher than Watson’s.  The ability to flexibly understand and summarize what is relevant means that humans’ effective capacity is even higher.

Watson:  Limited by self‐contained memory.  Roughly 1 Million books worth of content stored and processed in 15 Terabytes of working memory.  Weaker ability to meaningfully understand, relate and summarize human‐relevant content.  Must look at lots of data to compute statistical relevance.

Processing Speed:  Advantage Humans

Humans:  Fast Accurate Language Processing.  Native, strong, fast, language abilities.  Highly associative, highly flexible memory and speedy recall.  Very fast to speed read clue, accurately grasp question, determine confidence and answer – in just seconds. 

Watson:  Hugely Challenging.  On 1 CPU, Watson can take over 2 hours to answer a typical Jeopardy! question.  Watson must be parallelized, perhaps in ways similar to the brain, to simultaneously use thousands of compute cores and compete against humans in the 3-5 second range.

Reaction Speed:  Toss-up

Humans:  Times the Buzz.  Slower raw reaction speed but potentially faster to the buzz.  Listens to the clue and anticipates when to buzz in.  “Timing the buzz” like this provides humans with the fastest possible response time.

Watson:  Fast Hand.  More consistently delivers a fast reaction time, but only if and when it can determine high enough confidence in time to buzz in.  Not able to anticipate when to buzz in by listening to the clue, which gives the fastest possible response time to the humans.  Also has to press the same mechanical button as the human contestants do.

Compute Power:  Won’t Impact Outcome

Humans:  Requires 1 brain that fits in a shoebox, can run on a tuna‐fish sandwich and be cooled with a hand‐held paper fan.

Watson:  Hugely Challenging.  Needs 2,880 compute cores (10 refrigerators worth in size and space) requiring about 80Kw of power and 20 tons of cooling.

Betting and Strategy:  Advantage Watson

Humans:  Slower, typically less precise.  Uses strategy and adjusts based on situation and game position.

Watson: Faster, more accurate calculations.  Uses strategy and adjusts based on situation and game position.

Emotions:  Advantage Watson

Humans:  Yes.  Emotions can slow down and/or confuse processing.

Watson:  No. Does NOT get nervous, tired, upset or psyched out (but the Watson programming team does!).

In-Game Learning:  Advantage Humans

Humans:  Learn very quickly from context, voice expression and (most importantly) right and wrong answers.

Watson:  Watson does not have the ability to hear (speech to text).  It is my understanding that Watson is “fed” the correct answer (in text) after each question so he can learn about the category even if he gets it wrong or does not answer.  However, I don’t believe he is “fed” the wrong answers.  This is a disadvantage for Watson.  As seen last night, it is not uncommon for him to answer with the same wrong answer as another contestant.  This also happened in the sparring rounds leading up to the taping of last night’s show.

As you can see, things are closely matched, but a slight advantage has to go to Ken and Brad.

And what about Watson’s face?

Another observation I made was how cool Watson’s avatar was.  It actually expresses what he is thinking (or processing).  The Watson avatar shares the graphic structure and tonality of the IBM Smarter Planet marketing campaign; a global map projection with a halo of “thought rays.”  The avatar features dozens of differentiated animation states that mirror the many stages of Jeopardy! gameplay – from choosing categories and answering clues, to winning and losing, to making Daily Double wagers and playing Final Jeopardy!.  Even Watson’s level of confidence – the numeric threshold that determines whether or not Watson will buzz in to answer – is made visible.  Watson’s stage presence is designed to depict the interior processes of the advanced computing system that powers it.  A significant portion of the avatar consists of colored threads orbiting around a central core.  The threads and thought rays that make up Watson’s avatar change color and speed depending on what happens during the game.  For example, when Watson feels confident in an answer the rays on the avatar turn green; they turn orange when Watson gets the answer wrong.  You will see the avatar speed up and activate when Watson’s algorithms are working hard to answer a clue.

I’ll be glued to the TV tonight and tomorrow.  Regardless of the outcome, this whole experience has been fascinating to me … so much so that I just published a new podcast on ECM, Content Analytics and Watson.

You can also visit my previous blog postings on Watson at: IBM at 100: A Computer Called Watson, “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

IBM at 100: A Computer Called Watson

Watson is an efficient analytical engine that pulls many sources of data together in real-time, leverages natural language processing, discovers an insight, and deciphers a degree of confidence.

In my continuing series of IBM at 100 achievements, I saved the Watson achievement posting for today. In a historic event beginning tonight (February 2011), IBM’s Watson computer will compete on Jeopardy! against the TV quiz show’s two biggest all-time champions. Watson is a supercomputer running software called DeepQA, developed by IBM Research. While the grand challenge driving the project is to win on Jeopardy!, the broader goal of Watson was to create a new generation of technology that can find answers in unstructured data more effectively than standard search technology.

Watson does a remarkable job of understanding a tricky question and finding the best answer. IBM’s scientists have been quick to say that Watson does not actually think. “The goal is not to model the human brain,” said David Ferrucci, who spent 15 years working at IBM Research on natural language problems and finding answers amid unstructured information. “The goal is to build a computer that can be more effective in understanding and interacting in natural language, but not necessarily the same way humans do it.”

Computers have never been good at finding answers. Search engines don’t answer a question–they deliver thousands of search results that match keywords. University researchers and company engineers have long worked on question answering software, but the very best could only comprehend and answer simple, straightforward questions (How many Oscars did Elizabeth Taylor win?) and would typically still get them wrong nearly one third of the time. That wasn’t good enough to be useful, much less beat Jeopardy! champions.

The questions on this show are full of subtlety, puns and wordplay—the sorts of things that delight humans but choke computers. “What is The Black Death of a Salesman?” is the correct response to the Jeopardy! clue, “Colorful fourteenth century plague that became a hit play by Arthur Miller.” The only way to get to that answer is to put together pieces of information from various sources, because the exact answer is not likely to be written anywhere.

Watson leverages IBM Content Analytics for part of the natural language processing. Watson runs on a cluster of PowerPC 750™ computers—ten racks holding 90 servers, for a total of 2880 processor cores. It’s really a room lined with black cabinets stuffed with hundreds of thousands of processors plus storage systems that can hold the equivalent of about one million books worth of information. Over a period of years, Watson was fed mountains of information, including text from commercial sources, such as the World Book Encyclopedia, and sources that allow open copying of their content, such as Wikipedia and books from Project Gutenberg.  Learn more about the technology under the covers on my previous posting 10 Things You Need to Know About the Technology Behind Watson.

When a question is put to Watson, more than 100 algorithms analyze the question in different ways, and find many different plausible answers–all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers it finds hundreds of bits of evidence and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment earns the most confidence and becomes Watson’s response. However, during a Jeopardy! game, if the highest-ranking possible answer isn’t rated high enough to give Watson enough confidence, Watson decides not to buzz in and risk losing money if it’s wrong. The Watson computer does all of this in about three seconds.
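Very loosely, that flow can be sketched like this (every function, score and threshold here is a placeholder of mine, not DeepQA internals); the real system differs in almost every detail, but the shape … generate candidates, gather evidence, score, rank, compare confidence to a threshold … is the part described above.

```python
# Loose structural sketch of the question-answering flow described above.
# Everything here (names, scores, threshold) is a placeholder, not DeepQA code.

def generate_candidates(question):
    # In reality: 100+ analyses over parsed text and many sources, in parallel.
    return ["Denzel Washington", "Arthur Miller", "Toronto"]

def gather_evidence(question, candidate):
    # In reality: retrieve passages that support or refute the candidate.
    return [0.82, 0.40, 0.91] if candidate == "Denzel Washington" else [0.3, 0.2]

def score(evidence):
    # In reality: hundreds of scorers whose outputs are weighted by a trained model.
    return sum(evidence) / len(evidence)

def answer(question, buzz_threshold=0.6):
    ranked = sorted(
        ((score(gather_evidence(question, c)), c)
         for c in generate_candidates(question)),
        reverse=True)
    best_score, best = ranked[0]
    if best_score < buzz_threshold:
        return None, round(best_score, 2)    # not confident enough: don't buzz in
    return best, round(best_score, 2)

print(answer('"The Great Debaters"'))        # ('Denzel Washington', 0.71)
```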

By late 2010, in practice games at IBM Research in Yorktown Heights, N.Y., Watson was good enough at finding the correct answers to win about 70 percent of games against former Jeopardy! champions. Then in early 2011, Watson went up against Jeopardy! superstars Ken Jennings and Brad Rutter.

Watson’s question-answering technology is expected to evolve into a commercial product. “I want to create something that I can take into every other retail industry, in the transportation industry, you name it,” John Kelly, who runs IBM Research, told The New York Times. “Any place where time is critical and you need to get advanced state-of-the-art information to the front decision-makers. Computers need to go from just being back-office calculating machines to improving the intelligence of people making decisions.”

When you’re looking for an answer to a question, where do you turn? If you’re like most people these days, you go to a computer, phone or mobile device, and type your question into a search engine. You’re rewarded with a list of links to websites where you might find your answer. If that doesn’t work, you revise your search terms until you find the answer. We’ve come a long way since the time of phone calls and visits to the library to find answers.

But what if you could just ask your computer the question, and get an actual answer rather than a list of documents or websites? Question answering (QA) computing systems are being developed to understand simple questions posed in natural language, and provide the answers in textual form. You ask “What is the capital of Russia?” The computer answers “Moscow,” based on the information that has been loaded into it.

IBM is taking this one step further, developing the Watson computer to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately demonstrate confidence to deliver precise final answers. Because of its deeper understanding of language, it can process and answer more complex questions that include puns, irony and riddles common in natural language. On February 14–16, 2011, IBM’s Watson computer will be put to the test, competing in three episodes of Jeopardy! against the two most successful players in the quiz show’s history: Ken Jennings and Brad Rutter.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/watson/

As for me … I am anxiously waiting to see what happens starting tonight.  See my previous blog postings on Watson at:  “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Good luck tonight to Watson, Ken Jennings and Brad Rutter … may the best man win (so to speak)!