How To Quickly Tell If You Have An Innovation Problem

At a recent speaking engagement, I was asked if there was a quick way to tell if an organization has an innovation problem.  The organization in question had a long and proud innovation track record … and had been meeting its revenue and cost objectives.  On the surface, all seemed to be in order … but that was not the case.

As I pondered the question, my brain quickly rifled through various best practices for analyzing product lines and portfolios, including the Boston Consulting Group Growth Share Matrix, first published in 1970 by BCG founder Bruce Henderson.  The matrix is based on the clever use of question marks, cash cows, dogs and stars as a way to stratify a given portfolio … and to help allocate resources based on two factors (company competitiveness and market attractiveness).  While over 40 years old, the model and methodology remain viable and are still widely used.

There are other approaches such as the Deloitte Consulting Growth Framework … but my preference is the McKinsey 3 Horizons of Growth.

The McKinsey model has also stood the test of time and is more intuitive (at least to me). It addresses a fuller spectrum of portfolio analysis issues and breaks down as follows:

  • Horizon 1 – Extend and defend core businesses.
  • Horizon 2 – Build emerging businesses.
  • Horizon 3 – Create viable options.

It is based on the traditional “S” curve adoption and growth principle but asserts that at a key point on the adoption curve, new innovation (and investment) is needed to enable the future horizons of growth indicated above.  Each horizon ensures future waves of new revenue growth and continued innovation.  In all, three horizons are needed, and each requires a different approach, people, skills and management method.  As you might suspect, each successive horizon is also increasingly intrapreneurial. Most importantly, you need to manage all three horizons concurrently … even though they are based on different principles:

  • Horizon 1 – This is typically a fully capable or mature offering / platform that is being managed by “business maintainers” using traditional performance, operational and profit metrics such as return on invested capital (ROIC).
  • Horizon 2 – These are typically new capabilities being built out or acquired in emerging business scenarios by “business builders” based on growth aspirations, using metrics such as net present value (NPV).  This stage is well past the experimentation phase, has early adopters and is expected to show scalable growth in the near future … followed by profit soon thereafter.  The Crossing The Chasm model by Geoffrey Moore comes to mind for me.
  • Horizon 3 – This is the experimentation phase, where requirements may be unclear.  It needs to be led by “evangelists or visionaries” and governed by validation and iteration metrics such as the number of interviews, feedback sessions, iterations or other early-stage indicators of progress. It is typified by prototypes, market validation, agile development and directional pivots.  The Lean Startup concept by Eric Ries comes to mind.

Upon investigation, the balance of investment (for the company in question) was weighted far too heavily toward near-term (or proven) revenue-performing offerings (Horizon 1) and not enough toward longer-term growth options (Horizons 2 and 3).  In light of conservative spending by most companies coming out of the recession, this was not an unexpected finding.  The tendency in business for the past few years has been to focus on short-term initiatives … sometimes at the expense of ensuring future growth options.

A simple mapping of your own portfolio of offerings to the three horizons may be just as revealing as it was in this case.

Most organizations should strive for roughly 70% of investment on Horizon 1 offerings, 20% on Horizon 2 and as much as 10% on Horizon 3.  These percentages may vary from company to company … and industry to industry … but represent a reasonable benchmark for any organization to evaluate itself against.
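To make this concrete, here is a minimal sketch of such a mapping in Python.  The offerings and dollar figures are entirely hypothetical, and the 70/20/10 targets are just the rough benchmark above; the point is only to show how quickly an imbalance like the one described earlier surfaces once you tally spend by horizon.

```python
# Minimal sketch: map a (hypothetical) portfolio to the three horizons
# and compare the investment split against the rough 70/20/10 benchmark.

portfolio = {
    "Flagship platform":     {"horizon": 1, "investment": 40.0},  # $M, mature core
    "Legacy add-on":         {"horizon": 1, "investment": 32.0},
    "Emerging cloud bundle": {"horizon": 2, "investment": 5.0},   # business builder
    "Early-stage prototype": {"horizon": 3, "investment": 1.5},   # experimentation
}

targets = {1: 0.70, 2: 0.20, 3: 0.10}  # rough benchmark from the post

total = sum(o["investment"] for o in portfolio.values())
for horizon, target in targets.items():
    actual = sum(o["investment"] for o in portfolio.values()
                 if o["horizon"] == horizon) / total
    flag = "OK" if abs(actual - target) <= 0.05 else "REVIEW"
    print(f"Horizon {horizon}: {actual:.0%} of spend (target {target:.0%}) -> {flag}")
```

Run against these made-up numbers, the output flags Horizon 1 at over 90% of spend … exactly the kind of imbalance the company in question had.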

When was the last time you evaluated your offerings using a method like this? If you don’t know the answer, or if it has been longer than 12 months, then invest the time to do this.  Your future could literally depend on it.

I feel good that I was able to steer them in the right direction using such a proven method for managing innovation.  My innovation initiatives are progressing.  The AIPMM webinar replay of Intrapreneurship: Tackling The Challenges of Bringing New Innovation to Market is available in case you missed the live event, and The First Annual Intrapreneurship Benchmark Survey on Commercializing Innovation remains open through June 30, 2014.  I plan to compile, analyze and publish the survey findings in Q3.

As always, leave me your thoughts and opinions here.

Help Stop the Senseless Killing of Important Innovation

The great business management philosopher Yogi Berra once said, “If you don’t know where you are going, you’ll end up someplace else.”  This is how I feel about the current state of business innovation.  New innovation usually starts out on the right path but often ends up in the ditch.

Innovation is clearly a key driver of organic growth for all companies.  67% of the most innovative companies say innovation is a competitive necessity regardless of sector or geography.  Leading innovators have grown 16% faster than the least innovative companies (1).  It’s no wonder that 91% of companies believe innovation is a top strategic priority (2).

It turns out that many organizations actually struggle to bring organic innovation to market.  Even though 84% of executives say innovation is extremely or very important to their companies’ growth strategy, only 39% say their companies are good at commercializing new products or services (3).

Connecting innovation programs (and the new inventions they produce) to what happens later – market adoption and revenue growth – isn’t possible without attending to what happens in between.  My theory is that there is a critical gap … one that is unnecessarily killing many promising innovations.

Not enough is known about the reasons why so many organizations struggle to bring their own organic innovation to market.

With that in mind, I have decided to create The First Annual Intrapreneurship Benchmark Survey on Commercializing Innovation.  The survey is available immediately and can be accessed at https://www.surveymonkey.com/s/intrapreneurshipbenchmark.  It will collect data to establish new benchmarks.  Once collected, the data will be analyzed and included in a written report.  The full report will be made available to survey respondents.  Highlights will also be published later in 2014 in selected business publications as well as on this website.

You can help stop the senseless killing of important innovation too.  After all, “When you come to a fork in the road, take it” – also Yogi Berra.

Take this fork with me.  Seriously, please help me by taking a few minutes of your time to share your experiences in this area through the survey.

As always, leave me your thoughts and comments below.

Footnotes:

(1) PwC report, “Breakthrough Innovation and Growth,” September 2013

(2) GE report, “Global Innovation Barometer,” January 2013

(3) McKinsey & Company report, “Innovation and Commercialization,” 2010

(4) Ernst & Young report, “Igniting Innovation: How Hot Companies Fuel Growth from Within,” 2010


IBM at 100: SAGE, The First National Air Defense Network

This week was a reminder of how technology can aid in our nation’s defense as we struck a major blow against terrorism.  Most people don’t realize the many ways IBM has contributed to our nation’s defense.  Here is just one example, from 1949.

When the Soviet Union detonated its first atomic bomb on August 29, 1949, the United States government concluded that it needed a real-time, state-of-the-art air defense system.  It turned to the Massachusetts Institute of Technology (MIT), which in turn recruited companies and other organizations to design what would be an online system covering all of North America, using many technologies, a number of which did not exist yet.  Could it be done?  It had to be done.  Such a system had to observe, evaluate and communicate incoming threats much the way a modern air traffic control system monitors flights of aircraft.

This marked the beginning of SAGE (Semi-Automatic Ground Environment), the national air defense system implemented by the United States to warn of and intercept airborne attacks during the Cold War.  The heart of this digital system—the AN/FSQ-7 computer—was developed, built and maintained by IBM.  SAGE was the largest computer project in the world during the 1950s and took IBM squarely into the new world of computing.  Between 1952 and 1955, it generated 80 percent of IBM’s revenues from computers, and by 1958, more than 7,000 IBMers were involved in the project.  SAGE spun off a large number of technological innovations that IBM incorporated into other computer products.

IBM’s John McPherson led the early conversations with MIT, and senior management quickly realized that this could be one of the largest data processing opportunities since winning the Social Security bid in the mid-1930s.  Thomas Watson, Jr., then lobbying his father and other senior executives to move into the computer market quickly, recalled in his memoirs that he wanted to “pull out all the stops” to be a central player in the project.  “I worked harder to win that contract than I worked for any other sale in my life.”  So did a lot of other IBMers: engineers designing components, then the computer; sales staff pricing the equipment and negotiating contracts; senior management persuading MIT that IBM was the company to work with; other employees collaborating with scores of companies, academics and military personnel to get the project up and running; and yet others who installed, ran and maintained the IBM systems for SAGE for a quarter century.

The online features of the system demonstrated that a new world of computing was possible—and that, in the 1950s, IBM knew the most about this kind of data processing.  As the ability to develop reliable online systems became a reality, other government agencies and private companies began talking to IBM about possible online systems for them.  Some of those projects transpired in parallel, such as the development of the Semi-Automated Business Research Environment (Sabre), American Airlines’ online reservation system, also built using IBM staff located in Poughkeepsie, New York.

In 1952, MIT selected IBM to build the computer to be the heart of SAGE. MIT’s project leader, Jay W. Forrester, reported later that the company was chosen because “in the IBM organization we observed a much higher degree of purposefulness, integration and ‘esprit de corps’ than in other firms” and because of “evidence of much closer ties between research, factory and field maintenance at IBM.”  The technical skills to do the job were also there, thanks to prior experience building advanced electronics for the military.

IBM quickly ramped up, assigning about 300 full-time IBMers to the project by the end of 1953. Work was centered in IBM’s Poughkeepsie and Kingston, NY facilities and in Cambridge, Massachusetts, home of MIT.  New memory systems were needed; MITRE and the Systems Development Corporation (part of RAND Corporation) wrote software, and other vendors supplied components.  In June 1956, IBM delivered the prototype of the computer to be used in SAGE.  The press release called it an “electronic brain.”  It could automatically calculate the most effective use of missiles and aircraft to fend off attack, while providing the military commander with a view of an air battle. Although this seems routine in today’s world, it was an enormous leap forward in computing.  When fully deployed in 1963, SAGE included 23 centers, each with its own AN/FSQ-7 system, which really consisted of two machines (one for backup), both operating in coordination.  Ultimately, 54 systems were installed, all collaborating with each other. The SAGE system remained in service until January 1984, when it was replaced with a next-generation air defense network.

Its innovative technological contributions to IBM and the IT industry as a whole were significant.  These included magnetic-core memories, which worked faster and held more data than earlier technologies; a real-time operating system (a first); highly disciplined programming methods; overlapping computing and I/O operations; real-time transmission of data over telephone lines; use of CRT terminals and light pens (a first); redundancy and backup methods and components; and the highest reliability of computer systems (uptime) of the day.  It was the first geographically distributed, online, real-time application of digital computers in the world.  Because many of the technological innovations spun off from this project were ported over to new IBM computers in the second half of the 1950s by the same engineers who had worked on SAGE, the company was quickly able to build on lessons learned in how to design, manufacture and maintain complex systems.

Fascinating to be sure … the full article can be accessed at http://www.ibm.com/ibm100/us/en/icons/sage/

Humans vs. Watson (Programmed by Humans): Who Has The Advantage?

DAY 3 UPDATE:  If you are a technology person, you had to be impressed.  We all know who won by now so I won’t belabor it.  Ken Jennings played better and made a game of it … at least for a while.  He seemed to anticipate the buzz a little bit better and got on a roll.

You may have noticed that Watson struggled in certain categories last night.  “Actors Who Direct” gave very short clues like “The Great Debaters,” for which the correct response was “Who is Denzel Washington?”  For Watson, the longer the clue, the better.  If it takes Alex longer to read the question, Watson has more time to consider candidate answers, evidence scores and confidence rankings.  This is another reason why Watson does better in certain categories.  To remain competitive in this situation, Watson has multiple ways to process clues.  There is what is called the “short path” (to an answer), used for shorter clues when Watson has less time to decide whether to buzz in or not.  Watson is more inconsistent when it has to answer faster.  As seen last night, he either chose not to answer or Ken and Brad beat him to it.  The toy sketch below illustrates the trade-off.
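Nothing about Watson’s internal routing has been published in this level of detail, so the following Python toy is purely illustrative – the function names, thresholds and random scores are all invented.  It just captures the trade-off described above: a short clue leaves less reading time, so a shallower analysis path runs and the system buzzes in only if confidence clears the bar.

```python
import random

def shallow_analysis(clue):
    # "Short path": fewer algorithms, noisier confidence (illustrative stub).
    return [(f"guess for: {clue}", random.uniform(0.2, 0.8))]

def deep_analysis(clue, seconds):
    # "Long path": more candidates and evidence scoring (illustrative stub).
    return [(f"candidate {i} for: {clue}", random.uniform(0.4, 0.95))
            for i in range(int(seconds * 10))]

def answer_clue(clue, threshold=0.5):
    reading_time = 0.25 * len(clue.split())   # rough seconds of host reading
    if reading_time < 2.0:                    # short clue -> short path
        candidates = shallow_analysis(clue)
    else:                                     # long clue -> full pipeline
        candidates = deep_analysis(clue, reading_time)
    best, confidence = max(candidates, key=lambda c: c[1])
    return best if confidence >= threshold else None   # None = don't buzz in

print(answer_clue("The Great Debaters"))      # short clue: often stays silent
```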

In the end, the margin of victory was decisive for Watson.  In total, $1.25 million was donated to charity, and Ken and Brad took home parting gifts of $150,000 and $100,000 respectively … pretty good for all involved.  The real winners are science and technology.  This is a major advance in computing that could revolutionize the way we interact with computers … especially through questions and answers.  The commercial applications seem endless.

DAY 2 UPDATE:  Last night was compelling to watch.  I was at the Washington, DC viewing event with several hundred customers, partners and IBMers.  The atmosphere in the briefing center was electric.  When the game started with Watson taking command, the room erupted in cheers.  After Watson got on a roll, and steamrolled Brad and Ken for most of Double Jeopardy, the room began to grow silent in awe of what was happening. 

Erik Mueller (IBM Research) was our featured speaker.  He was bombarded … before, during and after the match … with questions like “How does he know what to bet?”, “How does Watson process text?”, “How would this be used in medical research?”, “What books were in Watson’s knowledge base?”, “Can Watson hear?”, “Does he have to press a button like the human contestants?” and many more.

I was there as a subject matter expert, and even though the spotlight was rightfully on Erik, I did get to answer a question on how some of Watson’s technology is being used today.  I explained how IBM Content Analytics is used and how it helps power Watson’s natural language prowess.

When Watson incorrectly answered “What is Toronto????” in Final Jeopardy, the room audibly gasped (myself included).  As everyone seemed to hold their breath, I looked at Erik and he was smiling like a Cheshire cat … brimming with confidence.  The room cheered and applauded when Watson’s small bet was revealed … a seeming acknowledgement of the technological brilliance.  Applause for a wrong answer!

Afterwards, there were many ideas on how Watson could be applied.  My favorite was from a legal industry colleague who had a number of suggestions for how Watson could optimize the document review and analysis that are currently a problem for judges and litigators.

Yesterday (below) I said the humans have a slight advantage.  And while Watson has built an impressive lead, I still feel that way.  Many of yesterday’s categories played to Watson’s fact-based strengths.  It could go the other way tonight, and Brad and Ken could get right back into the match.  The second game will air tonight in its entirety, and the scores from both games will be combined to determine the $1 million prize winner.  Watson enters tonight with a more than $25,000 lead.  IBM is donating all of its prize winnings to charity, and Ken Jennings and Brad Rutter are donating 50% of theirs.

DAY 1 POST:  After Day 1, Watson is tied with Brad Rutter at $5,000 going into Double Jeopardy – which is pretty impressive.  Ken Jennings has yet to catch his stride.  Brad and Ken seemed a little shell-shocked at first, but Brad rebounded right when Watson was faltering towards the end of the first round.  This got me thinking I should go into a little more detail about who really has the advantage … Watson or the humans?

If you watched last night, you may have observed that Watson does very well with factual questions.  He did very well in the Beatles song category – the clues were mostly facts with contextual references to lyrics.  Answers that involve multiple facts, all of which are required to arrive at the correct response but are unlikely to be found in the same place, are much harder for Watson.  This is why Watson missed the Harry Potter question involving Lord Voldemort.  Watson also switched categories frequently, which is part of his game strategy.  You may have also noticed that Watson can’t see or hear.  He answered a question wrong even though Ken gave the same wrong answer seconds before.  More on this later in the post.

Here goes … my take on who has the advantage …

Question Understanding:  Advantage Humans

Humans:  Seemingly Effortless.  Almost instantly knows what is being asked, what is important and how it applies – very naturally gets focus, references, hints, puns, implications, etc.

Watson:  Hugely Challenging.  Has to be programmed to analyze enormous numbers of possibilities to get just a hint of the relevant meaning.  Very difficult due to variability, implicit context, and ambiguity of structure and meaning in language.

Language Understanding:  Advantage Humans

Humans:  Seemingly Effortless.  Powerful, general, deep and fast in understanding language – reading, experiencing, summarizing, storing knowledge in natural language.  This information is written for human consumption so reading and understanding what it says is natural for humans.

Watson:  Hugely Challenging.  Answers need to be determined and justified in natural language sources like news articles, reference texts, plays, novels, etc.  Watson must be carefully programmed and automatically trained to deeply analyze even just tiny subsets of language effectively.  Very different from web search: it must find a precise answer and understand enough of what it reads to know if and why a possible answer may be correct.

Self‐Knowledge (Confidence):  Advantage Humans

Humans:  Seemingly Effortless.  Most often, and almost instantly, humans know if they know the answer.

Watson:  Hugely Challenging.  1000’s of algorithms run in parallel to find and analyze 1000’s of written texts for many different types of evidence.  The results are combined, scored and weighed for their relative importance – how much they justify a candidate answer.  This has to happen in about 3 seconds to compute a confidence and decide whether or not to buzz in before it is too late.
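As a back-of-envelope illustration of that combining step, here is a tiny Python sketch.  The scorers, weights and numbers are invented (DeepQA’s real merging model was machine-learned over far more signals), but it shows how per-algorithm evidence scores roll up into one confidence per candidate, which then drives the buzz decision.

```python
def combined_confidence(evidence_scores, weights):
    # Weighted average of per-algorithm evidence scores for one candidate.
    return sum(s * w for s, w in zip(evidence_scores, weights)) / sum(weights)

weights = [0.5, 0.3, 0.2]                  # relative importance of each scorer
candidates = {                             # made-up evidence scores
    "Denzel Washington": [0.9, 0.8, 0.7],
    "Spike Lee":         [0.4, 0.5, 0.3],
}

best = max(candidates, key=lambda a: combined_confidence(candidates[a], weights))
confidence = combined_confidence(candidates[best], weights)
print(best, round(confidence, 2), "BUZZ" if confidence > 0.6 else "PASS")
```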

Breadth of Knowledge:  Advantage Humans

Humans:  Limited by self-contained memory.  Estimates of human memory capacity run to 1000’s of terabytes – far more than Watson’s.  The ability to flexibly understand and summarize what is relevant means that humans’ raw input capacity is higher still.

Watson:  Limited by self‐contained memory.  Roughly 1 million books’ worth of content stored and processed in 15 terabytes of working memory.  Weaker ability to meaningfully understand, relate and summarize human‐relevant content.  Must look at lots of data to compute statistical relevance.

Processing Speed:  Advantage Humans

Humans:  Fast Accurate Language Processing.  Native, strong, fast language abilities.  Highly associative, highly flexible memory and speedy recall.  Very fast to speed-read the clue, accurately grasp the question, determine confidence and answer – all in just seconds.

Watson:  Hugely Challenging.  On 1 CPU, Watson can take over 2 hours to answer a typical Jeopardy! question.  Watson must be parallelized, perhaps in ways similar to the brain, to simultaneously use 1000’s of compute cores to compete against humans in the 3-5 second range.  (As a rough check: 2 hours is about 7,200 CPU-seconds, and 7,200 divided across 2,880 cores is roughly 2.5 seconds – assuming near-perfect parallel scaling.)

Reaction Speed:  Toss-up

Humans:  Times the Buzz.  Slower raw reaction speed but potentially faster to the buzz.  Listens to the clue and anticipates when to buzz in.  “Timing the buzz” like this provides humans with the fastest absolute possible response time.

Watson:  Fast Hand.  More consistently delivers a fast reaction time, but ONLY IF and WHEN it can determine high enough confidence in time to buzz in.  Not able to anticipate when to buzz in based on listening to the clue, which gives the fastest possible response time to humans.  Also has to press the same mechanical button as the humans do.

Compute Power:  Won’t Impact Outcome

Humans:  Requires 1 brain that fits in a shoebox, can run on a tuna‐fish sandwich and be cooled with a hand‐held paper fan.

Watson:  Hugely Challenging.  Needs 2,880 compute cores (10 refrigerators’ worth in size and space) requiring about 80 kW of power and 20 tons of cooling.

Betting and Strategy:  Advantage Watson

Humans:  Slower, typically less precise.  Uses strategy and adjusts based on situation and game position.

Watson: Faster, more accurate calculations.  Uses strategy and adjusts based on situation and game position.

Emotions:  Advantage Watson

Humans:  Yes.  Can slow down and/or confuse processing.

Watson:  No. Does NOT get nervous, tired, upset or psyched out (but the Watson programming team does!).

In-Game Learning:  Advantage Humans

Humans:  Learn very quickly from context, voice expression and (most importantly) right and wrong answers.

Watson:  Watson does not have the ability to hear (speech to text).  It is my understanding that Watson is “fed” the correct answer (in text) after each question so he can learn about the category even if he gets it wrong or does not answer.  However, I don’t believe he is “fed” the other contestants’ wrong answers.  This is a disadvantage for Watson.  As seen last night, it is not uncommon for him to repeat the same wrong answer as another contestant.  This also happened in the sparring rounds leading up to the taping of last night’s show.

As you can see, things are closely matched, but a slight advantage has to go to Ken and Brad.

And what about Watson’s face?

Another observation I made was how cool Watson’s avatar was.  It actually expresses what he is thinking (or processing).  The Watson avatar shares the graphic structure and tonality of the IBM Smarter Planet marketing campaign: a global map projection with a halo of “thought rays.”  The avatar features dozens of differentiated animation states that mirror the many stages of Jeopardy! gameplay – from choosing categories and answering clues, to winning and losing, to making Daily Double wagers and playing Final Jeopardy!.  Even Watson’s level of confidence – the numeric threshold that determines whether or not Watson will buzz in to answer – is made visible.  Watson’s stage presence is designed to depict the interior processes of the advanced computing system that powers it.  A significant portion of the avatar consists of colored threads orbiting around a central core.  The threads and thought rays that make up Watson’s avatar change color and speed depending on what happens during the game.  For example, when Watson feels confident in an answer, the rays on the avatar turn green; they turn orange when Watson gets the answer wrong.  You will see the avatar speed up and activate when Watson’s algorithms are working hard to answer a clue.

I’ll be glued to the TV tonight and tomorrow.  Regardless of the outcome, this whole experience has been fascinating to me … so much so that I just published a new podcast on ECM, Content Analytics and Watson.

You can also visit my previous blog postings on Watson: IBM at 100: A Computer Called Watson, “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

IBM at 100: A Computer Called Watson

Watson is an efficient analytical engine that pulls many sources of data together in real time, leverages natural language processing, discovers insights, and computes a degree of confidence in its answers.

In my continuing series of IBM at 100 achievements, I saved the Watson achievement posting for today. In a historic event beginning tonight (February 2011), IBM’s Watson computer will compete on Jeopardy! against the TV quiz show’s two biggest all-time champions. Watson is a supercomputer running software called DeepQA, developed by IBM Research. While the grand challenge driving the project is to win on Jeopardy!, the broader goal of Watson was to create a new generation of technology that can find answers in unstructured data more effectively than standard search technology.

Watson does a remarkable job of understanding a tricky question and finding the best answer. IBM’s scientists have been quick to say that Watson does not actually think. “The goal is not to model the human brain,” said David Ferrucci, who spent 15 years working at IBM Research on natural language problems and finding answers amid unstructured information. “The goal is to build a computer that can be more effective in understanding and interacting in natural language, but not necessarily the same way humans do it.”

Computers have never been good at finding answers. Search engines don’t answer a question–they deliver thousands of search results that match keywords. University researchers and company engineers have long worked on question answering software, but the very best could only comprehend and answer simple, straightforward questions (How many Oscars did Elizabeth Taylor win?) and would typically still get them wrong nearly one third of the time. That wasn’t good enough to be useful, much less beat Jeopardy! champions.

The questions on this show are full of subtlety, puns and wordplay—the sorts of things that delight humans but choke computers. “What is The Black Death of a Salesman?” is the correct response to the Jeopardy! clue, “Colorful fourteenth century plague that became a hit play by Arthur Miller.” The only way to get to that answer is to put together pieces of information from various sources, because the exact answer is not likely to be written anywhere.

Watson leverages IBM Content Analytics for part of the natural language processing. Watson runs on a cluster of PowerPC 750™ computers—ten racks holding 90 servers, for a total of 2,880 processor cores. It’s really a room lined with black cabinets stuffed with processors plus storage systems that can hold the equivalent of about one million books’ worth of information. Over a period of years, Watson was fed mountains of information, including text from commercial sources, such as the World Book Encyclopedia, and sources that allow open copying of their content, such as Wikipedia and books from Project Gutenberg.  Learn more about the technology under the covers in my previous posting, 10 Things You Need to Know About the Technology Behind Watson.

When a question is put to Watson, more than 100 algorithms analyze the question in different ways and find many different plausible answers–all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers, it finds hundreds of bits of evidence, and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment earns the most confidence and becomes the top-ranked answer. However, during a Jeopardy! game, if the top-ranked answer isn’t rated high enough to give Watson sufficient confidence, Watson decides not to buzz in and risk losing money if it’s wrong. The Watson computer does all of this in about three seconds.
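The paragraph above describes, in effect, a generate-score-rank-threshold pipeline. Here is a structural sketch in Python with deliberately toy stand-ins – the analyzer and scorer functions are hypothetical placeholders, not DeepQA components – just to make the shape of that three-second loop concrete.

```python
def answer_question(clue, analyzers, evidence_scorers, threshold=0.5):
    # 1. Many algorithms propose plausible candidate answers in parallel.
    candidates = set()
    for analyze in analyzers:
        candidates.update(analyze(clue))

    # 2. For each candidate, score how well the gathered evidence supports it.
    ranked = []
    for candidate in candidates:
        scores = [score(clue, candidate) for score in evidence_scorers]
        confidence = sum(scores) / len(scores)   # stand-in for learned merging
        ranked.append((confidence, candidate))

    # 3. Best-supported answer wins - but only buzz if confidence is high enough.
    confidence, best = max(ranked)
    return best if confidence >= threshold else None

# Toy usage with two trivial "analyzers" and one trivial "scorer":
capitalized = lambda clue: [w for w in clue.split() if w.istitle()]
last_word   = lambda clue: [clue.split()[-1]]
length_bias = lambda clue, cand: min(len(cand) / 10, 1.0)
print(answer_question("Colorful plague that became a hit play by Arthur Miller",
                      [capitalized, last_word], [length_bias]))
```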

By late 2010, in practice games at IBM Research in Yorktown Heights, N.Y., Watson was good enough at finding the correct answers to win about 70 percent of games against former Jeopardy! champions. Then in early 2011, Watson went up against Jeopardy! superstars Ken Jennings and Brad Rutter.

Watson’s question-answering technology is expected to evolve into a commercial product. “I want to create something that I can take into every other retail industry, in the transportation industry, you name it,” John Kelly, who runs IBM Research, told The New York Times. “Any place where time is critical and you need to get advanced state-of-the-art information to the front decision-makers. Computers need to go from just being back-office calculating machines to improving the intelligence of people making decisions.”

When you’re looking for an answer to a question, where do you turn? If you’re like most people these days, you go to a computer, phone or mobile device, and type your question into a search engine. You’re rewarded with a list of links to websites where you might find your answer. If that doesn’t work, you revise your search terms until you find the answer. We’ve come a long way since the time of phone calls and visits to the library to find answers.

But what if you could just ask your computer the question, and get an actual answer rather than a list of documents or websites? Question answering (QA) computing systems are being developed to understand simple questions posed in natural language, and provide the answers in textual form. You ask “What is the capital of Russia?” The computer answers “Moscow,” based on the information that has been loaded into it.
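In miniature, and with an obviously hypothetical one-entry knowledge base, the difference looks like this in Python: search returns documents that match keywords, while QA returns the answer itself.

```python
knowledge_base = {
    ("capital", "russia"): "Moscow",
}

documents = [
    "Moscow is the capital and largest city of Russia.",
    "Russia spans eleven time zones.",
]

def search(query):
    # Keyword search: returns documents, not answers.
    terms = query.lower().replace("?", "").split()
    return [d for d in documents if any(t in d.lower() for t in terms)]

def answer(query):
    # QA: returns the answer itself, if the loaded knowledge covers it.
    q = query.lower()
    for keys, value in knowledge_base.items():
        if all(k in q for k in keys):
            return value
    return None

print(search("What is the capital of Russia?"))  # a list of documents
print(answer("What is the capital of Russia?"))  # "Moscow"
```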

IBM is taking this one step further, developing the Watson computer to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately demonstrate confidence to deliver precise final answers. Because of its deeper understanding of language, it can process and answer more complex questions that include the puns, irony and riddles common in natural language. On February 14–16, 2011, IBM’s Watson computer will be put to the test, competing in three episodes of Jeopardy! against the two most successful players in the quiz show’s history: Ken Jennings and Brad Rutter.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/watson/

As for me … I am anxiously waiting to see what happens starting tonight.  See my previous blog postings on Watson: “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Good luck tonight to Watson, Ken Jennings and Brad Rutter … may the best man win (so to speak)!

Introducing IBM at 100: Patents and Innovation

With the looming Jeopardy! challenge competition involving IBM Watson, I am feeling proud of my association with IBM, in part because IBM is an icon of business.  As a tribute, I plan to re-post a few of the notable achievements by IBM and IBMers from the past 100 years in an attempt to put the company’s contributions over the years into perspective.  Has IBM made a difference in our world … our planet?  What kind of impact has IBM had on the world?  Is it really a smarter planet as a result of the past 100 years?

I hope to answer these and other questions through these posts.  A dedicated website has these postings and much more about IBM’s past 100 years.   There is also a great overview video.  Check back often.  New stories will be added throughout the centennial year.  Let’s start with Patents and Innovation … a cornerstone of IBM’s heritage and reputation.

IBM’s 100 Icons of Progress

In the span of a century, IBM has evolved from a small business that made scales, time clocks and tabulating machines to a globally integrated enterprise with 400,000 employees and a strong vision for the future. The stories that have emerged throughout our history are complex tales of big risks, lessons learned and discoveries that have transformed the way we work and live. These 100 iconic moments—these Icons of Progress—demonstrate our faith in science, our pursuit of knowledge and our belief that together we can make the world work better.

Patents and Innovation

By hiring engineer and inventor James W. Bryce in 1917, Thomas Watson Sr. showed his commitment to pure inventing. Bryce and his team established IBM as a long-term leader in the development and protection of intellectual property. By 1929, 90 percent of IBM’s products were the result of Watson’s investments in R&D. In 1940, the team invented a method for adding and subtracting using vacuum tubes—a basic building block of the fully electronic computers that transformed business in the 1950s. This pattern—using innovation to create intellectual property—shaped IBM’s history.

On January 26, 1939, James W. Bryce, IBM’s chief engineer, dictated a two-page letter to Thomas J. Watson, Sr., the company’s president. It was an update on the research and patents he had been working on. Today, the remarkable letter serves as a window into IBM’s long-held role as a leader in the development and protection of intellectual property.

Bryce was one of the most prolific inventors in American history, racking up more than 500 U.S. and foreign patents by the end of his career. In his letter to Watson, he described six projects, each of which would be considered a signature life achievement for the average person. They included research into magnetic recording of data, an investigation into the use of light rays in computing and plans with Harvard University for what would become one of the first digital computers. But another project was perhaps most significant. Wrote Bryce: “We have been carrying on an investigation in connection with the development of computing devices which do not employ the usual adding wheels, but instead use electronic effects and employ tubes similar to those used in radio work.”

The investigation bore fruit. On January 15, 1940, Arthur H. Dickinson, Bryce’s top associate and a world-beating inventor in his own right, submitted an application for a patent for “certain improvements in accounting apparatus.” In fact, the patent represented a turning point in computing history. Dickinson, under Bryce’s supervision, had invented a method for adding and subtracting using vacuum tubes—a basic building block of the fully electronic computers that began to appear in the 1940s and transformed the world of business in the 1950s.

This pattern—using innovation to create intellectual property—is evident throughout IBM’s history. Indeed, intellectual property has been strategically important at IBM since before it was IBM.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/patents/

It’s Back to the Future, Not Crossing the Chasm When it Comes to AIIM’s “Systems of Record” and “Systems of Engagement”

Pardon the interruption from the recent Information Lifecycle Governance theme of my postings, but I felt the need to comment on this topic.  I even had to break out my flux capacitor for this posting, as I was certain I had seen this all before.

Recently at the ARMA Conference, and currently in the AIIM Community at large, there is a flood of panels, webinars, blog postings and tweets on a “new” idea from Geoffrey Moore (noted author and futurist) differentiating “Systems of Record” from “Systems of Engagement.” This idea results from a project at AIIM where Geoffrey Moore was hired as a consultant to, among other things, give the ECM industry a new identity. One of the drivers of the project has been the emergence and impact of social media on ECM. The new viewpoint being advocated is that there is a new and revolutionary wave of spending emerging on “Systems of Engagement” – a wave focused directly on knowledge worker effectiveness and productivity.

Let me start by saying that I am in full agreement with the premise behind the idea that there are separate “Systems of Record” and “Systems of Engagement.” I am also a big fan of Geoffrey Moore. I’ve read most of his books and have drunk the Chasm, Bowling Alley, Tornado and Gorilla flavors of his Kool-Aid. In fact, Crossing the Chasm is mandatory reading on my staff.

Most of the work from the AIIM project involving Moore has been forward thinking, logical and on target. However, this particular outcome does not sit well with me. My issue isn’t whether Moore and AIIM are right or wrong (they are right). My issue is that this concept isn’t a new idea. At best, Geoffrey has come up with a clever new label. The concept of “System of Record” is nothing new and a “System of Engagement” is a catchy way of referring to those social media systems that make it easier to create, use, and interact with content.

Here is where AIIM and Moore are missing the point. Social media is just the most recent “System of Engagement,” not the first. Like the engagement systems that came before it, it is not capable of also being a “System of Record” … so we need both … we’ve always needed both. It’s been this way for years. Apparently, though, we needed a new label, as everyone seems to have jumped on the bandwagon except me.

Let me point out some of the other “Systems of Engagement” over the years. For years, we’ve all been using something called Lotus Notes and/or Microsoft Exchange as a primary system to engage with our inner and outer worlds. This engagement format is called email … you may have heard of it. Kidding aside, we use email socially and always have. We use email to engage with others. We use email as a substitute for content management. Ever send an email confirming a lunch date? Ever communicate project details in the body of an email? Ever keep your documents in your email system as attachments so you know where they are? You get the idea. Email is not exactly a newfangled idea and no one can claim these same email systems also serve any legitimate record keeping purpose. There is enough case law and standards to fill a warehouse on that point (pardon the paper pun). More recently, instant messaging has even supplanted email for some of those same purposes especially as a way to quickly engage and collaborate to resolve issues. No one is confused about the purpose of instant messaging systems. It can even be argued that certain structured business systems like SAP are used in the same model when coupled with ECM to manage key business processes such as accounts payable. The point being, you engage in one place and keep records or content in another place. Use the tool best suited to the purpose.

Using technology like email and instant messaging to engage, collaborate and communicate on content-related topics with people is not a new idea. Social media is just the next thing in the same model. On one hand, giving social media and collaboration systems a proper label is a good thing. On the other hand, give me a break … any records manager doing electronic records embraced the concepts of “record making applications” and “record keeping systems” a long time ago. It’s a long-standing, proven model for managing information. Let’s call it what it is.

I applaud AIIM and Moore for putting this idea out there, but I also think they have both missed the mark. “Systems of Engagement” is a bigger, different and proven idea than how both are currently talking about it. Maybe I am a Luddite, but this seems to me like a proven idea that simply got a fresh coat of paint.

As AIIM and Moore use words like “revolution” and “profound implications” in their promotional materials, I think I’ll break out my Back to the Future DVD and stay a little more grounded.  Like a beloved old movie, I am still a fan of both Moore and AIIM.  However, I recommend you see this particular movie for yourself and try to separate the hype from the idea itself.  If you do, let me know whether you agree … is this an original idea or simply a movie sequel?