ECM Systems: Is Yours A Five Tool Player?

I grew up in Baltimore and baseball was my sport. I played Wiffle Ball in my backyard and Little League with my friends. It was all we ever talked and thought about. I played on all-star teams, destroyed my knees catching and worshipped the Orioles. And while I think Billy Beane’s use of analytics in “Moneyball” was absolute genius (read the book) … every good Orioles fan knows that starting pitching and three-run homers win baseball games … at least according to the Earl of Baltimore (sorry for the obscure Earl Weaver reference).

Brooks Robinson (Mr. Hoover) was my favorite player (only the greatest 3rd baseman of all time). I still have an autographed baseball he signed for me, as a kid, on prominent display in my office. I stood in line at the local Crown gas station for several hours with my Dad to get that ball.

But alas, baseball has fallen on hard times in Baltimore and even I had drifted away from the game. Good ole Brooksie was a fond nostalgic memory for me until the other day. This posting is not about baseball … it’s about ECM … really it is.

The recently concluded World Series was one of the most remarkable ever played. The late-inning heroics in game six were amazing. Though neither team would give up, one had to prevail. Watching the end of that game got me thinking about ECM … no, really!

Baseball is a game that transfixes you when the ball is put into play … or in motion. And quite frankly, the game is pretty boring in between the action … or when things are at rest. So much so that the game is almost unwatchable unless things are in motion. The game comes alive with the tag-up on a sacrifice fly … or the stolen base … or a runner stretching a single into a double … or best of all, the inside-the-park homer. What do they all have in common? Action! Excitement! Motion!

No one really cares what happens between the pitches. Everyone wants the action. That’s why you pay the ticket price … to sit on the edge of your seat and wait for the ball to be put into play. The same is true for your enterprise content. It’s much more valuable when you put it into play … or in action. Letting your content sit idle is just driving up your costs (and risks too). Your goal should be to put it in motion. I recently wrote about this in Content at Rest or Content in Motion? Which is Better?

However … putting your content in motion requires having the right tools. In baseball, the most coveted players are five tool players. They hit for average, hit for power, and have base-running speed, throwing ability and fielding ability.

The best ECM systems are also five tool players. They have five key capabilities. If you want the maximum value from your content, your ECM system must be able to:

1) Capture and manage content

2) Socialize content with communities of interest

3) Govern the lifecycle of content

4) Activate content through case centric processes

5) Analyze and understand content

I was lucky enough to have recently been interviewed by Wes Simonds, who wrote a nice piece on these same five areas of value for ECM. These five tools are coveted, just like in baseball. Why? Think about it … no one buys an ECM system unless they want to put their content in motion in one way or another.

Here’s the rub … far too often I see ECM practitioners who are only using one, or two, or maybe three, of their ECM capabilities even though they could be doing more. Why is this? It’s like being content to hit .220 in baseball (or to be a one or two tool player). No one gets a fat contract or goes to the Hall of Fame by hitting .220 and just keeping their head above the Mendoza line (another obscure baseball reference). Like in baseball, you need to use all five skills to get the big contract … or to get the maximum value from your ECM-based information.

Brooks Robinson didn’t win a record 16 straight Gold Gloves, the Most Valuable Player Award or play in 18 consecutive All Star games because he had one or two skills. He was named to the All Century team and elected to the Hall of Fame on the first ballot with a landslide 92% of the votes because he put the ball in motion and made the most of the skills and tools he had.

It’s simple … those new to ECM should only consider systems with all five capabilities.

And today’s existing ECM practitioners should be promoting, using and benefiting from all five tools, not just a few. Putting content in motion with all five tools benefits your career and maximizes your ECM program. It enables your organization to get the maximum value from the 80% of your data that is unstructured content.

As always, leave your thoughts and comments here.

TV Re-runs, Watson and My Blog

When I was a wee lad … back in the 60s … I used to rush home from elementary school to watch the re-runs on TV.  This was long before middle school and girls.  HOMEWORK, SCHMOMEWORK !!!  … I just had to see those re-runs before anything else.  My favorites were I Love Lucy, Batman, Leave It To Beaver and The Munsters.  I also watched The Patty Duke Show (big time school boy crush) but my male ego prevents me from admitting I liked it.  Did you know the invention of the re-run is credited to Desi Arnaz?  The man was a genius even though Batman was always my favorite.  Still is.  I had my priorities straight even back then.

I am reminded of this because I have that same Batman-like re-run giddiness as I think about the upcoming re-runs of Jeopardy! currently scheduled to air September 12th – 14th.

You’ve probably figured out why I am so excited, but in case you’ve been living in a cave, not reading this blog, or both … IBM Watson competed (and won) on Jeopardy! in February against the two most accomplished Grand Champions in the history of the game show (Ken Jennings and Brad Rutter).  Watson (DeepQA) is the world’s most advanced question answering machine that uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, IBM was able to demonstrate a major breakthrough in computing.

Unlike traditional structured data, human natural language is full of ambiguity … it is nuanced and filled with contextual references.  Subtle meaning, irony, riddles, acronyms, idioms, abbreviations and other language complexities all present unique computing challenges not found with structured data.  This is precisely why IBM chose Jeopardy! as a way to showcase the Watson breakthrough.
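To make the question-answering idea a little more concrete, here is a toy Python sketch of the basic pattern: generate candidate answers, then score each one by how much supporting evidence the text provides.  It is purely my own illustration (the clue, candidates and scoring are invented) and bears no resemblance to the real DeepQA pipeline.

```python
# Toy illustration of the candidate-generation plus evidence-scoring idea behind
# question answering.  This is NOT Watson/DeepQA -- just a deliberately naive sketch.

def evidence_score(question, passage):
    """Score a supporting passage by simple word overlap with the question."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def best_answer(question, candidates):
    """candidates maps a candidate answer to a list of supporting passages."""
    scored = {
        cand: sum(evidence_score(question, p) for p in passages)
        for cand, passages in candidates.items()
    }
    return max(scored.items(), key=lambda item: item[1])

clue = "This Oriole won 16 consecutive Gold Gloves at third base"
candidates = {
    "Brooks Robinson": ["Brooks Robinson won 16 consecutive Gold Gloves as the Orioles third baseman"],
    "Cal Ripken": ["Cal Ripken played shortstop and third base for the Baltimore Orioles"],
}
print(best_answer(clue, candidates))   # ('Brooks Robinson', 6)
```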

Appropriately, I’ve decided that this posting should be a re-run of my own Watson and content analysis related postings.  So in the spirit of Desi, Lucy, Batman and Patty Duke … here we go:

  1. This is my favorite post of the bunch.  It explains how the same technology used to play Jeopardy! can give you better business insight today.  “What is Content Analytics?, Alex”
  2. I originally wrote this a few weeks before the first match was aired to explain some of the more interesting aspects of Watson.  10 Things You Need to Know About the Technology Behind Watson
  3. I wrote this posting just before the three day match was aired live (in February) and updated it with comments each day.  Humans vs. Watson (Programmed by Humans): Who Has The Advantage?
  4. Watson will be a big part of the future of Enterprise Content Management and I wrote this one in support of a keynote I delivered at the AIIM Conference.   Watson and The Future of ECM  (my slides from the same keynote are posted here).
  5. This was my most recent posting.  It covers another major IBM Research advancement in the same content analysis technology space.  TAKMI and Watson were recognized as part of IBM’s Centennial as two of the top 100 innovations of the last 100 years.  IBM at 100: TAKMI, Bringing Order to Unstructured Data
  6. I wrote a similar IBM Centennial posting about IBM Research and Watson.  IBM at 100: A Computer Called Watson
  7. This was my first Watson related post.  It introduced Watson and was posted before the first match was aired.  Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Desi Arnaz may have been a genius when it came to TV re-runs but the gang at IBM Research has made a compelling statement about the future of computing.  Jeopardy! shows what is possible and my blog postings show how this can be applied already.  The comments from your peers on these postings are interesting to read as well.

Don’t miss either re-broadcast.  Find out where and when Jeopardy! will be aired in your area.  After the TV re-broadcast, I will be doing some events including customer and public presentations.

On the web …

  • I will be presenting IBM Watson and the Future of Enterprise Content Management on September 21, 2011 (replay here).
  • I will be speaking on Content Analytics in a free upcoming AIIM UK webinar on September 30, 2011 (replay here).

Or in person …

You might also want to check out the new Smarter Planet interview with Manoj Saxena (IBM Watson Solutions General Manager).

As always, your comments and thoughts are welcome here.

A 124 Year Odyssey Involving Cases and Records Finally Ends

I first became aware of this matter about 10 years ago when I read a story about a woman named Josephine Wild Gun (yes, that is her name) who then lived in a small run-down house on the Blackfeet reservation in Montana. Like most of her Native American neighbors, she owned several parcels of reservation land that were being held in trust by the U.S. Government (Indian Trust Fund).  The Indian Trust Fund was created in 1887, as part of the Dawes Act, to oversee payments to Native Americans.  This fund managed nearly 10,000 acres on Josephine’s behalf, leasing the property to private interests for grazing and oil drilling fees.  In return, she was supposed to receive royalties from the trust fund.

Despite the lucrative leases, Josephine had allegedly never received more than $1,500 a year from the trust fund.  According to the story, the payments trickled off and one check totaled only 87 cents.  When her husband died, she even had to borrow money to pay for the funeral.  Josephine’s story is compelling … and it stuck with me.   This story, along with some research I was doing on the Cobell v. Salazar lawsuit (involving the same Indian Trust Fund) and the government’s inability to produce records documenting the income accounting of the payments to Josephine and about 300,000 other Native Americans, caused me to wonder how and why something like this could happen.

The 15-year-old class action lawsuit (Cobell v. Salazar) was recently settled for $3.4 billion.  I am writing about this today because hundreds of thousands of notices went out this week to American Indians who are affected by the $3.4 billion settlement, bringing an end to a 124-year odyssey involving The Department of the Interior, The Bureau of Indian Affairs and many Native Americans and their descendants.  In this suit, Elouise Cobell (a Native American and member of the Blackfeet tribe) sued the federal government over the mismanagement of the trust fund.  In her suit, Cobell claimed that the U.S. Government failed to provide a historical accounting of the money the government held in trust for Native American landowners in exchange for the leasing of tribal lands.  Ultimately, the case hinged on the government’s ability to produce these accounting records showing how the money was managed on behalf of the original landowners.  I find myself wondering if the whole thing could have been avoided with better case management and recordkeeping practices.  This 15-year court battle is the culmination of events going all the way back to the 19th Century!  The landowners had a right to expect proper case management, proper records management and proper distribution of funds.  Apparently, none of those things happened.

As a history buff, I find the whole back story fascinating … so here we go …

It all starts with Henry Dawes (1816 – 1903), a Yale graduate from Massachusetts.  He was an educator, a newspaper editor, a lawyer and, perhaps somewhat infamously, a Congressman who served in both the U.S. House of Representatives (1857 to 1875) and the U.S. Senate (1875 to 1893).

During his time in public service, he had his ups and his downs.  In 1868, he received a large number of shares of stock from a railroad construction company as part of the Union Pacific railway’s influence-buying efforts.  On the positive side, Dawes was a supporter of, and involved with, the creation of Yellowstone National Park.  He also had a role in promoting anti-slavery and reconstruction measures during and after the Civil War.  In the Senate, he was chairman of the Committee on Indian Affairs, where he concentrated on the enactment of laws that he believed were for the benefit of American Indians.

Dawes’s most noteworthy achievement was the passage of The General Allotment Act of 1887 (known as The Dawes Act referenced earlier).  The Dawes Act authorized the government to survey and inventory Indian tribal land and to divide the area into allotments for individual Indians.  Although later amended twice, it was this piece of legislation that set the stage for 124 years of alleged mismanagement and eventually the Cobell v. Salazar lawsuit.

I see this as a cautionary tale … reminding us of the need for enterprise content and case management as well as records management (but more on that later).  I wasn’t around, but I would imagine PCs ran pretty slowly back in 1887 (chuckle) … but I digress; manual, paper-based practices did exist.

Back to the story … The Dawes Commission was established under the Office of Indian Affairs to persuade American Indians to agree to the allotment plan.  Dawes himself later oversaw the commission for a period after his time as a Senator.  It was this same commission that registered and documented the members of the Five Civilized Tribes.  Eventually, The Curtis Act of 1898 abolished tribal jurisdiction over the tribes’ land and the landowners became dependent on the government.  Native Americans lost about 90 million acres of treaty land, or about two-thirds of the 1887 land base, over the lifespan of the Dawes Act.  Roughly 90,000 Indians were made landless and the Act forced Native people onto small tracts of land … in many cases, it separated families.  The allotment policy depleted the land base and also ended hunting as a means of subsistence.  In 1928, a Calvin Coolidge Administration study determined that The Dawes Act had been used to illegally deprive Native Americans of their land rights.  Today, The United States Department of the Interior is responsible for the remnants of The Dawes Act and the Office of Indian Affairs is now known as the Bureau of Indian Affairs.

A pretty big taxpayer bill ($3.4 billion) is finally about to be paid out to the surviving Native American descendants and for other purposes.  Throughout the lifecycle of this case, there were multiple contempt charges, fines and embarrassing mandates, resulting in the government’s reputation taking a significant hit.  Interior Secretary Bruce Babbitt and Treasury Secretary Robert Rubin were found in contempt of court for failing to produce documents and slapped with a $625,000 fine.  And while time went by and Administrations changed, not much else did: Interior Secretary Gale Norton and Assistant Interior Secretary of Indian Affairs Neal McCaleb were also held in contempt.  At one point, the judge also ordered the Interior Department to shut down most of its Internet operations after an investigator discovered that the department’s computer system allowed unauthorized access to Indian trust accounts.  During this time, many federal employees could not receive or respond to emails, and thousands of visitors to national parks were unable to make online reservations for campsites.  The shutdown also prevented the trust fund from making payments to more than 43,000 Indians, many of whom depended on the quarterly checks to make ends meet.  In Montana and Wyoming, some beneficiaries were forced to apply for tribal loans to help them through the holidays.

There was plenty of mudslinging as well:

“Federal officials have spent more than 100 years mismanaging, diverting, and losing money that belongs to Indians,” says John Echohawk of the Native American Rights Fund, which directed the lawsuit.  “They have no idea how much has been collected from the companies that use our land and are unable to provide even a basic, regular statement to most Indian account holders.”

Again I ask … where was the accountability for these landowner cases and the associated records?  Could all of this have been prevented with better policies and processes?

The damage was already done, but we know that the government invested in an array of systems such as the Integrated Records Management System (IRMS), Trust Funds Accounting System (TFAS), Land Records Information System (LRIS) and Trust Asset and Accounting Management System (TAAMS).  These systems were intended to collect, manage and distribute trust funds in support of the 1994 Indian Trust Fund Management Reform Act.  They were used for historical accounting purposes and contained land ownership records and financial records for the associated cases.  A major premise of the government’s accounting effort was that the transition from paper to electronic records took the accuracy, completeness and reliability of the trust data to a level that far surpassed the “paper ledger era” … it seems it was too little, too late.

I guess we’ll never know for sure, but I firmly believe that much, if not most, of this could have been avoided.  It was alleged during the case that as much as 90 percent of the Indian Trust Fund’s records were missing, and the few that were available were in comically bad condition.  An Interior Department report provided to the court refers to storage facilities plagued by problems ranging from “poisonous spiders in the vicinity of stored records” to “mixed records strewn throughout the room with heavy rodent activity.”

It’s a tragic story and I am glad it’s finally ending.  It’s disheartening that Josephine Wild Gun and many others had to suffer the way they did for the past 124 years.  It’s amazing how many people this impacted, starting with Henry Dawes and ending with ~300,000 Native Americans (and everyone in between).  It’s encouraging to know that technologies like Enterprise Content Management, Advanced Case Management and Records Management can all be used with great impact in the future to improve processes and outcomes like this.

As always, leave me your thoughts and opinions here.

IBM at 100: TAKMI, Bringing Order to Unstructured Data

As most of you know … I have been periodically posting some of the really fascinating top 100 innovations of the past 100 years as part of IBM’s Centennial celebration.

This one is special to me as it represents what is possible for the future of ECM.  I wasn’t around for tabulating machines and punch cards but have long been fascinated by the technology developments in the management and use of content.  As impressive as Watson is … it is only the most recent step in a long journey IBM has been pursuing to help computers better understand natural language and unstructured information.

As most of you probably don’t know … this journey started over 50 years ago, in 1957, when IBM published the first research on this subject, entitled A Statistical Approach to Mechanized Encoding and Searching of Literary Information.  Finally … something in this industry older than I am!

Unstructured Information Management Architecture (UIMA)

Another key breakthrough by IBM in this area was the invention of UIMA.  Now an Apache Open Source project and OASIS standard, UIMA is an open, industrial-strength platform for unstructured information analysis and search.  It is the only open standard for text based processing and applications.  I plan to write more on UIMA in a future blog but I mention it here because it was an important step forward for the industry, Watson and TAKMI (now known as IBM Content Analytics).
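UIMA itself is a Java framework, but the core pattern is easy to picture: documents flow through a chain of annotators, each of which reads a shared analysis structure and adds stand-off annotations to it.  Here is a minimal conceptual sketch in Python; the class and method names are mine, not the actual Apache UIMA API.

```python
# Conceptual sketch of the UIMA-style annotator pipeline.  Each annotator reads the
# shared analysis structure (the "CAS" in UIMA terminology) and attaches stand-off
# annotations.  Illustrative only -- the real Apache UIMA framework is Java-based.

import re

class CAS:
    """Minimal stand-in for UIMA's Common Analysis Structure."""
    def __init__(self, text):
        self.text = text
        self.annotations = []          # list of (type, start, end) spans

class TokenAnnotator:
    def process(self, cas):
        for m in re.finditer(r"\w+", cas.text):
            cas.annotations.append(("Token", m.start(), m.end()))

class YearAnnotator:
    def process(self, cas):
        for m in re.finditer(r"\b\d{4}\b", cas.text):
            cas.annotations.append(("Year", m.start(), m.end()))

def run_pipeline(text, annotators):
    cas = CAS(text)
    for annotator in annotators:
        annotator.process(cas)
    return cas

cas = run_pipeline("IBM published the first research on this subject in 1957.",
                   [TokenAnnotator(), YearAnnotator()])
for kind, start, end in cas.annotations:
    print(kind, cas.text[start:end])
```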

TAKMI

In 1997, IBM researchers at the company’s Tokyo Research Laboratory pioneered a prototype for a powerful new tool capable of analyzing text. The system, known as TAKMI (for Text Analysis and Knowledge Mining), was a watershed development: for the first time, researchers could efficiently capture and utilize the wealth of buried knowledge residing in enormous volumes of text. The lead researcher was Tetsuya Nasukawa.

Over the past 100 years, IBM has had a lot of pretty important inventions but this one takes the cake for me.  Nasukawa-san once said,

“I didn’t invent TAKMI to do something humans could do, better.  I wanted TAKMI to do something that humans could not do.”

In other words, he wanted to invent something humans couldn’t see or do on their own … and isn’t that the whole point and value of technology anyway?

By 1997, text was searchable, if you knew what to look for.  But the challenge was to understand what was inside those growing volumes of information and to take advantage of massive amounts of textual content that no one could possibly read through and digest.

The development of TAKMI quietly set the stage for the coming transformation in business intelligence. Prior to 1997, the field of analytics dealt strictly with numerical and other “structured” data—the type of tagged information that is housed in fixed fields within databases, spreadsheets and other data collections, and that can be analyzed by standard statistical data mining methods.

The technological clout of TAKMI lay in its ability to read “unstructured” data—the data and metadata found in the words, grammar and other textual elements comprising everything from books, journals, text messages and emails, to health records and audio and video files. Analysts today estimate that 80 to 90 percent of any organization’s data is unstructured. And with the rising use of interactive web technologies, such as blogs and social media platforms, churning out ever-expanding volumes of content, that data is growing at a rate of 40 to 60 percent per year.

The key to this success was natural language processing (NLP) technology.  Most data mining researchers were treating English text as a bag of words, extracting words from character strings based on white space.  However, since Japanese text does not use white space as a word separator, IBM researchers in Tokyo applied NLP to extract words, analyze their grammatical features and identify relationships among words.  Such in-depth analysis led to better text mining results.  That’s why this leading-edge text mining technology originated in Japan.
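A tiny sketch of that difference, assuming a hand-made dictionary: whitespace splitting works for English but leaves Japanese as one unbroken string, so a segmenter (here, a naive greedy longest-match) has to find the word boundaries.  Real systems like TAKMI use full morphological analysis rather than this toy approach.

```python
# Toy contrast: whitespace tokenization vs. naive longest-match segmentation.
# Real NLP (as in TAKMI) uses full morphological analysis; this is only a sketch.

def whitespace_tokens(text):
    return text.split()

def longest_match_segment(text, dictionary):
    """Greedy longest-match segmentation against a small hand-made dictionary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):          # try the longest candidate first
            if text[i:j] in dictionary or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(whitespace_tokens("IBM pioneered text mining"))
# ['IBM', 'pioneered', 'text', 'mining']

print(whitespace_tokens("東京で寿司を食べた"))
# ['東京で寿司を食べた']  -- no word separators, so one unbroken "token"

dictionary = {"東京", "で", "寿司", "を", "食べた"}
print(longest_match_segment("東京で寿司を食べた", dictionary))
# ['東京', 'で', '寿司', 'を', '食べた']
```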

The complete article on TAKMI can be found at http://www.ibm.com/ibm100/us/en/icons/takmi/

Fast forward to today.  IBM has since commercialized TAKMI as IBM Content Analytics (ICA), a platform to derive rapid insight.  It can transform raw information into business insight quickly without building models or deploying complex systems, enabling all knowledge workers to derive insight in hours or days … not weeks or months.  It helps address industry specific problems such as healthcare treatment effectiveness, fraud detection, product defect detection, public safety concerns, customer satisfaction and churn, crime and terrorism prevention and more.

I’d like to personally congratulate Nasukawa-san and the entire team behind TAKMI (and ICA) for such an amazing achievement … and for making the list.  Selected team members who contributed to TAKMI are Tetsuya Nasukawa, Kohichi Takeda, Hideo Watanabe, Shiho Ogino, Akiko Murakami, Hiroshi Kanayama, Hironori Takeuchi, Issei Yoshida, Yuta Tsuboi and Daisuke Takuma.

It’s a shining example of the best form of innovation … the kind that enables us to do something not previously possible.  Being recognized alongside other achievements like the UPC code, the floppy disk, magnetic stripe technology, laser eye surgery, the scanning tunneling microscope, fractal geometry and human genome mapping is really something.

This type of enabling innovation is the future of Enterprise Content Management.  It will be fun and exciting to see if TAKMI (Content Analytics) has the same kind of impact on computing as the UPC code has had on retail shopping … or as laser eye surgery has had on vision care.

What do you think?  As always, leave your thoughts and comments here.

Other similar postings:

Watson and The Future of ECM

“What is Content Analytics?, Alex”

10 Things You Need to Know About the Technology Behind Watson

Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy! 

Content in Motion: The Voice of Your Customer

Do you listen to your customers?

No, really!  Of course, everyone answers “yes” when asked this question.  So much so … that the question really isn’t worth asking anymore.  The real question to ask is “What are you doing about it?”

Your customers write about your services, prices, product quality and their experiences with you in social media.  They write you letters (yes, letters on paper do exist), they send you emails, they call your call centers and even participate in surveys you conduct … Again I ask, what are you doing about it?

How are you translating all that information across all those input channels into action?  All of that content (you already have) in the form of customer interactions is just waiting to be leveraged (hhmmmm).

In three separate “C” Level studies (CIO, CFO, CEO) … the number one executive imperative was to “Reinvent Customer Relationships”.  Across the three studies, key findings were to:

  • Get closer to customers (top need)
  • Better understand what customers need
  • Deliver unprecedented customer service

Can anyone think of a better way to accomplish this than by examining all of that customer interaction based content to enable you to do something about it?  I bet there are loads of trends, patterns and new insights just waiting to be explored and discovered in those interactions … something demanding your attention and needing action.  This is one of the thoughts I had in mind when I blogged about “Content at Rest or Content in Motion? Which is Better?” a few weeks ago.  Clearly, identifying customer satisfaction trends about products, services and personnel is critical to any business.

The Hertz Corporation is doing this today.  They are using IBM Content Analytics software to examine customer interaction content, identify car and equipment rental performance levels, and pinpoint and make the adjustments needed to improve customer satisfaction.  Insights derived from enterprise content enable companies like Hertz to drive new marketing campaigns or modify their products and services to meet the demands of their customers.

“Hertz gathers an amazing amount of customer insight daily, including thousands of comments from web surveys, emails and text messages. We wanted to leverage this insight at both the strategic level and the local level to drive operational improvements,” said Joe Eckroth, Chief Information Officer, the Hertz Corporation.

Hertz isn’t just listening … they are taking action … by putting their content in motion.
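For a feel of what this looks like in practice, here is a minimal sketch of surfacing satisfaction trends from raw comments.  The comments, topic keywords and negative-word list are all invented for illustration; IBM Content Analytics does this with full linguistic analysis and facets rather than keyword matching.

```python
# Minimal sketch: which topics show up most often in negative customer comments?
# All data below is invented for illustration.  Real content analytics relies on
# linguistic analysis and facets, not simple keyword matching.

import re
from collections import Counter

comments = [
    "The rental car was dirty and the pickup line was slow",
    "Great service at the counter, very fast checkout",
    "Waited 45 minutes at pickup, far too slow",
    "Car was clean but the navigation unit was broken",
]

topics = {
    "pickup wait": {"pickup", "line", "waited", "slow"},
    "cleanliness": {"dirty", "clean"},
    "equipment": {"navigation", "broken"},
    "service": {"service", "counter", "checkout"},
}
negative_words = {"dirty", "slow", "waited", "broken"}

trend = Counter()
for comment in comments:
    words = set(re.findall(r"[a-z0-9]+", comment.lower()))
    if words & negative_words:                     # comment carries negative language
        for topic, keywords in topics.items():
            if words & keywords:
                trend[topic] += 1

for topic, count in trend.most_common():
    print(f"{topic}: {count} negative mention(s)")
```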

Again I ask, what are you doing about it?  Why not test drive Hertz’s idea in your business?  You’ve already got the content to do so.

I welcome your input as always.  I recently bylined articles on Hertz and IBM Content Analytics for ibm.com and CIO.com entitled  “Insights into Action – Improving Service by Listening to the Voices of your Customers”.  For a more detailed profile on ICA at Hertz visit: http://www-03.ibm.com/press/us/en/pressrelease/32859.wss

IBM … 100 Years Later

Nearly all the companies our grandparents admired have disappeared.  Of the top 25 industrial corporations in the United States in 1900, only two remained on that list at the start of the 1960s.  And of the top 25 companies on the Fortune 500 in 1961, only six remain there today.  Some of the leaders of those companies that vanished were dealt a hand of bad luck.  Others made poor choices. But the demise of most came about because they were unable simultaneously to manage their business of the day and to build their business of tomorrow.

IBM was founded in 1911 as the Computing Tabulating Recording Corporation through a merger of four companies: the Tabulating Machine Company, the International Time Recording Company, the Computing Scale Corporation, and the Bundy Manufacturing Company.  CTR adopted the name International Business Machines in 1924.  Its distinctive culture and product branding have given IBM the nickname Big Blue.

As you read this, IBM begins its 101st year.  As I look back at the last century, the path that led us to this remarkable anniversary has been both rich and diverse.  The innovations IBM has contributed include products ranging from cheese slicers to calculators to punch cards – all the way up to game-changing systems like Watson.

But what stands out to me is what has remained unchanged.  IBM has always been a company of brilliant problem-solvers.  IBMers use technology to solve business problems.  We invent it, we apply it to complex challenges, and we redefine industries along the way.

This has led to some truly game-changing innovation.  Just look at industries like retail, air travel, and government.  Where would we be without UPC codes, credit cards and ATM machines, SABRE, or Social Security?  Visit the IBM Centennial site to see profiles on 100 years of innovation.

We haven’t always been right though … remember OS/2, the PCjr and Prodigy?

100 years later, we’re still tackling the world’s most pressing problems.  It’s incredibly exciting to think about the ways we can apply today’s innovation – new information based systems leveraging analytics to create new solutions, like Watson – to fulfill the promise of a Smarter Planet through smarter traffic, water, energy, and healthcare.  This promise of the future … is incredibly exciting and I look forward to helping IBM pave the way for continued innovation.

Watch the IBM Centennial film “Wild Ducks” or read the book.  IBM officially released a book last week celebrating the Centennial, “Making the World Work Better: The Ideas that Shaped a Century and a Company”.  The book consists of three original essays by leading journalists.  They explore how IBM has pioneered the science of information, helped reinvent the modern corporation and changed the way the world actually works.

As for me … I’ve been with IBM since the 2006 acquisition of FileNet and am proud to be associated with such an innovative and remarkable company.

Content at Rest or Content in Motion? Which is Better?

I really wish I’d thought of this concept but I didn’t.  It’s such a simple idea when you think about it … that there are two fundamental types of content … enterprise content at rest and enterprise content in motion.

Content at Rest = Cost / Risk

Enterprise content at rest is sitting around just taking up space.  At rest implies not being accessed … not being used … not doing anything of value.  (Hhhmm … this sounds a lot like my Uncle Leo around the holidays.  I have this mental image of him asleep on my couch one Thanksgiving surrounded by several beer cans.  Sorry for sharing.)  Anyway … when at rest, this content usually includes duplicates and near-duplicates making the problem worse.  Content at rest drives significant and unnecessary costs in the form of storage, power, system administration and more. Worse yet, all this unnecessary content is ruining our search experiences.  We can’t find anything because of all this useless content just hanging around gumming up our search results.  Boy this sounds dumb.  Maybe we should start disposing of some of this stuff?
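On the duplicates point, here is a minimal sketch of how exact duplicates can be flagged by hashing file contents (the directory path is a placeholder).  Near-duplicates require fuzzier techniques such as shingling or similarity hashing, and any actual disposal still has to honor your retention obligations.

```python
# Minimal sketch: flag exact duplicate files by hashing their contents.
# The path below is a placeholder.  Near-duplicates need fuzzier techniques
# (shingling, similarity hashing), and disposal must respect retention rules.

import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(root):
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for digest, paths in find_exact_duplicates("/data/file_share").items():
    print(f"{len(paths)} copies of identical content:")
    for p in paths:
        print("   ", p)
```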

Content in Motion = Value / Reward

On the other hand, enterprise content in motion is highly valuable and rewarding.  Content that is part of a business process or case, enabling better decisions and outcomes … or content that is community and social oriented, driving better collaborative experiences and outcomes … or content that is being analyzed to unlock business insight across large amounts of unstructured data.  Sounds awesome, doesn’t it?  Let’s put all that content to work for us! (Reminds me of my Aunt Marge … who never stops cleaning, cooking, running errands and taking care of the critical family stuff.  How Leo and Marge have stayed married all these years is beyond me).

What To Do

If we’re agreed that it’s far more valuable to activate content, then how do we go about it … and, more importantly, how do we pay for it?

Today, over 80% of most IT budgets are already allocated to managing existing “stuff” … programs, systems and storage, including all that costly content at rest.  With information expected to grow 44 times by 2020, this is a failure scenario.  IT budgets are flat or declining in most organizations, so at the current course and speed we’ll increasingly be spending 83%, 88%, 95% and eventually all of our IT budget on managing existing “stuff”.  This leaves very little or no money to invest in new ECM initiatives that drive value … like those that activate content and put content in motion.

And those who say … “but storage is always getting cheaper, so no big deal” should probably stop reading here because you won’t like what is coming next.  Storage may indeed be getting cheaper, but the people, power, maintenance and physical space that it requires are not.  It was a dumb argument yesterday, a ridiculous one today and an untenable one going forward.  Most IT budgets already spend 17% on storage (yikes), which ought to be plenty.
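To see how that squeeze plays out, here is a back-of-the-envelope sketch.  The starting 80% share comes from the paragraph above; the 5% annual growth in “keep the lights on” cost against a flat budget is an assumption chosen only to illustrate the dynamic.

```python
# Back-of-the-envelope: flat IT budget vs. growing cost of managing existing "stuff".
# The 80% starting share is from the post; the 5% annual growth rate is an assumption.

budget = 100.0            # flat total IT budget (an index, not dollars)
keep_lights_on = 80.0     # cost of managing existing content, systems and storage
growth_rate = 0.05        # assumed annual growth of that cost

for year in range(6):
    share = min(keep_lights_on / budget, 1.0) * 100
    print(f"year {year}: {share:.0f}% consumed, {100 - share:.0f}% left for new initiatives")
    keep_lights_on *= 1 + growth_rate
```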

Action Plan to Activate Content and Drive Value

Let’s just stop the madness and put much more focus, energy and budget on delivering value through content in motion!  Here are some basic steps you can take right now:

1. Think and act differently … it’s really about the communities, the processes and the insight related to your content.  Use and value come from activity, not stagnation.

2. Defensibly dispose of everything you can, including retiring old content-centric apps, abandoned SharePoint sites and unused file shares, as soon as you can … except what you are obligated to keep for business, regulatory or legal purposes.  Big hint: this will free up loads of resources and budget that can be reallocated to new projects, like activating content.

3. Work with your line-of-business execs in three areas to activate content:

Case Management:  Automate and improve those workflows and processes that are case centric, where people, process and content are essential to the outcome.  The more ad hoc, exception-oriented processes drive the most content value … think claims processing, dispute resolution, customer inquiry, investigation, onboarding and more.

Responsible Social Content:  Enable a true social content experience for knowledge workers, where projects, activities, instant collaboration, tasks and ECM services are the norm.  Think Facebook + ECM for the enterprise … combine ECM, social software and a more responsible approach to content collaboration.

Content Analysis:  Leverage and exploit your content by understanding the trends, patterns, anomalies and deviations of your business that are currently trapped in your content. Think Business Intelligence for content and detect fraud, predict outcomes, find new opportunities, hear the voice-of-the-customer and more.

I think it’s obvious by now which is better … or in other words, we should all be more like my Aunt Marge and not  my Uncle Leo (he snores when he naps, too). 

Toby Bell (Gartner ECM analyst) made the “content at rest” and “content in motion” remarks that got me thinking along these lines.  I’ve taken Toby’s idea and added my own perspective.  I also discussed this concept at this week’s Managing Electronic Records Conference in Chicago and received positive feedback from the audience and a few individuals afterwards.

As always … leave me your thoughts and ideas here. I’ll be discussing this topic and other ECM topics at the upcoming Boston and Toronto UserNet events. Hope to see you there.

IBM at 100: SAGE, The First National Air Defense Network

This week was a reminder of how technology can aid in our nation’s defense as we struck a major blow against terrorism.  Most people don’t realize the many ways IBM has contributed to our nation’s defense.  Here is just one example, from 1949.

When the Soviet Union detonated its first atomic bomb on August 29, 1949, the United States government concluded that it needed a real-time, state-of-the-art air defense system.  It turned to the Massachusetts Institute of Technology (MIT), which in turn recruited companies and other organizations to design what would be an online system covering all of North America, using many technologies, a number of which did not exist yet.  Could it be done?  It had to be done.  Such a system had to observe, evaluate and communicate incoming threats much the way a modern air traffic control system monitors flights of aircraft.

This marked the beginning of SAGE (Semi-Automatic Ground Environment), the national air defense system implemented by the United States to warn of and intercept airborne attacks during the Cold War.  The heart of this digital system—the AN/FSQ-7 computer—was developed, built and maintained by IBM.  SAGE was the largest computer project in the world during the 1950s and took IBM squarely into the new world of computing.  Between 1952 and 1955, it generated 80 percent of IBM’s revenues from computers, and by 1958, more than 7000 IBMers were involved in the project.  SAGE spun off a large number of technological innovations that IBM incorporated into other computer products.

IBM’s John McPherson led the early conversations with MIT, and senior management quickly realized that this could be one of the largest data processing opportunities since winning the Social Security bid in the mid-1930s.  Thomas Watson, Jr., then lobbying his father and other senior executives to move into the computer market quickly, recalled in his memoirs that he wanted to “pull out all the stops” to be a central player in the project.  “I worked harder to win that contract than I worked for any other sale in my life.”  So did a lot of other IBMers: engineers designing components, then the computer; sales staff pricing the equipment and negotiating contracts; senior management persuading MIT that IBM was the company to work with; other employees collaborating with scores of companies, academics and military personnel to get the project up and running; and yet others who installed, ran and maintained the IBM systems for SAGE for a quarter century.

The online features of the system demonstrated that a new world of computing was possible—and that, in the 1950s, IBM knew the most about this kind of data processing.  As the ability to develop reliable online systems became a reality, other government agencies and private companies began talking to IBM about possible online systems for them.  Some of those projects transpired in parallel, such as the development of the Semi-Automated Business Research Environment (Sabre), American Airlines’ online reservation system, also built using IBM staff located in Poughkeepsie, New York.

In 1952, MIT selected IBM to build the computer to be the heart of SAGE.  MIT’s project leader, Jay W. Forrester, reported later that the company was chosen because “in the IBM organization we observed a much higher degree of purposefulness, integration and ‘esprit de corps’ than in other firms,” and because of “evidence of much closer ties between research, factory and field maintenance at IBM.”  The technical skills to do the job were also there, thanks to prior experience building advanced electronics for the military.

IBM quickly ramped up, assigning about 300 full-time IBMers to the project by the end of 1953. Work was centered in IBM’s Poughkeepsie and Kingston, NY facilities and in Cambridge, Massachusetts, home of MIT.  New memory systems were needed; MITRE and the Systems Development Corporation (part of RAND Corporation) wrote software, and other vendors supplied components.  In June 1956, IBM delivered the prototype of the computer to be used in SAGE.  The press release called it an “electronic brain.”  It could automatically calculate the most effective use of missiles and aircraft to fend off attack, while providing the military commander with a view of an air battle. Although this seems routine in today’s world, it was an enormous leap forward in computing.  When fully deployed in 1963, SAGE included 23 centers, each with its own AN/FSQ-7 system, which really consisted of two machines (one for backup), both operating in coordination.  Ultimately, 54 systems were installed, all collaborating with each other. The SAGE system remained in service until January 1984, when it was replaced with a next-generation air defense network.

Its innovative technological contributions to IBM and the IT industry as a whole were significant.  These included magnetic-core memories, which worked faster and held more data than earlier technologies; a real-time operating system (a first); highly disciplined programming methods; overlapping computing and I/O operations; real-time transmission of data over telephone lines; use of CRT terminals and light pens (a first); redundancy and backup methods and components; and the highest reliability of computer systems (uptime) of the day.  It was the first geographically distributed, online, real-time application of digital computers in the world.  Because many of the technological innovations spun off from this project were ported over to new IBM computers in the second half of the 1950s by the same engineers who had worked on SAGE, the company was quickly able to build on lessons learned in how to design, manufacture and maintain complex systems.

Fascinating to be sure … the full article can be accessed at http://www.ibm.com/ibm100/us/en/icons/sage/

IBM at 100: The 1401 Mainframe

In my continuing series of IBM at 100, I turn to our data processing heritage with the IBM 1401 Data Processing System (which was long before my time).

While the IBM 1401 Data Processing System wasn’t a great leap in power or speed, that was never the point. “It was a utilitarian device, but one that users had an irrational affection for,” wrote Paul E. Ceruzzi in his book, A History of Modern Computing.

There were several keys to the popularity of the 1401 system. It was one of the first computers to run completely on transistors—not vacuum tubes—and that made it smaller and more durable. It rented for US$2500 per month, and was touted as the first affordable general-purpose computer. It was also the easiest machine to program at the time. The system’s software, wrote Dag Spicer, senior curator at the Computer History Museum, “was a big improvement in usability.”

This more accessible computer unleashed pent-up demand for data processing. IBM was shocked to receive 5200 orders for the 1401 computer in just the first five weeks after introducing it—more than was predicted for the entire life of the machine. Soon, business functions at companies that had been immune to automation were taken over by computers. By the mid-1960s, more than 10,000 1401 systems were installed, making it by far the best-selling computer to date.

More importantly, it marked a new generation of computing architecture, causing business executives and government officials to think differently about computing. A computer didn’t have to be a monolithic machine for the elite. It could fit comfortably in a medium-size company or lab. In the world’s top corporations, different departments could have their own computers.

A computer could even wind up operating on an army truck in the middle of a forest. “There was not a very good grasp or visualization of the potential impact of computers—certainly as we know them today—until the 1401 came along,” said Chuck Branscomb, who led the 1401 design team. The 1401 system made enterprises of all sizes believe a computer was useful, and even essential.

By the late 1950s, computers had experienced tremendous changes. Clients drove a desire for speed. Vacuum-tube electronics replaced the electro-mechanical mechanisms of the tabulating machines that dominated information processing in the first half of the century. First came the experimental ENIAC, then Remington Rand’s Univac and the IBM 701, all built on electronics. Magnetic tape and then the first disk drives changed ideas about the accessibility of information. Grace Hopper’s compiler and John Backus’s FORTRAN programming language gave computer experts new ways to instruct machines to do ever more clever and complex tasks. Systems that arose out of those coalescing developments were a monumental leap in computing capabilities.

Still, the machines touched few lives directly. Installed and working computers numbered barely more than 1000. The world, in fact, was ready for a more accessible computer.

The first glimpse of that next generation of computing turned up in an unexpected place: France. “In the mid-1950s, IBM got a wake-up call,” said Branscomb, who ran one of IBM’s lines of accounting machines at the time. French computer upstart Machines Bull came out with its Gamma computers, small and fast compared to goliaths like the IBM 700 series. “It was a competitive threat,” Branscomb recalled.

Bull made IBM and others realize that entities with smaller budgets wanted computers. IBM scrambled together resources to try to make a competing machine. “It was 1957 and IBM had no new machine in development,” Branscomb said. “It was a real problem.”

During June and July 1957, IBM engineers and planners gathered in Germany to propose several accounting machine designs. The anticipated product of this seven-week conference was known thereafter as the Worldwide Accounting Machine (WWAM), although no particular design was decided upon.

In September 1957, Branscomb was assigned to run the WWAM project. In March 1958, after Thomas Watson, Jr. expressed dissatisfaction with the WWAM project in Europe, the Endicott proposal for a stored-program WWAM was given formal approval as the company’s approach to meeting the need for an electronic accounting machine. The newly assigned project culminated in the announcement of the 1401 Data Processing System (although, for a time, it carried the acronym SPACE).

The IBM 1401 Data Processing System—comprising a variety of card and tape models with a range of core memory sizes, and configured for stand-alone use and peripheral service for larger computers—was announced in October 1959.

Branscomb’s group set a target rental cost of US$2500 per month, well below a 700 series machine, and hit it. They also decided the computer had to be simple to operate. “We knew it was time for a dramatic change, a discontinuity,” Branscomb added. And indeed it was. The 1401 system extended computing to a new level of organization and user, driving information technology deeper into everyday life.

The full article can be accessed at http://www.ibm.com/ibm100/us/en/icons/mainframe/

Watson and The Future of ECM

In the past, I have whipped out my ECM powered crystal ball to pontificate about the future of Enterprise Content Management.  These are always fun to write and share (see Top 10 ECM Pet Peeve Predictions for 2011  and Crystal Ball Gazing … Enterprise Content Management 2020).  This one is a little different though …  on the eve of the AIIM International Conference and Expo at info360, I find myself wondering … what are we going to do with all this new social content … all of these content based conversations in all of their various forms?

We’ve seen the rise of the Systems of Engagement concept and a number of new systems that enable social business.  We’re adopting new ways to work together leveraging technologies like collaborative content, wikis, communities, RSS and much more.  All of this new content being generated is text based and expressed in natural language.  I suggest you read AIIM’s report Systems of Engagement and the Future of Enterprise IT: A Sea Change in Enterprise for a perspective on the management aspects of the future of ECM.  It lays out how organizations must think about information management, control, and governance in order to deal with social technologies.

Social business is not just inside the firewall though.  Blogs, wikis and social network conversations are giving consumers and businesses a voice and power they’ve never had before … again based in text and expressed in natural language.  This is a big deal.  770 million people worldwide visited a social networking site last year (according to a comScore report titled Social Networking Phenomenon) … and amazingly, over 500 billion impressions annually are being made about products and services (according to a new book Empowered written by Josh Bernoff and Ted Schadler).

But what is buried in these text based natural language conversations?  There is an amazing amount of information trapped inside.  With all these conversations happening between colleagues, customers and partners … what can we learn from our customers about product quality, customer experience, price, value, service and more?  What can we learn from our internal conversations as well?  What is locked in these threads and related documents about strategy, projects, issues, risks and business outcomes?

We have to find out!  We have to put this information to work for us.

But guess what?  The old tools don’t work.  Data analysis is a powerful thing but don’t expect today’s business intelligence tools to understand language and threaded conversations.  When you analyze data … a 5 is always a 5.  You don’t have to understand what a 5 is or figure out what it means.  You just have to calculate it against other numeric indicators and metrics.

Content … and all of the related conversations aren’t numeric.  You must start by understanding what it all means, which is why understanding natural language is key.  Historically, computers have failed at this.  New tools and techniques are needed because content is a whole different challenge.  A very big challenge.  Think about it … a “5” represents a value, the same value, every single time.  There is no ambiguity.  In natural language, the word “premiere” could be a noun, verb or adjective.  It could be a title of a person, an action or the first night of a theatre play.  Natural language is full of ambiguity … it is nuanced and filled with contextual references.  Subtle meaning, irony, riddles, acronyms, idioms, abbreviations and other language complexities all present unique computing challenges not found with structured data.  This is precisely why IBM chose Jeopardy! as a way to showcase the Watson breakthrough.
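You can see that ambiguity with an off-the-shelf part-of-speech tagger.  The sentences below are my own, and the exact NLTK resource names can vary slightly between versions; this is a quick illustration, not how Watson parses language.

```python
# "premiere" reads as a noun in one sentence and a verb in the other -- the kind of
# ambiguity a numeric "5" never has.  Quick illustration using NLTK's tagger;
# resource names may differ slightly depending on your NLTK version.

import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentences = [
    "The premiere of the play is tonight.",   # 'premiere' intended as a noun here
    "They premiere the play tonight.",        # 'premiere' intended as a verb here
]
for sentence in sentences:
    print(nltk.pos_tag(nltk.word_tokenize(sentence)))
```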

IBM Watson (DeepQA) is the world’s most advanced question answering machine that uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, Watson represents the future of content and data management, analytics, and systems design.  IBM Watson leverages core content analysis, along with a number of other advanced technologies, to arrive at a single, precise answer within a very short period of time.  The business applications for this technology are limitless, starting with clinical healthcare, customer care, government intelligence and beyond.

You can read some of my other blog postings on Watson (see “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!) … or better yet … if you want to know how Watson actually works, hear it live at my AIIM / info360 main stage session IBM Watson and the Impact on ECM this coming Wednesday 3/23 at 9:30 am.

BLOG UPDATE:  Here is a link to the slides used at the AIIM / info360 keynote.

Back to my crystal ball … my prediction is that natural language based computing and related analysis is the next big wave of computing and will shape the future of ECM.  Watson is an enabling breakthrough and the start of something big.  With all this new information, we’ll want to understand what is being said, and why, in all of these conversations.  Most of all, we’ll want to leverage this newfound insight for business advantage.  One compelling and obvious example is to answer age-old customer questions like “Are our customers happy with us?” “How happy?” “Are they so happy that we should try to sell them something else?” … or … “Are our customers unhappy?” “Are they so unhappy that we should offer them something to prevent churn?”  Understanding the customer trends and emerging opportunities across a large set of text based conversations (letters, calls, emails, web postings and more) is now possible.

Who wouldn’t want to understand their customers, partners, constituents and employees better?  Beyond this, Watson will be applied to industries like healthcare to help doctors more effectively diagnose diseases and this is just the beginning.  Organizations everywhere will want to unlock the insights trapped in their enterprise content and leverage all of these conversations … in ways we haven’t even thought of yet … but I’ll save that for the next time I use my ECM crystal ball.

As always … leave me your thoughts and ideas here and hope to see you Wednesday at The AIIM International Conference and Expo at info360 http://www.aiimexpo.com/.