Transparency and Trust in the Cognitive Era

Many points of view are emerging on artificial intelligence.  Elon Musk, Mark Cuban and others have been vocal about the responsible use and governance of artificial intelligence.  IBM’s version could really be called augmented intelligence, because it takes the more responsible approach of augmenting human decision making rather than simply replacing it .. as some would have it.

We all have to prepare for a future in which Artificial Intelligence (AI) plays a growing role.  The White House released a report on future directions and considerations for AI called Preparing for the Future of Artificial Intelligence.  The report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy. It also makes recommendations for specific further actions. A companion National Artificial Intelligence Research and Development Strategic Plan was released alongside it, laying out a strategic plan for Federally-funded research and development in AI.

We are in the early days of a promising new technology, and of the new era to which it is giving birth.  This technology is as radically different from the programmable systems that have been produced by the IT industry for half a century as those systems were from the tabulators that preceded them.

Commonly referred to as Artificial Intelligence, this new generation of technology and the cognitive systems it helps power will soon touch every facet of work and life – with the potential to radically transform them for the better.  This is because these systems can ingest and understand all forms of data, which is being produced at an unprecedented rate.

Cognitive systems like IBM’s Watson can reason over this data, forming hypotheses and judgments.  Most importantly, these systems are not simply programmed, they learn – from their own experiences, their interactions with humans and the outcomes of their judgments.

As with every prior world-changing technology, this technology carries major implications.  Many of the questions it raises are unanswerable today and will require time, research and open discussion to answer. It is both pragmatic and wise to establish principles to guide the evolution and adoption of AI.  IBM is establishing the following principles for the Cognitive Era:

Purpose: The purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence. The technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential.  IBM’s position is based not only on principle but also on science. Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function – all of which will and should remain within human control.

Transparency: For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:

* When and for what purposes AI is being applied in the cognitive solutions IBM develops and deploys.

* The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.

* The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience.  IBM will work with its clients to protect their data and insights, and will encourage its clients, partners and industry colleagues to adopt similar practices.

Skills: The economic and societal benefits of this new era will not be realized if the human side of the equation is not supported. This is uniquely important with cognitive technology, which augments human intelligence and expertise and works collaboratively with humans.  Therefore, the IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.

IBM believes its experience over more than a century, and its daily work with clients from every industry and sector around the world, have taught it that transparency and principles that engender trust are important for both business and society.  However, IBM also recognizes that there is much learning ahead for all of us. In that spirit, it is hoped that the publication of these tenets can spark an industry-wide – indeed, a society-wide – dialogue on the fundamental questions that must be answered in order to achieve the economic and societal potential of a cognitive future.

Industry organizations like the Cognitive Computing Consortium have been out front on these kinds of issues.

As always, leave me your thoughts and comments below.

Cognitive Computing: A Once in a Generation Race

I was a slow runner as a boy (jokes to a minimum please).  I hated playing those backyard games where I had to chase people around.  I was soooooo slow.  I found myself looking for ways to outthink the other kids I was chasing.  Eventually, I became proficient at cutting around bushes, hopping fences and avoiding backyard obstacles (like garden hoses and trash cans) so that I could easily catch other kids.

That’s where we are with cognitive computing.  This is a once in a generation innovation opportunity .. with an intra/entrepreneurship chaser.  Are you currently the chaser or the chasee?  If you are a slow runner like me, now is the time to start outthinking your competition.

It’s simple.  We are entering a new era of computing called cognitive computing. It signifies a fundamental shift in how machines interact with us, other machines and the environment. It will provide much value, but it will be highly disruptive.

The Cognitive Era will result in an entirely new model of computing that includes a range of technology innovations in analytics, natural language processing and machine learning.

“Over time, it will be possible to build cognitive technologies into many of the IT solutions and human-designed systems on earth, imbuing them with a kind of “thinking” ability. These new capabilities will enable people and organizations to accomplish things they couldn’t before–understanding more deeply how the world works, predicting the consequences of actions, and making better decisions.” Source: IBM.

A Cognitive Computing Consortium has been formed.  The Consortium is focused on advancing innovation in cognitive computing.  It develops cognitive computing definitions, conducts research, and participates in the development of industry definitions and standards. It brings together leading industry and academic thinkers to advance the understanding of the nature, importance, and potential impact of cognitive computing.  Sue Feldman, author of The Answer Machine, is the visionary behind this initiative.

The analysts also agree …

  • “The numbers in the new AI field are staggering: more than 2,300 startups (a comprehensive list can be found here) have been founded; venture capitalists are investing billions of dollars.” Source: Forrester Research.
  • “IDC expects the overall market to grow significantly in the 2015–2019 forecast period, at a CAGR of approaching 35%.” Source: IDC.
  • “Smart machines are not future fantasy; they are commercially available. According to Gartner’s analysis of external sources, more than $10 billion have already been purchased through more than 2,500 technology companies.”  Source: Gartner.
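As a purely illustrative aside (my arithmetic, not IDC’s), it’s easy to check what a growth rate like that ~35% CAGR implies over the 2015–2019 window of four compounding years:

```python
def cagr_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# Four compounding years (2015 -> 2019) at 35% per year:
# the overall market would more than triple in size.
print(round(cagr_multiple(0.35, 4), 2))  # → 3.32
```

In other words, if the analysts are even roughly right, this market more than triples in four years.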

These kinds of shifts just don’t happen that often.  The last one started before I was born.

Starting in the 1890s .. we were introduced to Tabulating Systems.  Massive growth in people and things demanded single-purpose systems that could count.  For the first time, a program like US Social Security was possible.  Counting machines (remember punch cards) were the norm during the Tabulating Systems era.

In the 1950s .. Programmable Systems innovation disrupted the Tabulating Systems Era.  Increasing complexity of business and society demanded multi-purpose systems that could apply logic to perform pre-programmed tasks.  Integrated circuits and stored memory enabled a new computing model.  For the first time, a feat like landing a man on the moon was possible through Programmable Systems.

We are at the dawn of a new era … the Cognitive Systems era.  Exploding data, connected devices and industry transformation needs require real-time judgment from systems that sense, learn and understand to help humans make better decisions for better outcomes.  With technology augmenting and extending human knowledge, it is difficult to imagine what’s not possible.  Get ready … Cognitive Systems are already here.

We can now confer on every digitized object, product, process and service a kind of thinking ability.  Cognitive systems eliminate many existing barriers preventing access to the world’s knowledge … from the world’s experts.

The fact is .. humans are pretty good at common sense, dilemmas, morals, compassion, imagination, dreaming, abstraction and generalization.  But we get tired, get distracted, make errors, need sleep and occasionally want to go on vacation.

Cognitive systems are powerful too.  They are better and more scalable at natural language, pattern identification, locating knowledge and machine learning – and they can help eliminate bias, with endless capacity.

IBM Watson is a cognitive system.  Watson is a cloud-based, open platform of expanding cognitive capabilities. With Watson, you can build cognition into digital applications, products and operations.  Watson is creating a new partnership between people and computers that enhances, scales and accelerates human expertise.

The race is on .. the Cognitive Era is here.  Educate yourself and start experimenting with cognitive computing now.  The opportunities to innovate with cognitive capabilities border on endless.  Build a new solution … or extend an existing solution with cognitive capabilities … or embed a cognitive function into a process.   In short, don’t be the slow kid in the neighborhood.

As always, leave me your comments and questions below.

Why Bigger Should Always Be Faster … and Better

Let me say upfront that I was rooting for Goliath, not David.

I was recently asked to speak about some of IBM’s intrapreneurship initiatives at the upcoming Intrapreneurship Conference in New York during October 21-23. I have been conducting my own research on corporate entrepreneurship and have gotten to know the folks behind this organization .. it should be a good event. I will be speaking on the first day and shared some thoughts on this in a recent interview.

As I reflected on the interview .. it occurred to me how tired I am of all of the rhetoric in business publications these days about it being easy and commonplace for small innovative companies to disrupt large established ones. Some articles and books even pretend there’s a formula for doing this. It’s as if these much larger and proven companies are incompetent, have lost their way and are filled with unmotivated, slow-witted human zombie idiot robots. To think that David always slays Goliath is too idealistic. It might help sell books or increase readership … but it’s not a predictor of business success or outcomes. It’s also foolish to underestimate any competitor by reducing them to a cliché … especially the ones who can squash you.  Has anyone noticed that Gillette didn’t lie down and die when Dollar Shave Club and Harry’s started subscription services to try to disrupt Gillette’s core profit source of razor blades?

My entrepreneurial and intrapreneurial career has spanned both start-ups and large corporations … I have held key roles in both types of companies, and can tell you that there are advantages and disadvantages to each.

Being fast, nimble and adaptable are essential traits when starting a new business or bringing a new innovative offering to market. These are even definitional attributes of start-ups. But no matter how nimble you are, you can’t birth a baby in one month by putting nine pregnant women on the job.

Large companies have significant and undeniable advantages over smaller (and allegedly more nimble) companies … notably resources and customers. The larger, the better. When mobilized properly, these advantages can be leveraged and rapidly applied in ways not possible by smaller, less resourced would-be competitors. Sure … large companies can be complex and have too much politics and red tape … but bring it on.  I’ll take money and customers every time.

The fact is, nothing can be as productive as working on an important initiative with a highly motivated and excited team of the most talented people you can imagine.

BOOM .. and there it is.   An Intrapreneurial Business Team. Done right, it’s like being on an all-star team … even exhilarating. You get to work with the best people or have access to subject matter expertise that start-ups can only dream of.

Where do you find Intrapreneurial Business Teams? In large companies, of course. It’s really the only way that a large matrixed organization can operate in a “start-up” like mode.

A small empowered team(s) approach is essential when siloed reporting, resource allocation and decision making model(s) are the norm. The typical large company model fundamentally disables a single person’s ability to lead all aspects of an innovation commercialization project.

The fundamental goals and skills are the same for both types/sizes of companies … but the execution model and processes needed are completely different.

Here is an overview of the similarities and differences of the two approaches:

Intrapreneurs – Similarities

•   Requires vision and strategy.

•   Needs leadership and strong execution to succeed.

•   Needs internal funding.

•   Similar “learning” process of validate, plan, build, launch and grow.

•   Opportunity driven.

Entrepreneurs – Similarities

•   Requires vision and strategy.

•   Needs leadership and strong execution to succeed.

•   Needs external funding.

•   Similar “learning” process of validate, plan, build, launch and grow.

•   Opportunity driven.

Intrapreneurs – Differences

•   Mostly fearful, including fear of failure, peer perception, embarrassment and confrontation.

•   Stakeholders’ motives are not just financial and include NIH syndrome, lack of alignment, skills, priorities or reward systems.

•   Has to navigate existing culture and processes … and may have little or no influence over this.

•   Has advantages/starting points – ability to leverage customers, assets, brand and track record.

•   Funding is NOT guaranteed once secured.

•   Depends on Team Based Leadership

Entrepreneurs – Differences

•   Mostly fearless, more likely to take risks, start over or adopt a pivot mentality.

•   Stakeholder motives are almost exclusively financial or performance related (keeping investors happy is a top priority).

•   Has to create a new culture and must build teams, culture and more.

•   Starting from a blank page without track record – must secure customers and build trust

•   Funding is guaranteed once obtained.

•   Depends on a Strong Individual Leader

By embracing these similarities and differences, large organizations can move as fast as or faster than start-ups. Importantly, start-ups should study the large companies they are taking aim at … before taking them on. Avoid the ones who are operating intrapreneurially as described above.

Lastly, if you are a publicly traded company … your organization must be committed to these principles (from the top down). Public companies have a fundamental conflict of interest in that innovation projects are usually longer term investments with unclear ROI in many cases. There is a natural tension between organic innovation investment and fiduciary shareholder budget responsibility … where innovation projects almost always lose out. Quarter to quarter financial decisions (cutbacks) have unintended downstream innovation consequences. Projects without a clear ROI, or without committed revenue, are usually the first place that cuts get made when the belt needs to be tightened. The larger the company, the more acute the problem. Watch out for this dynamic. It’s difficult to overcome without a top down commitment to change and innovation commercialization. Shareholders are always sitting in the first chair. These are the people paying for Goliath’s projects and they expect a return (and soon).

I am definitely looking forward to speaking at the Intrapreneurship Conference. I’ll talk more about Intrapreneurial Business Teams and will feature the Intrapreneurship@IBM program … a program designed to foster corporate entrepreneurship and help bring IBM’s innovation to market.

I founded the Intrapreneurship@IBM program and community as well as the associated 8 Minute Pitch program. I will cover some successes, challenges and failures as well as our future plans for these programs. I will also cover a deeper set of findings from a benchmark survey I recently conducted with over 500 innovation professionals (both non-IBM and IBM respondents).

Lastly, IBM has set out on a “moonshot” attempt at transforming healthcare. Bringing our innovation to market is part of that strategy, and Intrapreneurship is a key success factor of this initiative. I plan to cover some of our innovation in healthcare, including the innovative and world-renowned IBM Watson family of healthcare solutions.

I hope to see you in New York at the conference … and as always leave your thoughts and comments below.

Amputations or Analytics … a Call to Action for Entrepreneurs and Intrapreneurs Alike!

Doctor George Shearer practiced medicine in central Pennsylvania from 1825 to 1878 (in the Dillsburg area). He was a pillar of the community and is believed to have been an active surgeon during the Civil War. He was 61 at the time of the Gettysburg battle.

According to the National Library of Medicine, the exact number is not known, but approximately 60,000 surgeries, about three quarters of all of the operations performed during the Civil War, were amputations. Although seemingly drastic, the operation was intended to prevent deadly complications such as gangrene. There were no antibiotics during this era.

Back then, amputation was the recommended treatment for major injuries, such as damage from gunshots or cannonballs. These amputations were performed with a handsaw, like the one Doctor Shearer used (shown below). During the war, surgeons prided themselves in the speed at which they could operate, some claiming to be able to remove a leg in under one minute. Ouch! Literally!


(Photo: Doctor George Shearer’s Actual Surgical Kit)

Keep in mind that local anesthetics were not invented until the 1880s and many procedures were performed without ether or chloroform … the only real anesthetics during the era.

In 1861, this was the best standard of care for those injuries. I think we can reasonably conclude that better treatment options (and outcomes) exist today.

Recently, the Mayo Clinic published an eye-opening report entitled A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices. The report focuses on published medical practices and how effective they actually are. Things must have improved since 1861 … right?

The report examines articles published in prominent medical journals about new and established medical practices (such as treatment guidelines or therapies) over a recent 10-year period (2001–2010). In all, 2,044 medical practice articles were reviewed. The findings are fascinating, but one section of the report jumped off the page at me. Of the 363 articles that tested an existing standard of care, 40.2% reversed the original standard of care … and only 38.0% reaffirmed it. The rest were inconclusive.

In other words (in this case study), the current published medical standards of care are wrong more often than they are correct. Wow!

I do feel obligated to point out that this is a very limited slice of the overall published standards of care … but still. Is it just me … or is this mind-blowing?

I am not talking about gulping down some Jack Daniels so I don’t feel my leg being sawed off. These are researched and tested medical standards of care from within the last 13 years. And yet … over 40% of the time, they’re WRONG. In fairness, I should point out that they were right 38% of the time. No wonder the US healthcare system checks in as the 37th best worldwide despite outspending everyone else by a huge margin (per capita).

It’s 150 years later – has the standard of care improved enough? We may not be sawing legs off at the same rate these days, but maybe it’s time for a new approach. Why are other industries so much farther ahead in leveraging their data with analytics to improve quality, reduce costs and improve outcomes? What could be more important than saving life and limb?

Years of data have been piling up in electronic medical records systems. Genomics is not new anymore. Isn’t it about time we brought analytics to this set of opportunities?

Some leading organizations already are … innovative solutions and companies are popping up to meet this opportunity. Entrepreneur Scott Megill, co-founder and CEO of Coriell Life Sciences, is a great example. Coriell Life Sciences is an offshoot of the Coriell Institute for Medical Research, a 60-year-old non-profit research organization. In 2007, the Institute launched an effort to bring genomic information to bear on health management. Coriell Life Sciences was established to commercialize the results of that research. Vast amounts of genetic information about individual patients have been available for a number of years, but they have been difficult and expensive to access. “This company bridges the gap,” said Dr. Michael Christman, the Institute’s CEO.

Coriell’s approach is so innovative, they recently walked away with the coveted “IBM Entrepreneur of the Year” award.

Intrapreneurs at IBM have been busy commercializing a breakthrough innovation, IBM Watson – which originally debuted on Jeopardy! in 2011. Watson is based on a cognitive computing model.

Grabbing fewer headlines is IBM Patient Similarity Analytics, which uses traditional data-driven predictive analysis combined with new similarity algorithms and new visualization techniques to identify personalized patient intervention opportunities (that were not previously possible).

These are a couple of obvious examples for me, but in reality we are just at the beginning of leveraging big data. New analytics and visualization tools must become the “handsaw” of today. We need these tools to be at the root of today’s modern standards of care.   If Dr. Shearer were alive today, you can bet his old surgical kit would be on the shelf, having been replaced by analytics that he could bring to the point of care.

For many Entrepreneurs and Intrapreneurs, the journey is just beginning, but there is a long way to go. A 2011 McKinsey report estimated that the healthcare industry can realize as much as $300 billion in annual value through analytics. Yowza!

What are you waiting for?

As always, leave me your thoughts below.

How Do Data Loopholes Slow Down the Treatment of Breast Cancer?

Considering it’s Breast Cancer Awareness Month, the timing of this post will hopefully help a very important cause.  For reasons I won’t go into here, I’ve recently become more familiar with breast cancer than I would have otherwise.  When confronted with a new topic of interest, it’s my nature to dig in and learn everything I can about it.

The National Cancer Institute provides a wealth of information on breast cancer, but being a “software guy” … the way mammogram results combined with a clinical breast exam can detect early signs of cancer stood out to me as an important information issue.

I began to wonder where that information was captured and stored (after the test and examination) … and how it was ultimately used in follow-up care with the patient.  I didn’t expect to learn what I did.

The American College of Radiology (ACR) has established a uniform way for radiologists to describe mammogram findings.  The system is called BI-RADS and includes standardized structured codes or values.  Each BI-RADS code has a follow-up plan associated with it to help radiologists and other physicians manage a patient’s care.  These values are often used to trigger notifications of the findings or other follow-up steps.  This makes perfect sense to me except there is a (big data loophole) problem.

The BI-RADS findings (or values) are typically found in a text-based report … or determined by the examining physician.  They are then captured or manually transcribed into the EMR as free-text notes that are added to the medical record as text … unstructured data living in a structured data environment.  This is the loophole!  The information is technically there but not able to be used.

Sometimes this step can be missed completely and the results are not put into the EMR system at all (human error) … or, more likely, the BI-RADS value is not transcribed in the right place as a structured data field.  These are just two of the reasons this loophole occurs.

You may not be aware, but an Electronic Medical Records (EMR) system is generally optimized for structured data.  Most EMRs don’t leverage text based unstructured data (test results, physician notes, observations, findings, etc.) in ways that they could.  It’s a known weakness of many of today’s EMR systems.

To net this out … it’s entirely possible that cancer is detected using the BI-RADS value, but the information does not find its way into the right place in the EMR system because it’s text based and the EMR cannot recognize it.  The EMR system has no way of determining what the text-based information means, or how to use it.

The impact of this is staggering.  Let’s think about this in terms of timely follow-up on cancer detection.  A system that is not able to use the BI-RADS value could mean patients are not being followed up on properly (or at all) – even though they have been diagnosed with breast cancer.  Yes, this can actually happen if the value is buried in the text and not being used by the EMR.  The unstructured data loophole is a big deal!
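To make the loophole concrete, here is a minimal sketch of the kind of extraction a content analytics tool performs – pulling a BI-RADS category out of free text so it can live in a structured field and drive follow-up alerts. The report text, pattern and alert logic are all hypothetical simplifications on my part; real solutions use full NLP pipelines, not a single regular expression:

```python
import re

# Hypothetical free-text excerpt from a mammogram report; the BI-RADS
# category is buried in the narrative rather than a structured field.
report_text = (
    "FINDINGS: Spiculated mass in the upper outer quadrant of the left "
    "breast. IMPRESSION: Highly suggestive of malignancy. BI-RADS 5. "
    "Recommend biopsy."
)

# A minimal pattern for BI-RADS categories 0-6, tolerating common
# variants like "BI-RADS: 4", "BIRADS 4" or "BI-RADS Category 4".
BIRADS_PATTERN = re.compile(
    r"BI-?RADS\s*(?:category)?\s*[:#]?\s*([0-6])", re.IGNORECASE
)

def extract_birads(text: str):
    """Return the BI-RADS category as an int, or None if absent."""
    match = BIRADS_PATTERN.search(text)
    return int(match.group(1)) if match else None

category = extract_birads(report_text)
if category is not None and category >= 4:
    # In a real system this would write a structured field to the EMR
    # and trigger a follow-up workflow rather than print.
    print(f"ALERT: abnormal screening result (BI-RADS {category})")
```

Once the value lives in a structured field, it can drive exactly the kind of follow-up triggers the ACR intended.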

Don’t take my word for it.  University of North Carolina Health Care (UNCH) has announced new findings from mining clinical data to improve the accuracy of its 2012 Physician Quality Reporting System (PQRS) measures, achieving double digit quality improvements in the areas of mammogram, colon cancer and pneumonia screening.  They are taking steps to close data loopholes.

The new findings indicate mammogram values are present in structured data 52% of the time … and only in unstructured data 48% of the time.  Almost half the time, the value is not available alongside the rest of the structured data.  Ouch, that’s a big data loophole.

The new findings also indicate CRC screening (colon cancer) values are present in structured data just 17% of the time … and present in unstructured data 83% of the time.  As a man of a certain age, this scares me in words that can’t be published.  Another big data loophole.

Thankfully, leading organizations like UNCH are closing these data loopholes today with solutions that understand unstructured data and can “structure it” for use in EMR systems.  The following is pasted from an IBM press release dated today:

Timely Follow-up of Abnormal Cancer Screening Results:  Follow-up care for patients with abnormal tests is often delayed because the results are buried in electronic medical records.  Using IBM Content Analytics, UNCHC can extract abnormal results from cancer screening reports such as mammograms and colonoscopies and store the results as structured data.  The structured results are used to generate alerts immediately for physicians to proactively follow-up with patients that have abnormal cancer screening results.

This is an example of what IBM calls Smarter Care … where advanced analytics and cognitive computing can enable a more holistic approach to individuals’ care, and can lead to an evolution in care delivery, with the potential for more effective outcomes and lower costs.  If an ounce of prevention is worth a pound of cure, an ounce of perspective extracted from a ton of data is priceless in potential savings.  IBM Content Analytics is part of the IBM Patient Care and Insights solution suite.

I’ve written several previous blog posts on related topics that you might find interesting.

I am also speaking at the PCPCC Annual Fall Conference next Monday October 14th at 10am and will be discussing Smarter Care, UNCH’s findings and more.  Hope to see you there.

As always, leave me your feedback, questions and suggestions.

Healthcare Data is the New Oil: Delivering Smarter Care with Advanced Analytics

It has been said that “data” is the new “oil” of the 21st century.  That is certainly true in healthcare where a unique opportunity exists to leverage data – as fuel for better health outcomes.  Everything that happens with our health is documented … initially this was on paper … and more recently, in the form of electronic medical records.

Despite billions of incentive dollars being doled out by the federal government to purchase Electronic Medical Record (EMR) systems and use them in meaningful ways, there continues to be significant dissatisfaction with these systems.

In a recent Black Book Rankings survey, 80% of those surveyed claimed their EMR solution does not meet the practice’s individual needs.  This is consistent with my own observations, where many express frustration that “the information goes in … but rarely, if ever, comes out”.

If the information never comes out, or it’s too hard to access, are we really maximizing its value?

It all boils down to our ability to leverage years and years of longitudinal patient population data to surface currently hidden insights … and put those insights to work to improve care.

It’s incredibly powerful to combine years of clinical patient population data (longitudinal patient histories) with other types of data such as social and lifestyle factors to surface new trends, patterns, anomalies and deviations.  These complex medical relationships (or context) trapped in the data are the key to identifying new ways to achieve better health outcomes.  Some organizations are already empowering physicians with these new insights.

Context can be critical in a lot of situations—but in healthcare, especially, it can be the difference between preventing a hospital readmission or not. It’s not enough, for example, to know that a patient has diabetes and smokes a pack of cigarettes each week. These factors are only part of the whole picture. Does she live on her own, with family or in a care facility? Does she have a knee injury that prevents her from an active exercise program? Has she been treated for any other illnesses recently? Did she experience a recent life-changing event, such as moving homes, getting a new job or having a baby? Is she able to cook meals for herself, does she rely on someone else to cook, or does she frequent cafeterias, restaurants or take-out windows?

All of these things and more can—and should—influence a patient’s care plan, because these are the factors that help determine which treatments will be most successful for each individual. And as our population grows and ages, a greater focus on individual wellness and increasing economic pressures are forcing providers, insurers, individuals and government agencies to find new ways to optimize healthcare outcomes while controlling costs.

Today’s data-driven healthcare environment provides the raw materials (or “oil”) to fuel this kind of personalized care, and make it cost-effective as well. But it takes savvy analysis to turn that data into the kind of reports and recommendations providers, patients and communities need to make informed decisions.

The good news: IBM is uniquely positioned to help organizations and individuals achieve these goals. The IBM® Smarter Care initiative draws on a comprehensive portfolio of advanced IBM technologies and services to help generate new patient insights that can improve the quality of care; facilitate collaboration among organizations, patients, government agencies and other groups; and promote wellness through a range of public health and social programs.

IBM Patient Care and Insights is a key component of the Smarter Care initiative. By incorporating advanced analytics with care management capabilities, Patient Care and Insights can produce valuable insights and enable holistic, individualized care.

Advanced analytics: Leading the way to Smarter Care

Several leading healthcare organizations are already on the path to Smarter Care and demonstrating the real-world benefits of advanced analytics from IBM. For example, in St. Louis, Missouri, BJC HealthCare—one of the largest nonprofit healthcare systems in the United States—is using natural language processing (NLP) and content analytics capabilities from IBM to extract information from patient records that is valuable for clinical research. By tapping into unstructured data, such as text-based doctors’ notes, BJC HealthCare is surfacing important social factors, demographic information and behavioral patterns that would otherwise be hidden from researchers.

BJC HealthCare is also using IBM technologies to reduce hospital readmissions for chronic heart failure (CHF). The organization is analyzing clinical data such as ejection fraction metrics (which represent the volume of blood pumped out of the heart with each beat) to better predict which patients are most likely to be readmitted. These insights enable providers to implement tailored interventions that can avoid some readmissions.

The University of North Carolina (UNC) Health Care is using Patient Care and Insights for three new pilot projects. First, UNC is employing NLP and content analytics on free-text clinical notes to discover predictors of hospital readmission, identifying patients at risk and improving pre-admission prediction models.

UNC is also using IBM technology to empower patients. IBM NLP technology is helping to transform clinical data contained in electronic medical records (EMRs) into a format that can be presented to patients through an easy-to-use portal. Streamlined access to information will help patients make more informed decisions and encourage deeper participation in their own care.

Finally, UNC is using NLP to help generate alerts and reminders for physicians. With NLP, the organization is extracting key unstructured data from EMRs, such as abnormal cancer test results, and then storing this data in a structured form within a data warehouse. The structured data can then be used to produce alerts for prompt follow-up care.
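The extract-structure-alert pipeline described above can be sketched in miniature. This is an illustrative toy, not IBM’s NLP technology (real clinical NLP is far more sophisticated than a regular expression), but the shape is the same: pull a finding out of free text, store it as a structured row, and flag it for follow-up. The pattern and field names here are hypothetical.

```python
import re

# Hypothetical pattern for flagging abnormal findings in free-text notes.
ABNORMAL = re.compile(
    r"\b(abnormal|suspicious|positive)\b.*?\b(biopsy|mammogram|pap smear|psa)\b",
    re.IGNORECASE,
)

def extract_alerts(note_text, patient_id):
    """Turn unstructured note text into structured rows for a data warehouse."""
    alerts = []
    for sentence in note_text.split("."):
        match = ABNORMAL.search(sentence)
        if match:
            alerts.append({
                "patient_id": patient_id,
                "finding": match.group(1).lower(),
                "test": match.group(2).lower(),
                "followup_needed": True,  # drives the downstream physician alert
            })
    return alerts

rows = extract_alerts("Routine visit. Abnormal mammogram noted in prior imaging.", "p-001")
```

Once rows like these land in a warehouse, producing the alerts and reminders is ordinary structured-data work, which is exactly the point of the extraction step.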

This is just the beginning. As organizations continue to launch new projects that capitalize on advanced analytics, case management and other technologies from IBM, we expect to see some very innovative approaches to delivering Smarter Care.

Learn more about IBM Smarter Care by visiting:

For more about IBM Patient Care and Insights, visit:

As always, share your comments or questions below.

Moving Beyond One-Size-Fits-All Medicine to Data-Driven Insights with Similarity Analytics

Traditionally, doctors have been oriented toward diagnosing and treating individual organ systems.  Clinical trials and medical research have typically focused on one disease at a time.  And today’s treatment guidelines are geared toward treating a “standard” patient with a single illness.

That’s nice… But the real world doesn’t work that way.

Most of us patients do not fit these narrow profiles … especially as we grow older and things get complicated.  We (patients) might display symptoms common to a variety of illnesses, or might already be suffering from multiple diseases.  Almost 25% of Medicaid patients have at least five comorbidities.[1]

This might explain why it’s estimated that physicians deviate from the recommended guidelines 40% of the time.  It might also explain why there is a real thirst in healthcare for evidence-based insights derived from patient population data.

In other industries, data-driven insights are often the only way organizations work with their customers.  Think of retailing: Amazon analyzes your past purchases, your past clicks and other data to anticipate what you might need and present you with a variety of options, all based on data-driven insights.  You might think that by now, every industry would analyze data from the past to predict the future.

That’s not true in healthcare, where treating complex patients can be challenging and the technology to handle this level of complexity really hasn’t existed.  Treatment guidelines are sometimes vague and may not exist at all when a patient has multiple diseases or is at risk for developing them.  In other words, one-size-fits-all approaches tend to be self-limiting.

Treating patients with multiple conditions is also costly. In fact, 76% of all Medicare expenditures apply to patients with five or more chronic conditions.[2]  To reduce costs, doctors need ways to identify early intervention opportunities that address not only the primary disease but also any additional conditions that a patient might develop.

Consequently, doctors are forced to adopt ad hoc strategies, including relying on their own personal experience and knowledge.  Straying from the guidelines (where available) might not deliver the best outcomes, but it’s been the only option they have … until now.

Similarity analytics offers a way to augment traditional treatment guidelines, enabling healthcare providers to use individual patient data (including both structured and unstructured data) as well as insights from a similar patient population to enhance clinical decision-making.  With similarity analytics, healthcare providers and payers can move beyond a one-size-fits-all approach to deliver data-driven, personalized care that helps improve outcomes, increase the quality of care and reduce costs.

IBM similarity analytics capabilities, developed by IBM Research, play an essential role in IBM Patient Care and Insights … a comprehensive healthcare solution that provides a range of advanced analytics capabilities to support patient-centered care processes.  Here is a link to a video (with yours truly) from the recent launch in Las Vegas (my part starts at 8:45 mins).

How do similarity analytics capabilities work?

Let’s take an elderly patient with diabetes (a chronic disease) who presents with ankle swelling, dyspnea (difficulty breathing) and rales (a rattling sound heard during examination with a stethoscope).   Diabetes by itself is bad enough … but the care process gets more complicated (and more costly) when other comorbid conditions are present.

With these reported symptoms and observed signs, the patient might be at risk for other chronic diseases such as congestive heart failure.  But exactly how much at risk and when?

In the past, doctors have had no way of knowing this.  There are tens of thousands of possible dimensions that need to be understood, analyzed and compared to answer this question.  Think of a spreadsheet where the patient is a single row with 30,000 columns of data that need to be analyzed in an instant … and someone’s life could be at stake based on the outcome of the analysis.  In other words, doctors have been handicapped in their ability to deliver quality care by the absence of this type of analysis.

With IBM Patient Care and Insights (IPCI), a healthcare organization can collect and integrate a broad range of patient data from electronic medical records systems and other data sources (such as claims, socioeconomic and operational) … from past test results to clinical notes … into a single, longitudinal record.  Similarity analytics then enables the provider to draw on this comprehensive collection of data to compare the patient with other patients in a larger population.  With IBM Similarity Analytics (part of IPCI), the provider can analyze tens of thousands of possible comparison points to find similar patients … those patients with the most similar clinical traits at the same point in their disease progression as the patient in question.
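To make the "tens of thousands of comparison points" idea concrete, here is a minimal sketch of similarity search over patient feature vectors. This is not IBM's proprietary similarity analytics, just the basic concept: represent each patient as a numeric vector and rank the population by closeness (cosine similarity is one common choice). The patient IDs and three-dimensional vectors are invented for illustration; a real deployment would have thousands of columns per row.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(target, population, k=3):
    """Rank patients in `population` by similarity to the target vector."""
    scored = [(pid, cosine(target, vec)) for pid, vec in population.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy population: in practice each vector would hold thousands of values
# (labs, vitals, diagnoses, features derived from clinical notes).
population = {
    "p1": [0.9, 0.1, 0.8],
    "p2": [0.1, 0.9, 0.1],
    "p3": [0.5, 0.5, 0.5],
}
top = most_similar([0.85, 0.15, 0.75], population, k=2)
```

At real scale, a brute-force scan like this gives way to indexed nearest-neighbor search, but the output is the same: a ranked list of the clinically closest peers.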

Why is finding similar patients helpful?  First, providers can see what primary diagnoses and treatments have been applied to similar patients … some diagnoses and treatments might have otherwise eluded doctors.  Second, providers (and payers) can identify hidden intervention opportunities … such as an illness that the patient is at risk of developing or the risk of the patient’s current condition deteriorating.  Surfacing hidden intervention opportunities is critical in addressing the costs and complexity of healthcare … especially when treating patients with multiple diseases.

Importantly, providers can also predict potential outcomes for an individual patient based on the outcomes of similar patients. Knowing what has happened to a patient’s peer group given certain treatments can help doctors home in on the right intervention for this particular patient … before things take a turn for the worse.
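The peer-outcome idea can be sketched simply. Once similar patients have been identified (by whatever similarity measure), one naive estimate of an individual's risk is the similarity-weighted share of peers who experienced the outcome. The weighting scheme below is purely illustrative, not IBM's actual method.

```python
def peer_outcome_risk(peers):
    """Estimate risk from similar patients' outcomes.

    peers: list of (similarity_score, outcome) pairs,
    where outcome is 1 if the peer was readmitted, else 0.
    Returns a similarity-weighted readmission-risk estimate in [0, 1].
    """
    total = sum(score for score, _ in peers)
    if total == 0:
        return 0.0
    return sum(score * outcome for score, outcome in peers) / total

# Three hypothetical peers: the two closest were readmitted, one was not.
risk = peer_outcome_risk([(0.95, 1), (0.90, 1), (0.60, 0)])
```

Because the closest peers count the most, a high score here signals that patients who looked most like this one, at the same point in their disease progression, tended to be readmitted, which is precisely the early-warning signal a care team wants.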

There are many areas where similarity analytics are helpful.  Disease onset prediction, readmissions prevention, physician matching, resource utilization and management and drug treatment efficacy are just a few of the use cases.  My colleagues in IBM Research have been working on this technology for years.

By finding similar patients, pinpointing risks and helping to predict results, similarity analytics can ultimately help healthcare providers and payers improve the quality of care and deliver better outcomes, even for patients with multiple illnesses.  By working with other analytics capabilities to enable providers to apply the right interventions earlier, similarity analytics can also help pinpoint the specific risk factors for a given patient.  Those risk factors can become the basis for an individualized care plan.

In a future blog post, I’ll focus on the care management capabilities of IBM Patient Care and Insights so you can see how this solution helps put analytics insights into action.

Until then, learn more about IBM Patient Care and Insights by visiting:

Read specifically about IBM Research and Similarity Analytics by visiting:

As always, I look forward to reading your comments and questions.

[1] Projection of Chronic Illness Prevalence and Cost Inflation from RAND Health, October 2000.

[2] KE Thorpe and DH Howard, “The rise in spending among Medicare beneficiaries: the role of chronic disease prevalence and changes in treatment intensity,” Health Affairs 25:5 (2006): 378–388.