Stream-Mining Work: Connecting the Crosscurrents of Knowledge by Kelvin F. Cross

January 13, 2011 Kelvin Cross

The vice president of a leading HMO (Health Maintenance Organization) laments: “We know how to streamline work from point A to point B, but we don’t know how to routinely and rapidly streamline learning from our collective experiences, especially across our disparate functions and processes.” She is not alone.

Many see that streamlining processes has worked, but they want more. Process improvement programs (Lean, Six Sigma, reengineering and the like) have made great strides in unclogging the flows of work within core processes. But these techniques have not dislodged and channeled the flows of information and knowledge among core processes.  For instance…

  • Years ago, an engineer at Wang Laboratories commented: “We streamlined the new product development process, enabling Wang to develop the wrong products faster!” And we all know what happened to Wang.
  • A CFO complains: “In our new sales process we continue to make bad deals with bad customers.  I thought we agreed to focus on the money makers and abandon the losers.”

Resolving these, and many other, business problems depends upon ‘stream-mining work’ – dislodging knowledge and distributing it to where it can best be used among core processes.

For instance, the data from the Customer Care Process may be used to define a customer segment as unprofitable.  Based upon that information, the Business Planning Process may produce a declaration that the customer segment is undesirable.  Then it is the Customer Acquisition Process which will utilize the information to avoid that customer segment, or to re-price the product or service for that segment.

Like it or not, core processes are intertwined and interdependent.  It is these crosscurrents of knowledge flow which enable processes to do the right things well … rather than the wrong things well.

So how do we effectively dislodge knowledge and connect these crosscurrents? How do we stream-mine work?

Webs & Wikis? … Intriguing, but …  

The HMO’s VP, like others, is intrigued with the emergence of Web 2.0 technologies (Wikis, Blogs, Social Networks, Twitter, etc.), but expresses her highly cynical outlook:  “It’s a consultant’s dream … big ideas, big picture, and big money … and little definition.”  To a degree she’s right.

Web 2.0/Enterprise 2.0, online communities and the like, represent an emerging field with lots of activity, and increasing pressures to jump onboard and do something.  However, the CFO suggests that rather than worry about the semantics of “Web 2.0” and the like, let’s look at the basics.  Data needs to be gathered, information produced and delivered, and then used to produce value at various points in the business processes.

The trick is in defining the collection points, the dissemination points, and the “big picture” priorities for managing these flows of knowledge.   Now we need to stream-mine our processes and effectively connect the crosscurrents of knowledge.  So where do we begin?

Begin with the Business Processes

The process work done by most corporations over the last decade provides the foundation for managing knowledge flows.   Core processes have been defined and detailed and process flows have been documented.

Broad strategic efforts around core process definitions have enabled organizations to see the big picture of how their companies work.  In the grand scheme of things, Product Development comes before Customer Acquisition, which comes before Service Delivery or After Sales Service. Some call this view Life Cycle Management.

For some organizations the “life cycle” refers to the product, for others it refers to the customer experience.  In either case the intent is to view the entire business as a complete process.   With this “big picture” view companies can be sure they focus on improving and managing complete processes from the customers’ perspective (e.g. New Product Introduction: from product idea to product available for sale; or Order Fulfillment: the day-to-day order through delivery of the product or service).

From the HMO, an example of a core process view is depicted below in Figure 1.  It portrays seven core processes, supported by various enabling processes.

For this HMO, the “big picture” core process flow is clear … especially if we think of what it takes to start a business from scratch.  It begins with a business plan, then development of the product/service.

Next the infrastructure is prepared to deliver the service, and then customers are acquired. Data must then be entered and maintained, in order to effectively deliver member wellness programs and to provide patient health care.

As time goes on this flow continuously repeats itself, as new plans are developed, services introduced, etc.

Experience begets knowledge and renewal.  It is the knowledge transfer among the core processes, vaguely depicted by the lines and arrows above the seven blocks, which enables that renewal.

Figure 1: One HMO’s Core Processes

Focus on Knowledge Flows

The first step for many companies has been to apply various technologies to unclog these linkages and to enable access to information and knowledge.   The various incarnations of technology-based “knowledge management” solutions (e.g. data warehousing, data mining, wikis, blogs, social networking, etc.) have made more and more critical information available throughout the business.

Unfortunately, while these technologies have been liberating, in many cases they have become a catalyst for more information chaos and overload.  The unclogged and freeform flows of knowledge now need to be channeled.  As suggested by the top lines and arrows on the HMO’s core process diagram, the all-encompassing flows (where everything is related to everything) are not particularly helpful.  Years ago an Andersen Consulting knowledge manager was quoted as saying, “We’ve got so much knowledge in our Knowledge Xchange repository that our consultants can no longer make sense of it.  For many of them it has become data.”(Davenport and Prusak 1998)  Now in 2010, data overload has only gotten worse.

It is the individual point-to-point flows where the value of knowledge can be examined.  For example, what specific information is created in the Business Planning Process which would be of great benefit well downstream in the Service Delivery Process – and vice-versa?  This one example suggests there are two flows, a feed forward and a feedback, for every pairing of core processes.

So how can we sort through all the possible point-to-point flows for all the core processes?

The Knowledge Flow Grid

The core process framework (from Figure 1) provides the means to evaluate the point-to-point knowledge flows at a high level.  It is at this high level where the major benefits of connecting the crosscurrents of knowledge flows will emerge.  Much like the process work of the past decade has broken down the functional silos, a focus on knowledge flow can break down the process silos.

At the core process level, each interconnection deserves a look.  At first it sounds unwieldy to list each core process and determine how knowledge might flow to and from each. However, the Knowledge Flow Grid (shown in Figure 2) provides the means to perform such an evaluation.

In this grid, the HMO’s core processes are depicted in order from left to right.  Much like a mileage table on a map, the grid is intended to show how one place relates to another. However, in this case we look at two distinct connections: (1) the “Feed Forward” of information downstream, and (2) the “Feed Back” of information upstream.  Hence, the blocks on the top represent the Feed Forward relationships, while the lower blocks reflect the Feed Back linkages.

In this HMO example, where there are seven core processes, the number of interfaces requiring evaluation is 42; 21 for all the possible “feed forward” relationships, and 21 for all the possible “feedback” relationships.
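The interface count follows directly from the combinatorics of ordered pairs: seven processes yield 7 × 6 = 42 point-to-point links, split evenly between downstream and upstream flows. A quick sketch (process names paraphrased from Figure 1; treat them as placeholders):

```python
from itertools import permutations

# The HMO's seven core processes, in upstream-to-downstream order.
processes = [
    "Business Planning", "Develop Products/Services", "Build Delivery System",
    "Acquire Customers", "Maintain Member Data", "Deliver Wellness Programs",
    "Provide Patient Care",
]

# Every ordered (source, destination) pair is one knowledge-flow interface.
interfaces = list(permutations(processes, 2))

# A link is "feed forward" when the source sits upstream of the destination,
# and "feed back" when it sits downstream.
feed_forward = [(a, b) for a, b in interfaces
                if processes.index(a) < processes.index(b)]
feed_back = [(a, b) for a, b in interfaces
             if processes.index(a) > processes.index(b)]

print(len(interfaces))    # 7 * 6 = 42 interfaces requiring evaluation
print(len(feed_forward))  # 21 feed-forward relationships
print(len(feed_back))     # 21 feedback relationships
```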

Figure 2: The Knowledge Flow Grid

The intent should be to understand and manage the potential opportunities within every single box on the grid.  Let’s look at one example:

Business Planning > Acquire Customers (Employers) & Members

It is typically the planning process which defines and selects specific customer segments as targets.  Once defined, and as these targets are continuously refined, the acquisition process can then build in specific points where selected customer segments are whisked through the process, while the non-target customer segments are encouraged to go elsewhere.

So one task, for executives in the context of business planning, is related to customer segmentation. Any input/knowledge on that subject from throughout the business should be welcomed. The trick is to find a mechanism to capture the hidden knowledge as ‘feedback’ from the other core processes, so that it can be funneled through business planning. Then it can be synthesized and ‘fed forward’ to customer acquisition as definitions of, and objectives for, various customer segments.
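The funnel described above — capture feedback from downstream processes, synthesize it in business planning, feed the result forward to customer acquisition — can be sketched in miniature. All feedback items, segment names, and the decision rule below are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical feedback items captured from downstream core processes.
# Each records its source process, the customer segment it concerns, and a note.
feedback = [
    {"source": "Customer Care", "segment": "small employers", "note": "high claim disputes"},
    {"source": "Service Delivery", "segment": "small employers", "note": "costly onboarding"},
    {"source": "Customer Care", "segment": "large employers", "note": "renewals smooth"},
]

# Business Planning funnels the feedback by customer segment...
by_segment = defaultdict(list)
for item in feedback:
    by_segment[item["segment"]].append(item["note"])

# ...then synthesizes it and "feeds forward" an objective per segment to
# Customer Acquisition. (The threshold rule here is purely illustrative.)
targets = {seg: ("review pricing" if len(notes) > 1 else "pursue")
           for seg, notes in by_segment.items()}
print(targets)
```

The point is not the toy rule but the plumbing: feedback is gathered where work happens, synthesized at one point, and redistributed as actionable definitions.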

The intent of the Knowledge Flow Grid is to force an evaluation of each one-to-one link among the core processes.  This evaluation means first defining the key information needs and content (whether available today or not) for every intersection on the grid.

Next an evaluation of priorities among the blocks can help narrow and focus any knowledge flow development work.

Then the concept of a knowledge flow needs to be converted into a pragmatic design for implementation and day-to-day use.

From Concept to Reality

The Knowledge Grid provides a way to think about the sources and destinations for critical information among core processes.  A nice theory, but will data be captured in the first place? Does it exist, or will new processes and procedures be needed for data capture?

Building in more process steps with record keeping & reporting requirements, data collection points, measurement points, and other vestiges of bureaucracy may not be the best way.

Perhaps a couple of simple blogs and wiki-like forums can provide the means for capturing and surfacing key learnings. However, this solution for data capture also adds effort to the workday. Plus, mandated status reports and the like typically provide a distorted view of reality.

The most likely sustainable solution is to find data sources that exist today (recorded phone calls from the call center, wikis/forums and the like, and emails).  Here lies an enormous amount of relatively unstructured data, not distorted by overt demands to produce information, but rather these data are produced in the normal course of work.

The key question is: “How can we best capture the data, convert it to information, and channel it most appropriately?”

The Knowledge Grid provides a vehicle for process stream-mining: deciding how best to tag data (such as an email, or a voice-recognized customer care phone call) based on (1) the overt source, destination, and parties impacted, plus (2) the less overt, but equally important, destination parties that should be interested based on content.  The key is in defining where the information is most vital and where it truly becomes valuable at a specific point in the process.

The following steps provide the means to identify, summarize, and channel information to appropriate destinations:

Step 1:  Define core processes – to ensure that, in the big scheme of things, the important process silos are tapped as well as permeated by the flows of information and knowledge;

Step 2: Identify/allocate employees to a core process – for example, senior executives would for the most part be considered part of the “Business Planning & Mgmt Process,” whereas Provider Relations people (responsible for getting doctors into the network) are part of the “Build Delivery System & Back Office.”

Step 3: Identify the information and knowledge content that is, and should be, flowing between each core process (feed forward and feedback) and the key sources of data (e.g. email).

Step 4:  Utilize automated analytical tools (such as MessageMind’s C-Mail, or voice recognition from Nuance, NICE, or CallMiner) to automatically tag the data based on source, destination, content, timing, etc., and to synthesize/summarize and provide new and vital information that heretofore has been an untapped resource.
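The commercial tools named in Step 4 use far richer analytics, but the core tagging idea can be illustrated with a toy content-based router. The keyword lists and process names below are invented for illustration, not drawn from any real product:

```python
# Toy content-based tagger: route a message to every core process whose
# (invented) keyword list it matches. Real tools use NLP, not keyword sets;
# this only illustrates the tagging step.
KEYWORDS = {
    "Business Planning": {"segment", "forecast", "strategy"},
    "Acquire Customers": {"quote", "enrollment", "prospect"},
    "Provide Patient Care": {"claim", "provider", "appointment"},
}

def tag_message(text: str) -> list[str]:
    """Return the core processes whose keywords appear in the message."""
    words = set(text.lower().split())
    return sorted(p for p, kws in KEYWORDS.items() if words & kws)

# An email mentioning a quote and a segment gets tagged for both the
# acquisition and planning processes.
print(tag_message("Prospect asked for a quote on that segment"))
# → ['Acquire Customers', 'Business Planning']
```

A message can thus surface in more than one cell of the Knowledge Flow Grid — its overt destination plus any process that should be interested based on content.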

Stream-Mining can produce a wide variety of insights in a number of areas, such as:

Strategic alignment gap analyses

–    Determine if effort/communications are being spent on strategic priorities

–    Determine areas of excessive energy (emerging crisis or simply misguided)

Continuous improvement

–    Identify areas of omission – where there should be activity, and there is none

–    Identify areas of commission – areas of high activity, where there should be low activity

–    Identify bottlenecks to info flow

–    Identify specific problems, and likely candidates to provide solutions

The undercurrents of knowledge flow can resemble either a vicious crosswind or a strong tailwind affecting your organization. The former can tear at your people and resources, pushing them off course in unexpected and potentially dangerous directions. The latter will get you to your intended destination with great speed and efficiency.

© 2010 by Kelvin F. Cross

 

Kelvin Cross is the president of Cross-Rhodes Associates, LLC. (formerly Corporate Renaissance, Inc.) – a 15-year-old consulting firm that helps companies rethink and optimize how work gets done.  He developed the RootCallsSM Solutions service as a cost-effective means to uncover and eliminate unnecessary customer contacts.  Kelvin is the author of four books, including the recent “Quick Hits: 10 Key Surgical Strike Actions for Improving Business Process Performance.”

Contact him at kelvin@cross-rhodes.com.
