Archive for the ‘Industry Perspective’ Category

Walter Frick has written a great blog post about the issue of human-algorithm collaboration in decision making.  While it is a fascinating philosophical question, I am interested in it from a systems thinking perspective.

The central question seems to be: in the decision-making context, is the human a hero (humans augment algorithm-based judgment) or a hazard (humans merely provide inputs that make the algorithms behind automated decisions better)?

With all the promise of big data in business, I believe this is a key design choice for the ‘smart’ systems being deployed to improve business decisions: what next best product to offer a customer, which patients to target for a medication adherence intervention, or where to locate manufacturing capacity for an efficient supply chain. The human-as-hero vs. human-as-hazard choice will determine whether these smart systems are accepted, adopted, and ultimately drive value.

Based on my experience, I do not believe either model is inherently superior.

I have seen consumer goods companies deploy suggested-ordering systems where the algorithm was accurate about 90% of the time, and the salesperson could override the system when the suggestion did not match his or her experience or when he or she had unique insight (say, knowing about a church event the coming weekend that will increase consumption, so the suggested order needs to be bumped up). And I have seen retailers that do not want any store replenishment decision made by a human once the input parameters are set; they want the algorithms to take over, and it has worked for them.
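
As a rough illustration, here is a minimal sketch of what such an override loop might look like; the `SuggestedOrder` class, its field names, and the quantities are hypothetical, not taken from any actual system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestedOrder:
    sku: str
    algorithm_qty: int                   # quantity proposed by the forecasting model
    override_qty: Optional[int] = None   # set only when the salesperson intervenes
    override_reason: str = ""

    @property
    def final_qty(self) -> int:
        # Human-as-hero: the human decision, when present, wins;
        # otherwise the algorithm's suggestion stands.
        return self.override_qty if self.override_qty is not None else self.algorithm_qty

# The salesperson bumps the order up based on local knowledge
order = SuggestedOrder(sku="SKU-123", algorithm_qty=40)
order.override_qty = 55
order.override_reason = "church event this weekend; expect higher consumption"
print(order.final_qty)  # prints 55
```

Capturing the override reason matters: over time, those logged overrides become labeled examples that can be fed back to improve the algorithm itself.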

There are three factors that influence the design choice.

  1. Talent in the organization
    The quality of the decision maker determines how much leeway you provide.  I have seen clients in emerging markets, with double-digit growth rates and a perennial shortage of good salespeople, try to use automated ‘smart’ systems to standardize decision making. At the other end of the spectrum are Silicon Valley startups designing systems to aid physicians in making evidence-based decisions.
  2. Organization’s decision making culture
    Organizations have an inherent decision-making culture that will influence whether the human-as-hero model will work. At a high level, some organizations have a command-and-control structure of decision making (e.g., a technology company with a central team of pricing experts who determine pricing for all business customers across all lines of business), while others have a very decentralized decision-making culture (another technology company lets each division or business unit set its own pricing and discounts).
  3. Type of decisions to be improved
    Automated decision-making systems are efficient when the goal is to improve the average across all decisions and the range of decisions follows a normal distribution; in other words, there are few extreme or one-of-a-kind decisions. Many high-frequency operational decisions (think call centers or factory floors) follow this pattern. However, when most decisions are one-of-a-kind, ‘extreme’ events, the human-as-hero model becomes more appropriate. Many infrequent strategic decisions fall into this category. A sketch of how such routing might work follows this list.
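
One way to operationalize the third factor is to route each decision by how typical it is: let the algorithm handle decisions near the center of the distribution and escalate outliers to a human. This is a minimal sketch under those assumptions; the `route_decision` function and the z-score threshold are illustrative, not prescriptive:

```python
import statistics

def route_decision(value: float, history: list[float], z_threshold: float = 2.5) -> str:
    """Send routine decisions to the algorithm and extreme ones to a human."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(value - mean) / stdev if stdev > 0 else 0.0
    return "human" if z > z_threshold else "algorithm"

history = [100, 104, 98, 101, 97, 103, 99, 102]
print(route_decision(101, history))  # "algorithm": a routine, near-average case
print(route_decision(180, history))  # "human": an extreme, one-of-a-kind case
```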

Human as a hero vs. human as a hazard is an explicit design choice, and organizations that make the right choice will have fewer false starts and drive more value for themselves and their customers.



Theme 5: Good data visualization leads to smarter decisions.

Dr. John Snow’s 1854 map of the Broad Street cholera outbreak has arguably saved more lives than all predictive analytics initiatives combined. As Miles Dowsett has written:

In 1854 London was the biggest city the world had ever seen, and was literally drowning in its own filth. There was no sewage system in place and those that lived in the capital literally threw their waste and cesspits out into the crowded streets and river Thames. As a result, London was a disgustingly smelly place to be and was periodically engulfed by disease – most notably Cholera.

At that time, however, it was believed that the smell of London was the root cause of diseases such as cholera; this came to be known as the miasma theory of disease. Dr. Snow was skeptical of the miasma theory, and in his 1849 essay On the Mode of Communication of Cholera he actually introduced the waterborne theory of cholera. He presented his findings to London’s health authorities but could not convince anybody of its merit, and was largely ignored.

In August 1854 there was another outbreak of cholera in the Broad Street neighborhood. Over a 10-day period, cholera decimated the population; 10% of the neighborhood died.

Since Dr. Snow was convinced that cholera was waterborne, it did not take him long to identify the infected water pump at 40 Broad Street as the cause of the epidemic.  Instead of relying solely on numbers, he produced a visual of the Broad Street neighborhood in which he marked the water pump at 40 Broad Street and designated each cholera death with a bar. The concentration of deaths around the pump, trailing off the further one went from it, was so convincing that the authorities finally accepted the theory that cholera was waterborne. They removed the handle of the pump, which ultimately stopped the spread of the disease in 1854. It also paved the way for sewage and sanitation systems to be put in place, one of the greatest engineering feats undertaken in London’s history, changing the way urban systems exist and continue to grow to this day.

As the story demonstrates, good data visualization leads to smarter decisions.  Far too often, analysts focus all their effort on data collection and modeling and pay very little attention to presenting the results in a way that decision makers can relate to.  Every successful predictive analytics project leads to a change from the status quo way of doing things, and it is never easy to convince decision makers that the new way is better.  Most decision makers need more than an R-squared or a mean absolute percentage error metric to be convinced of the efficacy of a solution based on predictive analysis and to feel comfortable approving the change. This is where data visualization skills become important.
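
To make that concrete, here is a minimal matplotlib sketch of a Snow-style dot map; the coordinates and death locations are fabricated for illustration and are not Snow’s actual data:

```python
import random
import matplotlib.pyplot as plt

random.seed(42)
pump = (0.0, 0.0)  # stand-in for the Broad Street pump
# Fabricated deaths: denser near the pump, trailing off with distance
deaths = [(random.gauss(pump[0], 0.3), random.gauss(pump[1], 0.3)) for _ in range(120)]

fig, ax = plt.subplots()
ax.scatter([x for x, _ in deaths], [y for _, y in deaths],
           s=12, c="black", alpha=0.6, label="cholera deaths")
ax.scatter(*pump, s=200, c="red", marker="^", label="water pump")
ax.set_title("Deaths cluster around the pump (illustrative data)")
ax.legend()
plt.show()
```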

James Taylor wrote that visualization is more relevant in the context of strategic decisions and not so much for operational decisions:

Decision making at the operational level is too high-speed, too automated for much in the way of visualization to be useful at the moment of decision.

While I do agree with him, I know of instances where visualization has been used creatively in the very operational environment of call centers. Speech analytics and emotion detection are growing areas in call center technology: depending on the caller’s choice of words, the speech analytics system detects the caller’s emotion level and displays an appropriate emoticon on the agent’s desktop right when the call is transferred. Even without understanding the complexity of the caller’s issue, the agent immediately gets guidance about the caller’s emotional state.
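
A toy sketch of the last step of that pipeline follows; the emotion score and the thresholds are hypothetical, since real speech analytics products expose their own proprietary APIs:

```python
def emoticon_for_caller(anger_score: float) -> str:
    """Map a detected emotion score (0 = calm, 1 = furious) to a desktop emoticon."""
    if anger_score >= 0.7:
        return "😠"  # escalated caller: tread carefully
    if anger_score >= 0.4:
        return "😐"  # neutral or mildly frustrated
    return "🙂"      # calm caller

# Shown on the agent's desktop as the call is transferred
print(emoticon_for_caller(0.82))  # 😠
```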

As consultants we are always trying to create that ‘money-visual’ in our presentations: the slide that brings all the analyses together and unambiguously calls out the need to change or drive action. I feel every predictive analytics project needs one such ‘money-visual’.

Do you have any examples of visuals which you have used to convince people to drive change?

Here are the links to previous postings: part 1, part 2, part 3 and part 4

Cross-posted on TheInformationAdvantage.com


Theme 4: Statistical techniques and tools are not likely to provide competitive advantage

I read this interesting post from Sijin describing his journey to master a video game (emphasis added by me):

All this kind of reminded me of my experiences with finding the perfect weapon while playing Call of Duty 4 over the past year. I spent 3 hours a day almost every day for the past one year playing this game, reaching the max prestige level (the “elite” club) in the multi-player version. I became really good at it… no matter what weapon I was using. But I remember when I started out and I really sucked, I became obsessed with finding the perfect weapon with the perfect set of perks and add-ons. I used to wander the forums asking people about which weapons and perks to use on which map and what the best tips were etc. Thinking that having the perfect weapon would make me a good player. In the end, the only thing that mattered was all the hours I put in to learn all the maps, routes, tricks and my ability (I like to think). The surprising thing was that once I mastered the game, it didn’t really matter what weapon I chose, I was able to adapt any weapon and do a decent job.

This story captures the essence of the theme of this post.

The popular statistical techniques frequently used in business analytics, like linear regression and logistic regression, are more than half a century old. System dynamics was developed in the 1950s. Even neural networks have been around for more than 40 years. SAS was founded in 1976, and the open source statistical tool R was developed in 1993. The point is that popular analytical techniques and tools have been around for some time, and their benefits and limitations are fairly well understood.
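
The commodity nature of these techniques is easy to demonstrate: a logistic regression that once required specialized software is now a few lines in any modern open-source stack. A minimal sketch using scikit-learn and one of its bundled datasets (any comparable library would do):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A textbook technique, fully commoditized in open-source tooling
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```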

An unambiguous definition of the business problem that will impact a decision, a clear analysis path leading to output, and a thorough understanding of the various internal and third-party datasets are all more important aspects of a predictive analytics solution than the choice of tool, not to mention a clear linkage between the problem, the resulting decision, and measurable business value.  The challenge is in finding an expert user who understands the pros and cons and adapts the tools and techniques to the problem at hand. Companies will be better served by investing in the right analytical expertise rather than worrying about tools and techniques; the right analytical team can certainly be a source of competitive advantage.

While this theme is fairly well understood within the analytics practitioner community, the same cannot be said of business users and executives. It is still easy to find senior executives who believe that ‘cutting edge’ techniques like neural networks should be used to solve their business problems, or that predictive analytics tools are a key differentiator when selecting analytics vendors.  The analytics community needs to do a better job of educating business users and senior executives on this point.

You can read the previous installments of the series here (part 1, part 2, and part 3).

Cross-posted on TheInformationAdvantage.com
