
Archive for the ‘Analytics Culture’ Category

Walter Frick has written a great blog post about the issue of human and algorithm collaboration. While it is a great philosophical question, I am interested in it from a systems-thinking perspective.

The central question seems to be: human as a hero (humans augment algorithm-based judgments) or human as a hazard (humans merely provide inputs that make the algorithms better at automated decisions) in the decision-making context?

With all the promise of big data in business, I believe this is a key design choice for the ‘smart’ systems being deployed to improve business decisions – what is the next best product to offer a customer, which patients should be targeted for a medication-adherence intervention, or where to locate manufacturing capacity for an efficient supply chain. The human-as-a-hero vs. human-as-a-hazard choice will define whether these smart systems are accepted, adopted, and ultimately drive value.

Based on my experience, I do not believe either model is universally superior.

I have seen consumer goods companies deploy suggested-ordering systems where the algorithm was accurate about the expected order 90% of the time, and the salesperson could override the system when it did not match his or her experience or when they had unique insight (for example, knowing about a church event over the coming weekend that will increase consumption, so the suggested order needs to be bumped up). And I have seen retailers that do not want any store-replenishment decision made by humans once the input parameters are set; they want the algorithms to take over, and it has worked for them.
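To make the override pattern concrete, here is a minimal sketch of a "human as a hero" suggested-ordering flow in Python. The function names, forecast rule, and numbers are all invented for illustration, not taken from any real deployment: the algorithm proposes an order, and the salesperson gets the last word.

```python
# Illustrative sketch of a suggested-ordering flow with a human override.
# All names and numbers here are hypothetical.

def suggested_order(avg_weekly_sales, weeks_of_cover=1.5):
    """Naive algorithmic suggestion: cover expected demand for the period."""
    return round(avg_weekly_sales * weeks_of_cover)

def final_order(suggestion, override=None):
    """The salesperson may override the suggestion; otherwise it stands."""
    return override if override is not None else suggestion

baseline = suggested_order(avg_weekly_sales=100)  # algorithm proposes 150
print(final_order(baseline))                      # no override: 150
print(final_order(baseline, override=200))        # church-event insight: 200
```

The design choice is in `final_order`: a "human as a hazard" system would simply drop the `override` parameter.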

There are three factors that influence the design choice.

  1. Talent in the organization
    The quality of the decision maker defines how much leeway you provide. I have seen clients in emerging markets, with double-digit growth rates and a perennial shortage of good salespeople, trying to use these automated ‘smart’ systems to standardize decision-making. At the other end of the spectrum are startups in Silicon Valley designing systems to aid physicians in making evidence-based decisions.
  2. Organization’s decision-making culture
    Organizations have an inherent decision-making culture that will influence whether the human-as-a-hero model will work. At a high level, there are organizations with a command-and-control structure of decision-making (e.g., a technology company with a central team of pricing experts who determine pricing for all business customers across all lines of business) and organizations with a very decentralized decision-making culture (another technology company where each division or business unit sets its own pricing and discounts).
  3. Type of decisions to be improved
    Automated decision-making systems are efficient when the goal is to improve the average across all decisions and the decisions follow a normal distribution – that is, there are few extreme or one-of-a-kind decisions. Many high-frequency operational decisions (think call centers or factory floors) follow this behavior. However, when most decisions are one-of-a-kind, ‘extreme’ events, the human-as-a-hero model becomes more appropriate. Many infrequent strategic decisions fall into this category.
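One rough way to operationalize the third factor is to measure how ‘extreme’ each decision is relative to history and route only the outliers to a human. The sketch below uses a z-score with an assumed cutoff of 3; the cutoff, data, and function names are illustrative assumptions, not a prescription.

```python
# Illustrative routing rule: automate typical decisions, escalate outliers.
# The z-score cutoff of 3 is an assumed, hypothetical threshold.
from statistics import mean, stdev

def route_decision(value, history, z_cutoff=3.0):
    """Automate near-average decisions; escalate extreme ones to a human."""
    mu, sigma = mean(history), stdev(history)
    z = abs(value - mu) / sigma
    return "automate" if z <= z_cutoff else "escalate_to_human"

history = [98, 101, 99, 103, 100, 97, 102, 100]  # routine operational values
print(route_decision(100, history))  # typical case: "automate"
print(route_decision(250, history))  # one-of-a-kind case: "escalate_to_human"
```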

Human as a hero vs. human as a hazard is an explicit design choice, and organizations that make the right choice will have fewer false starts and drive more value for themselves and their customers.



Ken Rona tweeted earlier today about a subject that struck a chord. He was writing about the difference between a business analyst and a data analyst (or data scientist, as they are increasingly called). I wanted to expand on the idea, as it is important to distinguish between the two roles. I have seen a lot of confusion around the definitions: some executives think the roles are one and the same, while others believe they are totally different. The truth is probably somewhere in between. Here is my attempt at comparing them along the key skill-set dimensions:

| Skill dimension | Business Analyst | Data Scientist |
| --- | --- | --- |
| Business domain knowledge | Expertise in industry domain | Very good working knowledge of industry domain |
| Data handling skills | Ability to handle multiple CSV files and import them into Access or Excel for analysis | Ability to write SQL queries to extract data from databases and join multiple datasets together |
| Analytics skills | Knowledge of simple business statistics (statistical significance, sampling); able to use statistics functions in Excel | Proficiency in advanced mathematics/statistics (regression, optimization, cluster analysis, etc.) |
| Insight presentation skills | Storytelling skills using PowerPoint | Storytelling skills using information visualization and PowerPoint |
| Problem solving skills | Proficiency in a hypothesis-driven approach is good to have | Proficiency in a hypothesis-driven approach is a must-have |
| Tools | Access, Excel, PowerPoint, etc. | MS SQL, Oracle, Hadoop, SAS, SPSS, Excel, R, Tableau, etc. |
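The data-handling row is where the gap is most concrete: joining multiple datasets with SQL rather than wrangling CSVs in Excel. Here is a minimal, self-contained illustration of that skill using Python's built-in sqlite3; the tables, columns, and values are invented for the example.

```python
# Hypothetical example of the SQL-join skill: combine two datasets and
# aggregate, entirely in an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Join the two datasets and compute total order amount per customer.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Acme', 350.0), ('Globex', 75.0)]
```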

I think this is a good starting point, but it can be refined further. Feedback and comments are welcome.


Think about the large, successful organizations known for harnessing information for competitive advantage – P&G, Goldman Sachs, Capital One, Harrah’s, Progressive Insurance – and you will find one thing in common: their C-level executives drive data-driven decision-making from the top down. And the more organizations I see, the more convinced I get that this is one of the most important factors for a company that wants to ‘compete on analytics’.

Here is my hypothesis for why this is so:

There is a fundamental Catch-22 in most large companies: organizations do not have consistently good-quality data (mainly due to process issues during intake), and unless the data is used to make real business decisions, it is hard to improve its quality.

This Catch-22 can only be resolved by a very senior executive (read: C-level) who commits to making decisions and measuring performance based on analysis done with imperfect data (data that is nonetheless good enough for many types of decisions and relative measurements). Once middle management understands how the data is being used, it spurs process changes that fix the quality issues, which in turn increases the accuracy and reliability of the analysis. This virtuous cycle is key for large companies that ‘compete on analytics’.

In contrast, middle management never wants to be in the position of justifying decisions knowingly made with imperfect data. It is easier to defend a subjective gut call than an objective decision based on data with known quality issues.

In summary: the culture of analytics is a top-down phenomenon.

What do you think? Do you agree with this observation?
