Morningstar has a story about the rise of passive investing, which is an interesting case study in the human-as-a-hero vs. human-as-a-hazard discussion. They conclude:
… the growing use of passive investment vehicles reflects the times in which we live. With algorithms helping determine which online ads we’re exposed to each day and new metrics being invented all the time to aid in arenas as diverse as business, politics, and sports, perhaps it should come as no surprise that more people are willing to rely on an inexpensive, systematic, formula-based approach to investing rather than on the judgment and decision-making ability of a living, breathing fund manager.
The article also discusses the obvious next step: that the simple index-tracking form of passive investing will be supplemented with smarter algorithms for those investment decisions.
More recently, increasing attention has been paid to alternative indexing approaches–so-called smart beta–that are built around specific factors (stock price/earnings ratios, company performance, share-price volatility, to name a few). Some consider this a hybrid of indexing and active management styles.
This seems to be a case where the human-as-a-hazard point of view is winning, and the design choice has been to capture human insights up front in algorithms (as in smart beta approaches). Going back to the factors that influence the design choice, the one that matters most here is the type of decision being influenced. Improving investment returns is not about managing a few extreme decisions but about improving the average across many small decisions with well-established rules for distinguishing good decisions from bad ones.
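To make the "capture human insights up front" idea concrete, here is a toy sketch of a factor-based weighting in the spirit of smart beta. All tickers, ratios, and the rank-based scoring rule are made up for illustration; real smart-beta products use far more rigorous factor definitions.

```python
# Toy "smart beta" weighting: favor stocks with low price/earnings
# and low share-price volatility (two of the factors the article names).
# All data and the scoring rule are illustrative, not a real methodology.

stocks = {
    # ticker: (price/earnings ratio, annualized share-price volatility)
    "AAA": (12.0, 0.18),
    "BBB": (25.0, 0.35),
    "CCC": (9.0, 0.22),
    "DDD": (18.0, 0.15),
}

def ranks(values):
    # Rank 1 = most attractive (lowest value); ties ignored for simplicity.
    order = sorted(values, key=values.get)
    return {ticker: i + 1 for i, ticker in enumerate(order)}

pe_rank = ranks({t: pe for t, (pe, vol) in stocks.items()})
vol_rank = ranks({t: vol for t, (pe, vol) in stocks.items()})

# Lower combined rank -> higher score -> larger portfolio weight.
score = {t: 1.0 / (pe_rank[t] + vol_rank[t]) for t in stocks}
total = sum(score.values())
weights = {t: s / total for t, s in score.items()}  # weights sum to 1
```

The point of the sketch is that the human insight ("cheap, calm stocks tend to do better") is encoded once, in the scoring rule, rather than exercised trade by trade by a fund manager.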
Walter Frick has written a great blog post about human and algorithm collaboration. While it is a great philosophical question, I am interested in it from a systems-thinking perspective.
The central question seems to be: human as a hero (humans augment algorithm-based judgment) or human as a hazard (humans provide inputs that make algorithms better at automated decisions) in the decision-making context.
With all the promise of big data in business, I believe this is a key design choice for the ‘smart’ systems being deployed to improve business decisions: what next-best product a customer should be offered, which patients should be targeted for a medication-adherence intervention, or where to locate manufacturing capacity for an efficient supply chain. The human-as-a-hero vs. human-as-a-hazard choice will define whether these smart systems are accepted, adopted, and ultimately drive value.
Based on my experience, I do not believe that either approach is inherently superior.
I have seen consumer goods companies deploy suggested-ordering systems where the algorithm was accurate about the expected order 90% of the time, and the salesperson was able to override the system when it did not match his or her experience or when they had a unique insight (for example, knowing about a church event the coming weekend that will increase consumption, so the suggested order needs to be bumped up). And I have seen retailers that do not want any store-replenishment decision made by humans once the input parameters are set; they want the algorithms to take over, and it has worked for them.
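The suggested-ordering flow with a human override can be sketched as follows. This is a minimal illustration, not any vendor's actual system: the forecast function, store/SKU identifiers, and the override-logging behavior are all hypothetical.

```python
# Sketch of "human as a hero": the algorithm suggests, the human may override.
# All names and the forecasting logic are illustrative placeholders.

def algorithm_suggestion(history):
    # Stand-in for the real forecasting model: average of recent demand.
    return round(sum(history) / len(history))

def final_order(store_id, sku, history, override=None, reason=None):
    """Return the order quantity, preferring the salesperson's override
    when one is supplied along with a reason (e.g., a local event)."""
    suggested = algorithm_suggestion(history)
    if override is not None and reason:
        # Record the override so the model can potentially learn from it later.
        print(f"override: {store_id}/{sku} {suggested} -> {override} ({reason})")
        return override
    return suggested

# Algorithm-only path:
qty = final_order("store-7", "sku-42", [10, 12, 11])  # -> 11

# Salesperson bumps the order for a known local event:
qty = final_order("store-7", "sku-42", [10, 12, 11],
                  override=18, reason="church event this weekend")
```

Requiring a reason alongside the override is one way to keep the human input accountable and to feed those insights back into the algorithm, which is the "hazard" model's strength applied inside a "hero" design.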
There are three factors that influence the design choice.
- Talent in the organization
The quality of the decision maker defines how much leeway you provide. I have seen clients in emerging markets, with double-digit growth rates and a perennial shortage of good salespeople, try to use these automated ‘smart’ systems to standardize decision-making. At the other end of the spectrum are Silicon Valley startups designing systems to aid physicians in making evidence-based decisions.
- Organization’s decision making culture
Organizations have an inherent decision-making culture that will influence whether the human-as-a-hero model will work. At a high level, some organizations have a command-and-control structure of decision-making (e.g., a technology company with a central team of pricing experts who determine pricing for all business customers across all lines of business), while others have a very decentralized decision-making culture (another technology company where each division or business unit can set its own pricing and discounts).
- Type of decisions to be improved
Automated decision-making systems are efficient when the goal is to improve the average across all decisions and the range of decisions follows a normal distribution; that is, there are few extreme or one-of-a-kind decisions. Many high-frequency operational decisions (think call centers or factory floors) follow this pattern. However, when most decisions are one-of-a-kind, ‘extreme’ events, the human-as-a-hero model becomes more appropriate. Many infrequent strategic decisions fall into this category.
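The "improve the average" argument can be illustrated with a toy simulation. The numbers below are invented purely to show the shape of the claim: over thousands of routine decisions with normally distributed payoffs, a consistent rule that is slightly better on average beats noisier case-by-case judgment, even though the human may win on any individual call.

```python
# Toy simulation (made-up parameters): many small decisions, normal payoffs.
import random

random.seed(0)  # deterministic for reproducibility

N = 10_000  # high-frequency operational decisions
rule_payoffs = [random.gauss(1.02, 0.05) for _ in range(N)]   # systematic rule: small edge, low noise
human_payoffs = [random.gauss(1.00, 0.15) for _ in range(N)]  # case-by-case judgment: noisier

avg_rule = sum(rule_payoffs) / N
avg_human = sum(human_payoffs) / N
# Over many decisions the rule's small, consistent edge dominates;
# for rare, extreme, one-of-a-kind decisions this averaging logic breaks down.
```

The simulation also hints at why the argument flips for strategic decisions: with only a handful of draws, the averages are no longer reliable, and a single extreme outcome can dominate everything else.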
Human as a hero vs. human as a hazard is an explicit design choice, and organizations that make the right choice will have fewer false starts and drive more value for themselves and their customers.