What does it take to be a data scientist?

What does it take to be a data scientist? The question is not new, but the answer has changed slightly. The term ‘data science’ was coined in 2001 and serious practice commenced from 2010. Early articles in 2010 mention three characteristics of a data scientist: IT skills, math/stat skills and domain expertise. Possibly, there is nothing more to add to this triad even now.

However, the last four or five years have forced some changes in the underlying make-up of the triad. The increasing gap between IT and business, rapid changes in computing and storage, the explosion of data – especially unstructured data, the arrival of new algorithms – particularly in the deep learning space, the proximity of data scientists to top management, the idea of unlocking value from systems thinking, and increasing value-creation opportunities from understanding the interconnectedness of various industries are a few factors driving the change in the characteristics of the triad.

Consequently, data scientists have to involve themselves in the operational side of the business (e.g. CRM systems), handle more unstructured data such as text, voice and image, possess data engineering skill sets, work more with High Performance Clusters and Big Data, move to structural equation modelling rather than simple linear equations, solve more math problems than ever before, discuss opportunities, problems and solutions with senior management using BI & visualisation tools, apply systems thinking, and have experience across multiple domains.

Therefore, in this post I revisit past insights, add new ones, and make the picture comprehensive and current.

IT Skills:

In the context of a data scientist, IT skills refer to the ability to fully understand the software world that is vital for her performance. That includes knowledge of databases and ways to handle them, and of statistical or mathematical software packages. A lot appears to be changing in this area. In spite of the vast amounts of data already available, data scientists are seeking new data to improve model performance. The characteristics of data are changing: it will now increasingly be text, voice or image. In a separate development, untapped machine data in industries and IoT, running into several exabytes (EB), is now available for analysis. All this calls for astute and robust technology to lift and analyse it. Every day, new libraries are being added to the body of open-source technologies. What are the implications for a data scientist? She has to:

  1. Engineer new datasets: Even in a world of exploding data, a data scientist needs to know what data is required to answer a specific question, and have it acquired if not previously available. To do so, some exposure to data planning is desirable. Data planning is the ability to (1) understand the enterprise’s end goal and execution strategy, (2) convert that understanding into lead and lag measures, metrics, etc., and (3) lastly, have the data (measures and metrics) collected by the IT team. It calls for knowledge of both IT and business strategies. For example, data scientists at matrimony.com, a leading matchmaking portal, were asked to maximise call-centre revenues by contacting members who abandoned a payment-page visit midway. The payment page is a page on the portal that carries information, in the form of links, about different packages, the benefits of each package, package comparisons, several ways to make payments, and more. The data scientists found several critical data points missing in the data mart: one being the time spent on the payment page, another being clicks on the different links. They worked with the IT teams to have such new data collected. Later, a logistic regression analysis proved that members who spent more time on the page, made several clicks on the links, and visited more than once in the last week had a very high propensity to pay (a minimal sketch of such a model appears after this list). It led to better conversion rates and more revenue.
  2. Assemble existing data: A data scientist’s work commences with exploring data in the data warehouse as well as the data lake. It calls for skills in handling data in unstructured and structured forms, and in multiple database formats. The days of solving a problem in silos are over. A data scientist now takes data not just from one mart (e.g. sales) but also from other marts (such as operations and HR), calling for a systems view. Some of the essential skills of a data scientist now include up-to-date knowledge of:
    • Scripting languages such as Python, Scala and C++.
    • Databases such as SQL and NoSQL
    • Big Data / Open Source: the Hadoop ecosystem and tools therein such as Hive, Pig and Spark (accessed from languages such as R and Scala). Hive is used more by data analysts and Pig more by programmers.
    • Parallel Databases and parallel query processing
  3. Handle statistical and math processing software: Once the data is in one’s grasp, the next step is to analyse it. The popular stats and math packages are still the ones that have been around for years, if not decades. However, newer libraries that improve model performance are being added each day. Any one or more of the following statistical / math tools, kept updated, would do:
    • R or Python. One may need pbdR (Programming with Big Data in R) for utilising High Performance Clusters (HPC) and extremely large datasets / lakes, thereby allowing R to meet very high processing requirements. There is an interesting ‘R vs Python’ debate among data scientists. Python’s central purpose is not stats, but it is good when it comes to integrating code into production systems; if you are a developer, you will like Python. R, on the other hand, has had a head start in stats with several inbuilt formulas; if you are a stats person, you will like R. I really don’t have a preference.
    • Weka, or ADaMSoft, or Shogun, or Random Forest, or OpenStat. These are some open-source packages with their own unique capabilities; e.g. Random Forest implementations are good for ensemble techniques.
    • SPSS, or SAS, or Rapidminer. These are some proprietary software with great capabilities to perform end-to-end analytics.
    • LISREL or AMOS (now part of IBM SPSS) for structural equation modelling
    • SageMath for comprehensive math problems such as Calculus, Linear Programming, and Algebra.
    • Optimisation or Linear Programming problems: Free: OpenMDAO or SciLab; Proprietary: MATLAB or Mathematica
    • GNU Octave for solving linear and non-linear problems
  4. BI and visualisation tools such as Spark, Pentaho, SpagoBI, Dundas, Tableau, Cognos, and SAS VA. With the increasing proximity of data scientists to top management, some of the work borders on how easily a piece of data science work can be explained to them.
  5. The work of a data scientist is greatly enhanced by her understanding of how operational IT systems such as campaign management, sales force automation, and call-centre diallers work. For example, a list of customers with a high propensity to buy a product, generated from a logistic regression model, needs to be exported or deployed to an operational CRM system used by the sales/marketing team. The CRM system may then send a mail, SMS or such with an inducement to buy. Knowledge of how the operational IT systems are deployed and currently work provides invaluable insight into where a data scientist needs to focus her workflows.
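To make point 1 concrete, here is a minimal sketch, in Python, of the kind of propensity model described in the matrimony.com example. The file name and column names (time_on_page, link_clicks, visits_last_week, paid) are hypothetical stand-ins, not the actual data-mart fields:

```python
# A minimal, hypothetical sketch of the payment-page propensity model
# described in point 1. The file and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assume a data-mart extract with the newly engineered behavioural fields
df = pd.read_csv("payment_page_visits.csv")  # hypothetical extract
features = ["time_on_page", "link_clicks", "visits_last_week"]
X, y = df[features], df["paid"]  # 'paid' = 1 if the member paid later

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = LogisticRegression()
model.fit(X_train, y_train)

# Propensity scores drive the call-centre priority list
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```

The resulting scores can then be exported to the operational CRM (point 5) to decide whom the call centre contacts first.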

Stats and Math Skills:

Possibly, at the heart of data science lies the ever-improving ability to crunch numbers. New techniques are being uncovered to handle common issues faced by data scientists. For instance, the Support Vector Machine, a classification tool, solves no new problem; but it solves it in a more efficient manner, i.e. with fewer classification errors. Analysing text, voice and image has been vexing. Advancements by way of adding layers to neural networks (deep learning) have allowed solving hitherto unsolved problems. Consider, for example, ‘Dittory’. It is in the challenging business of helping customers discover similar unbranded apparel on the web using image search. Data scientists struggled to even detect a feature (e.g. a mandarin neck) in an image. However, very high processing capabilities and very large datasets have changed the old and ignored Convolutional Neural Network (CNN) into a powerhouse of new capabilities. Almost 30 million apparel images across Indian eCommerce sites were used, and the rest is history.
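To make the CNN point concrete, here is a minimal sketch in Python (Keras) of a small convolutional network for detecting a single apparel feature. The image size, layer sizes and the binary “mandarin neck” task are illustrative assumptions, not Dittory’s actual architecture:

```python
# A minimal CNN sketch (Keras) for a binary apparel-feature detector,
# e.g. "mandarin neck present or not". Architecture is illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # RGB apparel image
    layers.Conv2D(32, 3, activation="relu"),    # learn low-level edges
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),    # learn part-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability of the feature
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```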

The examples of SVM and CNN carry an important message for data scientists: keep a keen eye on what is latest in these select important techniques (a small worked example follows this list):

  1. Data Exploration Techniques:
    • Uni- and bi- variate analysis
    • Correlation and covariance matrix
    • Simple tests of hypothesis such as z and chi square tests
    • Missing value treatment
    • Outlier detection and treatments
    • Variable transformation (also called Feature Engineering)
    • Confidence Interval Estimation
  2. Dependent Techniques:
    • Regression (including Cox Regression)
    • Log Regression
    • Other General Linear Models such as Lasso, Ridge, Elastic Net, Bayesian, and Polynomial
    • Linear and Quadratic Discriminant Analysis
    • Special Linear Models such as Kernel Ridge Regression, Support Vector Machines
    • Structural Equation Modelling
    • Experimental Design (Lift Modelling, Yield Optimisation, etc)
  3. Independent Techniques:
    • Clustering Analysis, e.g. k-means
    • Factor Analysis or PCA extraction or Feature Selection
  4. Other statistical techniques and algorithms:
    • Forecasting / Time series
    • Survival Analysis
    • C5.0 Decision Tree Algorithm
    • Attribution modelling
    • Collaborative Filtering, Association Rules, Linkages
    • RFM Techniques
  5. Neural Networks
  6. Handling unstructured data (such as text, image and voice):
    • Indexing or Tagging
    • Web Analytics
    • Text Analytics: Sentiment Analysis
    • Natural Language Processing
    • Image Analysis
    • Voice Analysis
  7. Math:
    • Game Theory
    • Calculus
    • Linear Programming
    • Algebra
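As a small worked example of one technique from the list above — the Support Vector Machine discussed earlier — here is a Python sketch using scikit-learn’s bundled breast-cancer dataset; the dataset and parameters are chosen purely for illustration:

```python
# A small worked example of the SVM classifier discussed above,
# using scikit-learn's bundled breast-cancer dataset for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Scaling matters for SVMs; an RBF kernel handles non-linear boundaries
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```

The same pipeline pattern (scale, then fit) applies to most of the dependent techniques listed above.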

Domain Knowledge:

To be a good data scientist, domain knowledge, systems thinking and cross-industry exposure are important.

Domain knowledge is acquired with exposure to industry dynamics. Industries such as BFSI, Telecom, Retail, eCommerce, and Education have large numbers of customers and tech-enabled data systems, leading to the generation of large (if not Big) data. While the application of IT skills and math/stats skills is nearly the same in each of these industries, the business questions may be different. For example, Market Basket Analysis may be more important in the retail industry while Survival Analysis may be so in insurance.

Some questions appear to be universal, for example churn reduction. Yet the approach and the variables that determine churn would vary somewhat across industries. Consider, for example, churn modelling in telecom and BFSI. The broad categories of predictor variables in both industries may be Customer Characteristics, Purchase History, Customer Product Usage Data, and Customer Payments or Billing Data.

In telecom, Customer Product Usage Data may cover variables such as the number of calls; outgoing, incoming, roaming and international calls; number of SMSes; total minutes; number of VAS activated or deactivated; data usage; and app usage. The same category in the BFSI credit-card business may take a different avatar: it may refer to variables such as the number of transactions, categories of purchases, days of card usage, value of purchases and number of automatic debit instructions.

Identifying the specific variables for a good analysis calls for reasonable domain expertise.
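As an illustration of how such variables are assembled before modelling, here is a hedged pandas sketch; the file and column names are hypothetical stand-ins for the telecom usage fields listed above:

```python
# Hypothetical sketch: deriving telecom churn features from raw usage logs.
# File and column names are illustrative stand-ins for the variables above.
import pandas as pd

usage = pd.read_csv("telecom_usage.csv")   # hypothetical per-event records
features = usage.groupby("customer_id").agg(
    total_calls=("call_id", "count"),
    total_minutes=("duration_min", "sum"),
    intl_calls=("is_international", "sum"),
    sms_count=("is_sms", "sum"),
    data_mb=("data_mb", "sum"),
)
# The equivalent credit-card frame would aggregate transactions instead:
# number of transactions, purchase categories, days of card usage, etc.
print(features.head())
```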

Systems Thinking: Clearly, data science practice calls for an interdisciplinary approach. One cannot reduce churn (marketing analytics) while continuing the same (poor) product performance (product analytics). Or reduce warranty costs (marketing analytics) without appropriate changes in reverse logistics (supply chain analytics). Or improve workforce productivity (HR analytics) without changes in production scheduling (production analytics).

A data scientist has to think holistically. No wonder the function has strategic importance and, in several organisations, reports directly to the CEO.

Cross-Industry Exposure: I think having exposure to the application of data science in two or more industries adds to the effectiveness of the practice; it is due to the ‘outside-in innovation’ effect. In fact, there is early evidence that analytics may soon no longer be confined to a single industry; it will call for analysis of data from across industries. We are already witnessing firms aggregating data from across industries such as telecom, social media and ecommerce to improve search-engine data analytics and the consequent marketing campaigns.

Lack of cross-industry exposure can be compensated for by a study of successful applications of IT, stats or math in different domains or industries. One can also augment this by talking to peers in other industries and attending data science application conferences. The successful application of a technique in one field has often spawned similar applications in other fields as well.

The question is whether such cross-industry exposure should occur in the early, mid or late career of a data scientist. While there are no studies to back my hunch, I would avoid such exposure at the early stages of a data science career; focus on one domain in the early stages has advantages.



Have the broad requirements of what it takes to be a data scientist changed? No. The triad still comprises IT skills, stats and math skills, and domain knowledge. However, several changes in technology, science and business dynamics are forcing changes in the underlying characteristics of the triad. Data scientists are expected to increasingly spend time in data planning, use different and better technologies in data lifting, refurbish their stats and math armoury with techniques rarely used before, perform holistic analytics that involves all functions within an organisation, and use data and practices not just from their own industry but from across industries. The change calls for strategic thinking, quick learning and an outcome focus.

Postscript: Reviewers of the above article pointed out that data scientists should also have some very important soft skills and abilities, such as communication, a questioning mindset, a problem-solving attitude and influencing without authority. I agree and thank the reviewers.

15 Secrets of sales guys who show you the money

I have closely observed sales guys who perform and those who don’t. Both appear to voice near-identical views in sales reviews:

  • The product is not up to the mark and customers are unhappy
  • The competition is several times better and their price too is lower
  • The organisation is not supporting on pricing / discounts
  • There is no marketing support
  • The leads that we have are bad, old, etc.
  • The targets are unrealistic
  • There is politics in the office that undermines sales

The above views don’t seem to differ across a variety of firms: successful or unsuccessful, big or small, national or international, or cutting across industries such as Engineering, Education, BFSI or IT.

So an interesting question is: how do a few perform, while others seem to just whine? After interacting with a host of sales guys, I think the magic is in their mental makeup. So, here are the 15 secrets of sales guys who show you the money. They:

  1. Believe that even if the product, price, marketing support, etc. is lacking, they could still sell. I recall one smart young sales guy telling me “So what if there are problems? I need to see how the product will benefit a customer and find that customer.”
  2. See tough circumstances differently. “What is the fun if it was easy?”
  3. Volunteer to take on tough markets and / or tough clients. “We will plant our flag there too.”
  4. Focus on long-term relationships with customers (for that matter, with people). I remember one sales guy arguing with the delivery team on behalf of the customer at the customer’s premises! He lost that renewal order, but won every one thereafter.
  5. Listen to negative feedback keenly with a view to improve.
  6. Willingly take along rookies and even peers for sale calls; the former for free training and the latter for support.
  7. Read books such as “How to get better at sales?” or listen eagerly to success stories of anyone.
  8. Never stop learning at any age.
  9. Never give up on a customer. They usually follow up several times. There is research suggesting that customers usually do not start thinking favourably about buying until the fourth meeting. The good ones do on average five meetings; the bad ones do about two.
  10. Start thinking “What did I do wrong in this case” when even after several attempts, there is no success. They still will not blame the customer, the product, the price, or such.
  11. Believe in the law of large numbers. “Boss, I get only five successes from a hundred that I meet. If I need ten successes, I need to meet two hundred.”
  12. Work for the company. “In six months we will be better than the competition.”
  13. Persist in the face of setbacks.
  14. Mingle with other good ones in the office and outside.
  15. Are high on integrity. They do not cheat on reported sales numbers, expenses, etc.

Needless to say, the bad ones rarely see value in effort, avoid challenges, get defensive, give up easily, ignore feedback, feel threatened by other people’s success, usually hang around with other bad ones and change jobs frequently.

The good ones of course, give you comfort, dependability and performance. They show you the money!

Eysenck’s Personality Inventory (EPI) (Extroversion/Introversion)

The Eysenck Personality Inventory (EPI) measures two pervasive, independent dimensions of personality, Extraversion-Introversion and Neuroticism-Stability, which account for most of the variance in the personality domain. Each form contains 57 “Yes-No” items with no repetition of items. The inclusion of a falsification scale provides for the detection of response distortion.

Which hat are you wearing – the one of operations or development?

I have always wondered why certain managers accept change at some times and resist it at others. Well, it all depends on which hat one is wearing: the one of operations or development.

Imagine, for example, a safety-pin factory that produces 10,000 safety pins per day and requires a daily input of (1) 100 meters of pin-wire, (2) 10 people, (3) Rs. 1,000 as wages and (4) a plant. A similar example is a call centre: number of leads, number of tele-callers, amount of salary, and call-centre infrastructure.

If one is wearing an operations manager’s hat, one’s role is quite focussed on what I call maintenance of the status quo.

  • First, one will maintain the minimum inventory level by ensuring raw material is ordered on time, tracking transport, keeping adequate inventory, etc. One will raise an alarm when the inventory goes below a certain mark.
  • Second, one will focus on ensuring the plant is maintained well, up and running, thereby minimising downtime. One will raise alarms when downtime crosses a certain threshold.
  • Third, one will focus on ensuring workers are trained adequately, present at work, motivated, etc. One will raise alarms when there is continuous absenteeism or engagement levels are down.
  • Fourth, one will ensure wages and incentives are paid on time, and will raise alarms when that is not the case.
  • Fifth, one will strive to set minimum benchmarks in production, e.g. a minimum number of pins to be manufactured, and routinely replace employees not meeting the minimum; or keep wastage, slack time, etc. to a minimum. Some minor operational efficiencies may be strived for.

An operations-focussed manager’s thought will be to keep the entire cycle running smoothly without changing the composition of the input and therefore the output. The emphasis is on processes operating at the optimum level. If one asks an operations manager to increase production, he will ask for an increase in input parameters: more raw material / leads, more plants / call centres, more workers / tele-callers, more wages / salaries, etc. Seldom will he think of doing so without an increase in input. Without an input increase, he will view calls for production or sales increases as temporary, deliver them through temporary stimuli such as double wages, and revert to previous levels.

However, a business development manager will think in terms of an increase in output without an increase in inputs, by challenging the status quo or changing the processes significantly. Such changes are probably risky and take time to implement, but they do result in a permanent shift in productivity. For example, the questions a business development manager will ask are:

  • Can 10,000 units of safety pins be produced from say 90 meters of wire (instead of 100)? How?
  • Can inventory be reduced to Just-in-time?
  • Can the plant be reorganised to produce more? Different routing? Different machines?
  • Can downtime be reduced by better predictive maintenance?
  • Can production be maintained while reducing the number of workers? Better-quality workers, but with a reduction in total wages too?

The developmental hat is an unusual fashion for operations-intensive managers, especially when they face tight timelines and goals. The man in operations is usually thinking in the short term, not wanting to upset the “apple-cart”. So when you see a manager resisting change, he is probably wearing an operational hat.

My prescription for a beginner-manager is to wear one hat at a time. When the tides in inputs are great, throttle the operations engines up. When it is the reverse, open the developmental initiatives in full force. Personally, I think doing both is like changing the wheels of a running bus… difficult but not impossible.

Integrating fairness into Business Models

Undoubtedly, Ola is cheaper! One can’t but feel a sanguine happiness with app-based cabs after experiencing haggling with rapacious auto-rickshaws and yellow-black taxis. But Ola’s peak-hour fare multipliers leave us wondering if our karma is catching up. Are we condemned to suffer? Or will a sense of fairness from operators, even if market-enforced, eventually come to the rescue?

I used to pay Rs. 500 for rides back from the airport to my residence using FastTrack – a taxi-fleet operator – often cursing the fleecing. Ola offered rides back home from the airport at about Rs. 200! I switched. So did several others. But recently, when I landed at Chennai airport at midnight and ordered an Ola cab, the app asked me to confirm a 1.5x fare multiplier. The reason was ostensibly to motivate more taxis to operate at that hour so that, at the least, I am not stranded. I promptly closed the app and retried. Same message. I closed it and took FastTrack at Rs. 500/- instead!

Now, you may think I am quite irrational, having paid Rs. 200/- more to FastTrack when 1.5x would still have been only about Rs. 300. But at that minute I felt unfairly treated… taken advantage of… even blackmailed. I am sure several others have felt somewhat similar. Worse, that feeling of having been blackmailed lingers even today. Such unfair price practices occur in several other areas too: airline tickets, “tatkal fares” of Indian Railways, hotel reservations, etc.

Fairness is a component that needs to be built into every business model as a fundamental strategy. Otherwise, consumers will treat the business with contempt and switch at the first available opportunity. Consumers may, at times, switch even at their own expense.

How does one integrate fairness into such business models? One may consider retaining usual pricing for regular consumers. In fact, I think this will be one of the best techniques to increase the loyalty base.

Let me know if there are other techniques.

A fish in a digital aquarium – the issue of Privacy, Capitalism and the Supreme Court

I was speaking on a panel moderated by the preeminent Professor Jagdish N. Sheth at ASB, Coimbatore. Of the several nuggets of wisdom and crystal-ball gazing from Jag Sheth, here is one:

I had just finished talking about how the digital trails of customers can be leveraged to create value for both the consumer and the firm. A student had raised concerns about how it may lead to issues of privacy.

The sense of, and meanings about, privacy were stronger with the older generation and matter a lot less to the young. In fact, my second daughter, still in school, is a happy fish in a digital aquarium where people can peer in. She has few qualms about the lack of privacy. She is OK with, say, FB and WhatsApp reading her texts, data, etc., and went on to suggest that it was even better if they used it to her advantage. However, she was not OK with harmful abuse of private information. Clearly, the younger generation has a different view of privacy, and it is a lot more liberal than mine.

Jag’s response was that the day is not far when personal data, including digital footprints, will be private property up for sale. Capitalism will exploit every asset, including personal information. The question is how it can be contained. There is enough room for suspicion that lax legislation (influenced by powerful lobbying) and weak enforcement will fail to contain it. Conscious capitalism may partially solve the problem; but it is practised by only a few, and we should be worried about the ones that will exploit unscrupulously… however small that set may be.

Of all the forces, Jag Sheth reposed trust in the Supreme Court. He recalled how cigarette firms are now moving away from making cigarettes due to the Supreme Court considering penalties of up to $280 billion. The fundamental argument is that since the firms knew that smoking caused cancer and wilfully marketed cigarettes anyway, they should be liable for the consequences. It is quite unlikely that such high penalties will be imposed, since they may be considered “excessive”… implying that the penalties, while they function to compensate for the damage caused to the aggrieved, should also reform the defendant.

The analogy is that firms that use data to the detriment of a consumer will face stiff penalties, which may then act as a deterrent. True. But my argument is that it takes a long while. Just as it took a long while to ban cocaine from Coke or to consider terrorism a heinous crime against the State.

But that is a price we pay for democracy.

What do you lose with Free Basics?

Much debate is on between folks who care for Net Neutrality and those who favour Zero Rating (e.g. Free Basics). I would bat against Zero Rating for one reason: “Free” makes people choose inferior options irrationally.

Consider that you have to choose between (1) Rs. 1,000 worth of Flipkart gift vouchers for Rs. 200, or (2) Rs. 300 worth of Flipkart gift vouchers for free. You are possibly thinking: who in the world will choose the first one? You are not alone. And you are possibly part of the majority that will like Free Basics.

But the wisdom is in this: the first option is worth Rs. 800 in surplus (Rs. 1,000 of value minus Rs. 200 paid), while the latter is worth only Rs. 300. That Rs. 500 difference is what “Free” makes you lose.

As of today, Free Basics has very few sites/apps and is comparable to the Rs. 300 worth of Flipkart gift vouchers above. The vast, unfettered internet, with over a billion apps/sites, is comparable to the Rs. 1,000 worth of Flipkart gift vouchers.

For the uninitiated:

Free Basics from Facebook is an initiative in which Facebook (and Reliance) makes a select set of apps and sites accessible free of data charges. Airtel Zero is another such plan. All such plans are called Zero Rating plans. The belief is that free data access will let more users join the digital wave and benefit the nation. Net Neutrality enthusiasts are crying foul: the “walled garden” lets Facebook access critical user and usage data, and it will eventually force listed sites to cough up money for access to users.

I loved this debate between Mahesh Murthy and Facebook.

What is your Innovation Agenda? Part II

There were comments from readers about the last post on the Innovation Agenda. The crux of the mails is that the post fails to mention:

  1. Disruptive or radical innovations (e.g. WhatsApp, railroads, etc.). I do not agree that this was required. The post’s core idea was about firms committing to bad / pointless innovations because they are not closely linked to the values needed by paying customers. A firm may be engaged in sustainable, good innovation, adding value to both customers and the firm, and yet be potentially wiped away by disruptive innovations. The best example is the Indian banking system. Banks such as HDFC Bank are on the path of sustainable innovation and are among the most profitable. Yet they currently face the potential threat of a disruptive “WhatsApp moment” (see the YouTube video “Disruption in Financial Services: Nandan Nilekani at TiE LeapFrog”).
  2. Improvements have to be discernible or observable to the customer. E.g. when I typed the word “innovate” into Google, it returned 2,47,00,000 results in 0.82 seconds. Now the question is: how will the value to me change if the results are twice the number in half the time? And, importantly, would I notice it? I agree I missed this point.

Thanks everyone…

What is your innovation agenda?

The importance of product innovation is well documented. It leads to the delivery of better value to customers, helps gain market share and increases the top line or the bottom line.

A casual look at several new-product failures suggests that firms struggle to understand what the Primary Innovation Area (PIA) should be. And more struggle is evident in new-feature failures – demonstrating that perhaps even the Allied Innovation Areas (AIA) are unclear. Such lack of clarity drives firms towards inevitable bad innovation. Why would that be? Before we delve into it, let us understand how the PIA and AIA in a product drive core and augmented value to customers respectively.

For example, in India, the core value of a two-wheeler is cost-effective yet personal transportation (when compared with a car). The relevant PIA may be fuel efficiency, especially combustion (e.g. MPFI). A two-wheeler manufacturer’s sustained innovations in combustion technologies will help attract more and more consumers, gain market share and grow the business. Similarly, an augmented value may be a secure feeling about available fuel; the AIA may be in gadgets that show “kilometres left to empty” instead of a simple fuel indicator.

A few more examples of PIA and Core value of product innovation:

  • PIA: Google’s Page Rank Algorithm -> Core value: delivering relevant search results.
  • PIA: Reliance Jio’s optical fibre network -> Core value: faster connectivity.
  • PIA: Samsung’s handset (e.g. a 5G phone) -> Core value:  faster connectivity and better display.

A few more examples of AIA and Augmented value of product innovation:

  • AIA: GM car’s interiors – OnStar FMV that replaces one’s rear-view mirror
  • AIA: Google’s search results page – Easier navigation and layout.

Performance and continuous improvement in core value (driven by the PIA) are critical for a product even to be considered; otherwise the product will be rejected. Once a product is under consideration, performance and continuous improvement in augmented value (driven by the AIA) will tip the decision in its favour among the alternatives. Unless a firm clearly understands the linkage between PIA and core value, and similarly between AIA and augmented value, it will either fail to innovate or do so badly. Bad innovations will lead to losing market share to good innovators.

So why would firms have a poor understanding of the areas to innovate in (whether Primary or Allied)?

  1. The innovation area is not tightly linked to customer needs. Innovating teams often believe they know what consumers want when in fact they are innovating what they could and not what they should (I have several hilarious examples – iPotty for toddlers, for one!). One of the most striking in the list of failures is the QR code in advertising. Even the tech-savvy who produced it and understood how to use it (hint: scan it using an app on your smartphone) would not use it, since very few of them like intrusive, in-your-face advertising.
  2. The idea is not aligned with the core brand. Why would LG, the electronics giant, dabble in personal care, Cosmopolitan magazine dabble in yoghurt (yes… yoghurt!), or Colgate dabble in kitchen entrees?
  3. Innovation in one area lowers value in another. Consider a new property portal. The value of the site goes up when several people search for properties and when there are several properties listed. Primary revenue comes from builders who list their properties. So should the firm innovate in delivering new properties that match search queries, or in delivering leads to paying property developers? Finding that balance may not be easy.
  4. Part-monetisation of a value chain. Consider LinkedIn. Three steps – profile creation, search and contact – create value for members, but only the last one is fully monetised. One runs the risk of the innovation agenda being lopsided – concentrating more on the areas that generate revenue. In this example, a firm may innovate more in how to contact better, e.g. through better chats, ensuring contact numbers / email IDs are valid, etc. It then runs the risk of innovating less in areas such as delivering the right search results or getting relevant profile information. There is one other risk when revenues and innovation areas are not directly connected: the causal links are less evident, and hence innovation may not be fruitful.
  5. Not within the core competencies. We realise today what WhatsApp did to the SMS revenues of telecom firms, and is now doing to audio calling. Development of apps may not be the telecom firms’ core competency.
  6. Complex competencies have to come together. For firms such as hospitals, core innovation may mean something like “reduce death rates in operating theatres and procedures”. While this may be relevant and meaningful to patients, it may call for teams from medical, technological and other disciplines to come together. Typically, the quality of innovation suffers when several disciplines are called for.

The solutions to the above issues are not simple. Having a Product Innovation Charter (PIC) helps. It is a critical strategic document – the heart of any organised effort to commercialise a new product. It contains the reasons why an innovation project has been started, and the goals, objectives, guidelines and boundaries of the project. It is the “who, what, where, when, and why” of the product development project.

Having a Product Innovation Charter helps in identifying (1) target areas for innovation, (2) strategic objectives that will be met, with measures of success, (3) programmes of activities to be selected for achieving the goals, (4) areas of competence that will be leveraged, and (5) special conditions, restrictions or mandates. Research suggests that firms that have PICs have much higher chances of avoiding the above issues that lead to bad innovation.

Probably the simplest questions are these: (1) do we have proof that customers need it and will buy it from us, (2) do we have the capabilities to build and sustain it, and (3) will it make a real difference to the top line or the bottom line?

Are you seen as a champion leader or imposer by your team?

Every important initiative requires a champion-leader to be successful. The method of a champion-leader is different – persuasive as opposed to imposing. S/he begets a team that is high on ownership and self-directed effort. S/he delivers results. But a champion-leader’s team, even if competent, may perceive him/her as an imposer, more so if the leader is the CEO. Or CEOs may believe that they are persuasive when in fact they are imposing. It is difficult for leaders to self-correct because teams may fake ownership and motivation.

Change agent: Champion or Imposer?

How does one assess whether one is seen as a champion-leader or an imposer? Here is a self-assessment tool-kit. But a story before that!

One of my second-generation entrepreneur friends, who had recently taken over the reins from his father, was talking to me about his (unsuccessful) attempt to scale his foundry business. It required automation of a few processes. Essentially, change.

He was an MBA from one of the premier US b-schools, with a specialisation in running a family business. So he armed himself with a near-textbook approach. He (1) created urgency within the organisation for rapid change, (2) created and communicated a clear goal and execution plan, and (3) formed a core group of members, with one clear “SPOC” to drive every activity, etc. (for more, see John Kotter’s classic HBR article “Leading Change: Why Transformation Efforts Fail”).

The core group appeared high on ownership – talking positively about the potential of the change and how great the process automation idea was. It even cited early wins, e.g. Value Stream Mapping had identified and removed certain non-value-adding steps – lulling our friend into believing that the course was correct! But a few months into the change initiative, little had moved significantly. Detailed task lists were drawn up and reviewed in regular weekly meetings. He lamented how, in spite of his championing the change, his team did not deliver. “It is time to take action,” he said, and promptly fired the foundry operations manager.

The new foundry manager, now part of the core group, had been observing the dynamics for several weeks. My friend explained, “I had to give him time to settle.” The foundry manager soon blamed the poor managerial skills of his predecessor, made some changes in the execution plan and suggested that some extra rewards would make things work faster, which was agreed to. He later assured everyone that the employees were “falling in line” – citing even more successes. Other pressures took over and the process automation initiative was given a quiet burial. Probably, this was not the first one.

When we discussed this case much later, we were intrigued as to why an entirely internal-to-the-firm initiative did not move enough. I had to doubt either my friend’s leadership style or the team’s sincerity in adopting the change initiative. I had little doubt about the former. Probably, the team was faking ownership? My friend was steadfast in believing that this was not the case. So I drew up a list of evidence that may point to the contrary.

Imagine yourself as a CEO or head of a division. Recall a few important change initiatives and read the statements below carefully.

  1. Your team downplays any crisis news / poor performance metric. Worse, it “shoots the messenger”!
  2. You have to engage outsiders, e.g. consultants, to bring in unwelcome information.
  3. Your team rarely discusses, in meetings, crises that require a change initiative.
  4. Your team cites downside of a change initiative more often than upside.
  5. When you propose a change initiative, somehow you suspect that you have several “yes-men” faking ownership.
  6. After the ball is set rolling, your team does not conduct detailed communication about the change initiative to employees at large.
  7. Change projects are driven by just one or two top leaders of your team, even though they call for several others to form a core group or guiding coalition.
  8. Even the two or three who are part of the change team do not seem to agree on how it is to be done.
  9. Very few junior executives are members of the change team.
  10. The predominant motivation technique is cash incentives.
  11. There are members of the team who are just paying lip service and not demonstrating real change in behaviour, or who do not encourage their own teams to change behaviour.
  12. Review meetings offer no course correction or analysis of performance.
  13. Review meetings are mostly about a big list of tasks – whether done or not.
  14. The team shares little input from the field, whether negative or positive.
  15. At least one or two members have been asked to leave for poor change management.

Now please count the number of statements to which you could say “Yeah… that happens”. If there are more than five, then the next-in-line team lacks true ownership.

Changing jobs and sliding fortunes..

What type of employee are you?

  • I will shift for higher position or pay as and when an opportunity arises. In the first thirty years of my career, I would have shifted about 9-10 odd jobs with an average tenure of 3 to 4 years.
  • I will stay in a job to learn and deliver value, even if there are better opportunities outside. The world will eventually value and reward such a person more. In a thirty-year career, I would have shifted about 4-5 jobs with an average tenure of 7 to 8 years.

Now, which one of the types is more likely to be a CEO? If you bet on the first-type, you may not be alone but you may also not win the bet.

We have been recruiting for senior leadership positions for a while now, requiring the perusal of several hundred resumes. It led to a serendipitous discovery: if one shifts jobs too often, the chances of becoming a CEO diminish. Now, you will ask me how I deduced this hypothesis. Good you asked!

We reordered a very large pile of resumes by the age of the candidates, regardless of the position or job applied for. For each age class of candidates, e.g. 40-year-olds, we further split the set by designations / positions such as Vice President, Associate Vice President, General Manager, Sr Manager, etc. As the last leg of the exercise, we computed the average number of job changes for each designation / position. We now had, for each age and position, the average number of jobs. Result: the higher the position, the fewer the job changes. That is, if one was 40 years old and a Vice President, the number of job changes would be far lower than for another 40-year-old who was a Manager.
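In pandas terms, the whole exercise is one group-and-aggregate. Here is a hedged sketch; the file name and columns (age, designation, num_jobs) are hypothetical:

```python
# A hedged sketch of the resume exercise described above.
# 'resumes.csv' and its columns are hypothetical.
import pandas as pd

resumes = pd.read_csv("resumes.csv")  # columns: age, designation, num_jobs

# Average number of job changes for each (age, designation) cell
summary = (resumes
           .groupby(["age", "designation"])["num_jobs"]
           .mean()
           .unstack("designation"))
print(summary)
# Hypothesis check: within an age band, higher designations should show
# fewer average job changes.
```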

A similar result is also hidden in the HBR article “The Best-Performing CEOs in the World” (November 2014). Of the one hundred CEOs, a whopping 79% were promoted from within the company; only the rest were hired from outside to lead as CEO.

Why would this phenomenon be true? There may be several reasons:

  • Jobs require depth in thinking and attention to detail – traits that accrue and are honed only with long experience. And such traits are reasonably transferable to any job. Therefore, masters of one trade go up the ladder; jacks of many trades keep whining.
  • Persistence is another trait of a leader – demonstrated by staying resolute in one firm. The question in interviews is usually whether one left an organisation for better prospects outside or because there was “heat” inside the organisation.
  • Loyalty is rewarded. Job hopping creates some inevitable uncertainty in the mind of the board. The first question in my mind, when I see a person with several job hops, is whether the person is a corporate mercenary.

So, if you want to become the CEO, just pause before you send your resume out. Probably, that pause should be about six or more years longer than the time it takes to read this blog…

TESCO Debacle: what should Customer Analytics now be?

I was reading the Guardian’s post about the launch of a criminal investigation against Tesco. It is accused of profit overstatement to the tune of £263m. The news comes amidst other news of Tesco’s struggle to keep its head above water due to stiff competition from discount chains, such as Aldi, and etailers, such as Amazon. Sad.

I felt sad because Tesco is a paragon of the use of customer analytics, and yet such analytics did nothing to prevent this debacle. Where is the competitive advantage from customer analytics that should have staved off competitive pressures? An HBR blog post further asserts that the set of loyalty programmes, promotions and marketing campaigns – once a driver and determinant of Tesco’s success – is now an analytic albatross.

Probably it is time to revisit the role of customer analytics. Is it just for creating clever loyalty programmes, suave marketing campaigns and sly promotions? If customer data is used solely for the purpose of mining more value from a customer or creating exit costs, consumers will realise it sooner rather than later. No wonder several consumers have multiple loyalty cards, jump ship using price-discovery bots, hide data, block mails, not lift calls, etc.

Customer Analytics should pervade all processes of marketing: need identification, new product development, awareness generation, effective supply-chain/distribution, pricing, selling, promotions, purchasing, consumption, divestment, and feedback processing.

A review of the applications of customer analytics in retail, banking, etc. may reveal that the predominant application areas are selling and promotions. Telecom firms use it not just for promotions and selling but also to understand user needs and create products. Pharma firms seem to use it predominantly for new product development and improvement, and less for selling and promotions.

It is time we make Customer Analytics pervade all marketing processes.

Wisdom and Analytics

I loved Susan Etlinger’s TED Talk “What do we do with all this Big Data?” The central idea of her talk was that data becomes “meaning” when there is wisdom. What is wisdom, though?

Wisdom is the ability to think and act using knowledge, experience, understanding, common sense and insight. Wisdom makes one infer with the greatest degree of adequacy. Wisdom calls for understanding one’s own limitations, biases and lenses in perceiving truth. Wisdom requires an understanding of people and their circumstances. Wisdom, for me, is the right blend of reason and passion.

Google Flu Trends (GFT) is regarded as one of the best examples of Big Data analysis. But recently GFT has been in the headlines for the wrong reasons as well. With GFT missing the prediction of an unseasonal flu in 2009, I suspect that all the analysis was just about predicting winter. A study claims that the magnitude of GFT’s predictions is not so accurate either – at times overpredicting the prevalence of flu by 50%. Do we have adequate evidence to support the inferences? If I were a flu-drug manufacturing company and relied on GFT, I would probably be sitting on a pile of unsold inventory of drugs, frowning at a relatively happier nation.

One of the most potent contributors to poor meaning-derivation from analytics is our own cognitive limitations and biases. Take, for example, confirmation bias, so lucidly brought forth by P. C. Wason’s experiment in 1960. Let’s say I requested you to guess the rule behind a set of numbers: 2, 4, 6. You may construct other sets of numbers to test your assumption about the rule I have used. For every three numbers you bring up, I will say “yes” if it satisfies the rule and “no” if it does not. In Wason’s experiment, several subjects offered number sets such as “8, 10, 12”, “20, 22, 24”, etc., imagining the rule to be a sequence of even numbers. Wason kept offering “yes”, and the subjects, with each “yes”, felt surer that the rule was indeed a set of even numbers. However, Wason’s rule was merely a set of increasing numbers. The mind seeks evidence for what it wants to see, and not necessarily the truth.
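Wason’s actual rule is trivial to write down in code, which makes the bias easy to demonstrate: confirmatory triples all pass and therefore teach you nothing, while a single disconfirming test exposes the wrong hypothesis. A tiny sketch:

```python
# Wason's actual rule from the 2-4-6 task: any increasing triple.
def wason_rule(a, b, c):
    return a < b < c

# Confirmatory tests (all even, step of 2) all return "yes"...
print([wason_rule(*t) for t in [(8, 10, 12), (20, 22, 24)]])  # [True, True]
# ...only disconfirming tests expose the "even numbers" hypothesis:
print(wason_rule(1, 3, 7))   # True, though neither even nor step-2
print(wason_rule(6, 4, 2))   # False, a decreasing triple fails
```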

Understanding people and their circumstances calls for passion. Susan Etlinger brings this point out poignantly in her talk. Her two-year-old son, who is autistic, was declared to have a nine-month-old baby’s development level, based on standard data on communicative gestures, eye contact, etc. However, there was sufficient other evidence (e.g. her son searching for the meanings of words on Google) to show that his development may not be as low as was diagnosed. She warns, “..this is what happens when assessments and analytics overvalue one metric — in this case, verbal communication — and undervalue others, such as creative problem-solving.” Master Etlinger’s case at once demonstrates the importance of adequacy, the pursuit of disconfirmatory evidence, and the need to understand people’s circumstances and context when drawing inferences from analytics.

Clearly, wisdom is a prerequisite for good analytics.

Oracle: Two is better than one CEO?

The BBC ran a story that when Larry Ellison stepped down, the Oracle board announced two CEOs as successors. Surprisingly, the Co-CEO model is not really a rare event. You may have up to five men contacting you as CEO from a firm called Mobi Wireless Management! Well, I am not sure how big Mobi is, but it has grown in three-digit percentages in recent years. Samsung, admittedly, is big and has three CEOs. There are probably about 1,000 listed firms across the globe that have more than one CEO. Co-CEO models are quite common in M&A cases, family-owned firms, co-founded management teams and firms experiencing leadership transition. And the number of firms adopting a Co-CEO leadership model is increasing.

While I may have some selfish interest in propagating such a trend, it is worthwhile to delve a bit deeper. I was brought up listening to an old Hindi idiom: there can be no two swords in one sheath. So how does it work? The question is really how it impacts shareholder value creation; the sub-question is when that value is maximised.

Mario Vitale, Co-CEO of Willis North America, says that it will succeed if the two or more CEOs complement each other in skills. While there are several other reasons, he states, with some excitement, another one: for the first time in his career he need not carry a Blackberry on his vacation! But the Co-CEOs of RIM (Research In Motion, which owns Blackberry) also did not need to carry theirs: they were fired recently for bleeding market share to Apple and Samsung.

The success of a Co-CEO structure requires some ego-less understanding among the CEOs. Possible? I guess yes. Not many with bloated egos will even make it to such top slots. Of course, even without egos in clash, decision making can be a bit paralysed. That may be avoided if the board clearly sets the agendas, specifies the scope of operations and spells out the responsibilities – exactly what the Oracle board did.

Co-CEO models are also successful when the CEOs have different and far-flung geographical responsibilities or product-market domains. Whatever the responsibility or scope, a very good understanding is still crucial. But understanding can also mean compromise – a killer of innovation.

Professor Stephen Ferris of the University of Missouri states a few other benefits: most firms with a Co-CEO structure do better than single-CEO firms; compensation under the Co-CEO model is actually lower than under the single-CEO model; and the market reacts positively to Co-CEO announcements.

So there is reasonable cause for good cheer for Oracle. I hope it does great..

Preference Reversals: Oh Yeah? Did we change our mind?

So how can one’s decision change based on how information is presented?

Recently, we encountered a strange situation in recruitment. We looked at two resumes, first separately and later together. When we evaluated them separately, we rejected Candidate A; but a week later, when we evaluated them jointly, we accepted him over Candidate B. Here is all the data we had: Candidate A had developed several marketing campaigns for about 5 years and had no college education. Candidate B had developed marketing campaigns for under 2 years and had a college degree. Everything else, including the firm they last worked for, was similar. The recruitment team cried foul.

Why would we reverse our preferences so? Let me provide another example:

Used car 1 has run 1,000 kilometres and there is a dent on the front hood.

Used car 2 has run 9,000 kilometres.

Please imagine that you evaluated the two options separately, as if the other option did not exist. Which one is your choice? Probably you will choose the second option. Now imagine evaluating the options together. Probably you now choose the first option. Research confirms that you are not alone.

Hsee, Loewenstein, Blount and Bazerman, in their research, state the reason to be “non-evaluability”: “some attributes are easy to evaluate independently, whereas other attributes are more difficult to evaluate independently.” In the above example, it is difficult to evaluate the impact of “a dent on the front hood”. Therefore, when evaluated independently, car 1 is less preferred than car 2. However, when they are jointly evaluated, “no dent” and “a dent on the front hood” become comparable, and the relative advantage of 1,000 kilometres springs out against 9,000 kilometres.

Here are two more examples:

1. (a) Rs. 10,000 flood relief to you and Rs. 11,000 to your neighbour, or (b) Rs. 8,000 flood relief to both you and your neighbour.

Possibly, option (b) would be chosen more when evaluated singly; there is a sense of equity in it which is absent in option (a). However, when they are jointly evaluated, I think we will choose option (a), merely because it has the higher pay-out.

2. (a) an mp3 player that holds about 4,000 songs with a THD of 0.0005%, or (b) an mp3 player that holds about 10,000 songs with a THD of 0.02%.

The lower the THD (Total Harmonic Distortion), the better the fidelity. Again, THD is difficult to evaluate independently. What does 0.02% THD mean to us when this information is presented in isolation? Therefore, the number of songs dominates decision making when the options are presented singly. However, when the options are presented together, 0.0005% THD is far superior to 0.02% THD, and hence the preference shifts to the first one.

Clearly, the implications for search engines are high. There is a definite requirement to present key information in a format that is understandable and comparable.

For us, the age-old adage of having all the choices on the table to compare remains important – more so if the alternatives have attributes that are difficult to understand or evaluate.

We went ahead with Candidate A; so much for consistency in decision making!!

Wait for more…

Analyse this! An analysis of an analysis: Part – 2

In the last post, I indicated that, regardless of equality of information, two persons may decide very differently based on the decision strategies each adopts. While there are innumerable strategies, a few stand out for mention.

You may want to revisit the table of analysis, for it will help in understanding the outcomes of the strategies discussed below.

Weighted Additive Strategy: Imagine you are an extremely capable decision maker (i.e. you list all alternatives and attributes, assess each alternative against each attribute, and place weights across the set of attributes). Imagine your “importance” for floor area is 15%, location 25%, price 35%, quality 10% and design 15% (the weights of all the attributes total 15% + 25% + 35% + 10% + 15% = 100%). Your choice will essentially be the alternative with the maximum sum of performance x weight. E.g., for House 1, the value is (0.15*1 + 0.25*1 + 0.35*7 + 0.10*5 + 0.15*1 = 3.50). Likewise, for houses 2, 3, 4 and 5 they are 4.35, 3.00, 4.80, and 4.90. House 5 will be chosen since it scores the highest. I am reasonably certain, considering what you have just read, that you have suspicions about your capabilities in such decision making. Your other suspicion — that most people would not engage in such an exercise to decide — is valid too. But it is undeniable that such a strategy would be the best at maximising value.

Lexicographic Strategy: The alternative with the best performance on the most important attribute is selected. Suppose price was the most important: House 1 will be selected. Even though House 1 performs rather poorly on all the other attributes, that can be overcome by a great performance on the key attribute.

Satisficing Strategy: Alternatives are considered sequentially, in the order they appear in the mind. The consumer also keeps a cut-off for performance on any attribute. Suppose the minimum cut-off is “2”. Since houses 1 & 2 each have at least one attribute with a performance value of “1”, they will be eliminated; House 3 will be selected. Alternatives that score extremely well on most attributes but below the cut-off on even one attribute will simply be eliminated (consider the plight of House 5).

Elimination by Aspects: This combines aspects of the Satisficing and Lexicographic strategies. Options that do not meet the minimum cut-off on the most important attribute are eliminated. In our case, assume price is the most important attribute and the minimum cut-off is “2”: houses 2 & 5 are quickly eliminated, and houses 1, 3 and 4 remain in the consideration set. Now the next most important attribute is selected. If location were the next most important attribute, then House 1 is eliminated and houses 3 & 4 remain. If no other important attribute remains, then, as in the Lexicographic Strategy, the alternative that performs best on the most important attribute is selected: House 4.
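The four strategies are straightforward to express in code. Below is a minimal Python sketch. The full table of house scores is in the previous post, so the scores here are a hypothetical reconstruction chosen to be consistent with the numbers quoted above (House 1’s row is as given in the text); the weights and cut-offs follow the examples:

```python
# A sketch of the four strategies above, run on a hypothetical house table
# reconstructed to be consistent with the numbers quoted in this post.
houses = {  # attribute scores, 1 (worst) to 7 (best)
    "House 1": {"area": 1, "location": 1, "price": 7, "quality": 5, "design": 1},
    "House 2": {"area": 6, "location": 7, "price": 1, "quality": 6, "design": 5},
    "House 3": {"area": 3, "location": 3, "price": 3, "quality": 3, "design": 3},
    "House 4": {"area": 4, "location": 4, "price": 6, "quality": 5, "design": 4},
    "House 5": {"area": 7, "location": 7, "price": 1, "quality": 7, "design": 7},
}
weights = {"area": 0.15, "location": 0.25, "price": 0.35,
           "quality": 0.10, "design": 0.15}

def weighted_additive(options):
    # Pick the alternative with the highest sum of weight x performance
    return max(options, key=lambda h: sum(
        weights[a] * s for a, s in options[h].items()))

def lexicographic(options, top_attr="price"):
    # Pick the best performer on the single most important attribute
    return max(options, key=lambda h: options[h][top_attr])

def satisficing(options, cutoff=2):
    # Take alternatives in order; accept the first with no attribute below cutoff
    for h, scores in options.items():
        if all(s >= cutoff for s in scores.values()):
            return h
    return None

def elimination_by_aspects(options, attrs=("price", "location"), cutoff=2):
    # Eliminate on each attribute in order of importance, then pick the
    # best on the most important attribute among the survivors
    remaining = dict(options)
    for a in attrs:
        remaining = {h: s for h, s in remaining.items() if s[a] >= cutoff}
    return lexicographic(remaining, attrs[0]) if remaining else None

print(weighted_additive(houses))        # House 5
print(lexicographic(houses))            # House 1
print(satisficing(houses))              # House 3
print(elimination_by_aspects(houses))   # House 4
```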

As one can see, the choice changes depending upon which decision-making strategy the consumer uses, even though the information available is identical. The implications are extremely important for the way alternatives are presented in a store or on a search engine.

In the next post we will see how preferences or choices may change depending upon how information is presented.

Solutions to the puzzles in the previous post

If you have not solved the puzzles at the end of the post “Why do I call them Insight Miners?”, here are the solutions.

Puzzle 1: Suppose you are given a candle, a match box and a set of pins/tacks. Your job is to light the room by attaching the candle to a wall.

Solution: Empty the matchbox, fix its tray to the wall with the pins/tacks, light the candle and affix it to the tray.


Puzzle 2: Connect the nine dots without lifting the pencil and using no more than four straight lines. The nine dots are arranged in three parallel rows, each row having three dots.


[Image: four-line solution to the nine-dots puzzle]

In the above solution, another form of functional fixedness has crept in. Further, it is possible to connect the dots with three lines instead of four. Anybody who solves these two problems gets a free dekko of my next post.

Puzzle 3: Connect the three words – aid, rubber, wagon. Answer: Band. More here.

Why do I call them Insight Miners?

At matrimony.com we started a cool new group called the Insight Miners. I am often asked why they are called so, when the usual nomenclature would be Data Miners or Business Analysts.

Well, I think it is a misnomer to call them data miners. What is the point of mining for soil? One mines soil for precious things such as diamonds, and likewise one mines data for insights.

I also have another reason to prefer the name Insight Miners. Data usually tells what has happened and rarely why it has happened. For example, at matrimony.com we collect information on whether a male member is a regular smoker, an occasional smoker or a non-smoker. One of the insight miners came up with a very counter-intuitive piece of data on the preferences of women. They preferred, of course, a non-smoker the most. But surprisingly, the preference for ‘occasional smoker’ was lower than that for ‘regular smoker’. This piece of data became an insight when a separate short survey revealed why women preferred so: they thought men who declared themselves occasional smokers were not being quite honest. Data becomes an insight when such causal relationships (i.e. answers to ‘why’) are established.

The short survey was not the first thing that occurred to us when the puzzle surfaced; several probable reasons were put forth before it. In psychology, insight is the sudden discovery of the correct solution following incorrect attempts based on trial and error. This is just like the way one mines for diamonds.

So what does it take for an Insight Miner to succeed?

Of course, the basics of data mining are a given 🙂  Skills in logic, deduction, induction, etc. make one an amateur. But it takes a lot more to become a pro.

A pro is one who has broken functional fixedness, has spatial ability and has verbal ability! What are these?

Breaking functional fixedness is using objects in ways one is not accustomed to. Suppose you are given a candle, a match box and a set of pins/tacks. Your job is to light the room by attaching the candle to a wall.

Spatial ability is the capacity to think outside the box for solutions. For example, connect the nine dots without lifting the pencil and using no more than four lines. The nine dots are arranged in three parallel rows, each row having three dots: imagine a square with dots at the corners and at the centre of each side, and a ninth dot at the centre of the square.

Verbal ability is the knack of connecting seemingly unconnected words. Crosswords are good examples. Consider connecting the three words – aid, rubber, wagon.

If you have not cracked the above three…read my next post (and desist from applying for an Insight Miner’s post at matrimony.com).

A good practice borrowed from Sports Analytics

I was recently reading about Roland Beech, the performance analyst of the Dallas Mavericks. He sits on the bench along with the players and observes them up close, before and after his performance analysis of the players (Sports Analytics). So do analysts from SportsMechanics India; they literally travel along with the teams. You may ask, ‘So what is the advantage?’

I think the key advantage, when analysts are so close to the teams, is an immense feel for the real dynamics of the sport and its players, its intricacies and nuances, and ultimately an understanding of what really works. Consider a match between Chennai Super Kings (CSK) and Mumbai Indians (MI). Lasith Malinga (MI) has destroyed CSK’s top three within the power play. Real-time analytics from SportsMechanics will suggest which player may be the best to face Malinga. But an analyst journeying deeply with the CSK team may instead suggest the next-best player. That is not all: the analyst can include in the modelling engine several key variables that were ‘visible’ in the trenches but not back in the ‘war room’. SportsMechanics founder Ramky says, “If Shikhar Dhawan feels that 150 is a par score and our engine says it’s 180, we ask the team which option they want to go for. Then we give the team five ways to approach both par scores.” That is the advantage of analysts fighting it out in the trenches.

Often, we miss the benefits of such proximity in business analytics. I find analyst teams working in isolation, with little understanding of what is really needed and what works. Many business leaders keep analysts right next to them, giving the impression that such analysts are actually ‘in the trenches’. But I disagree: the trenches are the last-mile connect of business operations and development, namely the telecallers, the sales force, the programmers, etc.

At matrimony.com the business analysts (called Insight Miners) work very closely with the business teams and are responsible not merely for insights but for actionable insights; they have to implement the insights together with the business teams. Recently, the Insight Miners developed MIMA (Matrimony.com’s Intelligent Matchmaking Algorithm), a machine-learning recommendation engine working on big data. The measure of their success is not just developing the engine, but having it implemented alongside scores of technologists and ensuring that key metrics move up significantly. The team may generate a smaller number of insights, but whatever is generated makes much more impact.

Of course, there is the added benefit of higher motivation, since every insight fructifies into action.

Analyse this! An analysis of an analysis: Part – 1

In the previous post, “Two tales: A buy and a no-buy”, we saw how amazed I was by my student’s complex and incredible spreadsheet, which had a set of 15 car brands and, for each brand, a set of 15 features, e.g., mileage. Her idea was to use that evaluation sheet to choose a car.

Probably most of us would engage in such an analysis because it directly elevates the quality of the buy. We would do so, if not so elaborately then at least briefly, and if not in a physical spreadsheet then at least in a mental one.

Such ‘spreadsheet’ analysis, especially how well it is constructed and used, has a serious implication: it impacts an individual’s quality of decisions and thereby her/his quality of life. So let’s analyze the analysis!

Instead of the gigantic 15×15 table for a car-buying situation that my student produced, I have constructed a very simplified and hypothetical home-buying task, illustrated in the table below.
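The table itself appeared in the original post as an image. By way of illustration, here is one possible set of values on the subjective 1-to-7 scale described next; these numbers are a hypothetical reconstruction, chosen to be consistent with the weighted scores and eliminations worked through in Part 2 above (and used in the code sketches there), so the post’s original table may well have differed.

Attribute     House 1  House 2  House 3  House 4  House 5
Floor Area       1        5        2        4        7
Location         1        6        4        5        7
Price            7        1        3        6        1
Quality          5        7        2        4        7
Design           1        7        3        3        7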

Evaluations of objective information are subjective. For one individual, a floor area of 1,200 sq ft may be ‘very small’ and 1,400 sq ft ‘very large’. Similarly, ‘very far’ may be a kilometre away from the office and ‘very near’ just two blocks away. That is, five houses within a square-kilometre zone, with barely 10% variation in floor area, price, etc., may still produce the table of analysis above. The values in the table are subjective, on an imaginary (and personal) scale of 1 to 7, where 1 is the lowest value in terms of benefit and 7 the highest. Individuals will most likely not use such a scale, but it has some utility to us in understanding how decisions are actually made.

We will now analyze how the quality of the decision may get impacted.

1. The larger the set of alternatives (in the above example there are five alternative houses), the more difficult the analysis of the spreadsheet becomes. The size of the set is a function of how good one’s memory is and how deep and wide the search for information goes. I am told that a lazy person will rely more on memory.

The advent of search engines and portals may result in a very large number of alternatives. For instance, IndiaProperty.com lists as many as 25,000 new properties for purchase in my city of residence, Chennai.

The type of problem being solved will also impact the number of alternatives considered; for example, in a home-buying situation a person will want more alternatives than in a cold-beverage buying situation, where it may just be a repeat of the last decision.

2. Analysis will be more difficult with a larger number of features (in the above table they are “Floor Area”, “Location”, etc.). Just today I saw a car advertisement that listed twenty-eight features and called the car “fully loaded”; it included features such as a “puddle light”, “remote keyless entry”, etc.

In practice, a person may list anywhere from a few to several features per alternative. The list depends upon the person’s ability to pay attention, learn, retain things in memory, etc. It also depends on what is being bought.

3. A feature may be more or less important than another. For example, “location” may be more important than “floor area”. Possibly a person has a couple of features that matter most. A large number of important features will render analysis difficult: it may become hard to decide upon a suitable alternative, since no alternative may match such strict criteria.

If there are too many important features, there may be too few weak features that can be traded off. The trade-off may also be emotionally difficult: if both price and quality are important, which will a person give up?

4. Each of the features has a potential pay-off to a person. The question is whether the person is clear about such pay-offs. For example, a person may be clear about whether a “Location” is near or not, but may not be so sure about the “Quality of Construction”. Lack of clarity about the potential pay-offs hampers the quality of the decision.

5. Sometimes a person may not have much information about a feature; for example, what exactly is the quality of construction? In such cases, the feature runs the risk of being rated poorly or inaccurately.

A person may also ignore information about several features. For instance, s/he may consider “Internal Design” to be the most important feature and not bother collecting or analyzing information about any other, in which case House 2 will be selected.

6. Sometimes an alternative will have features that are absent in the other alternatives. For example, only one house may be in a high-rise building while the rest are single-storey. The more such non-comparable features, the more difficult the analysis.

While we have just seen how the construction of the spreadsheet may impact the quality of decision making, it is also true that even if the spreadsheet were constructed identically, two individuals may decide very differently depending on the decision strategy each adopts.

More on that in the next post.