REVIEWER’S NOTES: We came across two very enriching books on artificial intelligence and the issues surrounding the field. We also observed strong complementarity between them, which led us to attempt a single review combining the wisdom of both. This review is still a work in progress.
Gary Marcus and Ernest Davis’ REBOOTING AI takes a long, meandering path through the historical and philosophical underpinnings of what we know as AI today, and gives the reader a deeper look at its major shortcomings. It merely touches upon conceptual ideas that may serve as rules of thumb for fixing AI’s issues; it focuses on concept and stops short of providing actionable ideas.
Kartik Hosanagar’s A HUMAN’S GUIDE TO MACHINE INTELLIGENCE, on the other hand, does devote one of its three parts to the overall landscape of machine-based intelligence. Its primary focus, however, is on examining AI’s many issues objectively, testing and debunking the myths and perceptions that surround them. It then proceeds to provide a very clear framework for thinking about AI’s issues, and about our actions aimed at regulating and minimizing the impact of its frequent demonstrations of fallibility.

A multitude of interesting books about AI and machine learning have flooded the market in recent times. They usually fall into one of two categories:

  1. The vast majority that (sadly) merely cheerlead the bandwagon built by technology marketers with a cursory understanding of AI and machine learning. These books propagate the prevailing myths that these technologies will transform humanity overnight and magically solve every intractable problem out there.
  2. A quiet minority built on deep thinking, written by authors with strong academic research credentials and time spent studying the path these emerging technologies are on. Our analysts look out for such books, many of which they love to review for our executive audience.

Books in the latter category are often replete with cautionary tales: overpromises made to market a new technology, narrow and deeply restricted solutions used to demonstrate new capabilities, and pressure on public companies to take such products to market and monetize them as quickly as possible. Shareholder pressures have shaped most of what the largest technology companies have put out as AI-enabled products.

To be clear, it’s still very early days for AI and machine learning, and no author could possibly visualize every direction these technologies might take in the foreseeable future. What is much clearer is that a deeper analysis of present approaches to building AI-based products and solutions, along with rigorous testing of myths and perceptions, may be the best way to identify the most promising paths, and to avoid serious mishaps and damage to society along the way.

Kartik Hosanagar’s upcoming book A HUMAN’S GUIDE TO MACHINE INTELLIGENCE falls into the latter category of sensible books. A professor at Wharton and a serial entrepreneur, the author draws on direct experience: he takes the proliferating assumptions and hypotheses about how popular machine learning algorithms behave in consumer-facing apps and runs them through the wringer of well-designed, data-driven experiments.

The book raises many issues that other academics and researchers have covered ad nauseam in prior books and research papers. Where Hosanagar’s work scores significantly higher is in his sincere effort to bring these issues to the lay person, the one really at the receiving end of consumer-targeted AI algorithms, in language that makes their implications easy to grasp.


The book organizes its explorations into three parts.

The first part, useful for a first-time reader on the subject, draws examples from the same pool of well-publicized use cases and popular apps that most other books rely on: how Pandora’s music recommendation algorithm works very differently from Spotify’s; how the algorithms underlying online dating services make choices in very different ways; how Netflix recommends differently from Amazon Prime; how Google’s Search and Autocomplete projects uncovered consumer insights no one anticipated at first; DeepMind’s AlphaGo adventures; and the much-blogged misadventures (let’s call them social experiments) of Microsoft’s chatbots. It also shows how what we perceive as intentional algorithmic bias is often the result of the incessant optimization data scientists must put algorithms through to deliver on their design goals: steering low-income and minority customers toward less-advantaged credit cards or loan packages, or serving higher-paying executive career opportunities only to male applicants on job sites.

The explorations of “Free Will In An Algorithmic World” and the “Law Of Unanticipated Consequences” are valuable contextual windows onto the more exciting material that comes later in the book. For those interested, the author also offers a simple but effective primer on the core concepts behind machine learning, along with a historical backdrop of the technical developments leading up to where we stand today.


This part explores the design considerations underlying the expert systems of the past, contrasting them with those of machine learning. It covers the trade-offs and compromises a data scientist must make in choosing the end uses he or she builds systems for, and the important distinction between building systems on the existing knowledge repository of “experts”, which never reach the realm of “undiscovered knowledge”, and a clean-slate approach in which a system learns from data drawn from multiple sources of prior knowledge, in the expectation that new knowledge can be uncovered by connecting disparate fields.


The real gem of the book lies hidden in this second part. The author takes the reader through a fascinating series of examples from various fields, as well as social and algorithmic experiments that he and colleagues conducted as part of academic research at Wharton and other institutions, to test the various flaws attributed to AI algorithms today. The key questions: Do these flaws lie entirely within the algorithms? Do our own flaws in perceiving the world around us unconsciously influence the performance expectations we set for our algorithms? Are algorithms the primary drivers of online echo chambers, and how do we quantify their role against other factors such as training data and the data derived from users’ own interactions with those algorithms? What explains the deep polarization of social content around political discussions, and how do we get nudged toward opinions that align with our own? How does Facebook’s News Feed algorithm choose news for us, thereby influencing our subconscious content choices? Strikingly, the same algorithm can both increase and decrease polarization depending on the underlying preferences of its users.


An important source of insights about the role of machine intelligence, and the imperatives for regulating it, arrives in the last part of the book.
Our own behavior and perception play a role in determining our satisfaction with, and often our outrage at, the power algorithms wield over our lives today, and they often shape the political zeal with which we demand regulation of algorithms.
Using several real examples, Kartik Hosanagar demonstrates the contradictory responses we exhibit when choosing one AI-based service over another. These responses originate in our own inherent biases, whether we are choosing a robo-advisor for investment portfolio management, making hiring decisions, or driving an autonomous Tesla.


Building on studies of these inconsistencies, the book proceeds to share a very effective conceptual framework for navigating the immense model complexity and black-box nature of algorithms, and for thinking through decisions about regulation.
What drives trust in algorithms? How do we as humans respond when algorithms make mistakes as we observe their functioning over a period of time? How do we reconcile the idea of algorithms with the possibility that they can fail, even while accepting that humans are fallible? Are there systematic patterns in the types of decisions we would entrust to algorithms over humans?


Will letting people look inside the black box of algorithms necessarily take away their mistrust, hostility, and fear? Is transparency the major factor in fostering trust? Research aimed at defining precisely how transparency can best enhance trust is still in its infancy.

There is a human tendency to anthropomorphize machines and to apply to them the same social rules and heuristics we employ in human interactions.

While newer machine learning algorithms are genuinely capable, it is important to remember that the hype surrounding them often exceeds their actual abilities.