On Kahneman and AI
Test/Train Split, System I / System II
Daniel Kahneman is one of the greatest social scientists of the modern era; his ideas have been fundamental to understanding the wheels and cogs that make up human intelligence. His (and Amos Tversky’s) two-system model of the mind is one of the most impactful ideas in behavioral economics and the theory of human cognition. As modern AI practitioners have hinted, the ideas that expand our understanding of human intelligence can also provide insight into the present and future of artificial intelligence.
Fundamentally, the two-system theory states that the mind works in two different modes of operation. System I is characterized by near-automatic responses that require little cognitive effort and probably accounts for about ninety percent of our thinking activity. System II is characterized by slow, deliberate, and effortful use of our brain to make decisions. A feature of the human brain is its ability to use deliberate practice of a skill to outsource the cognitive effort exerted by System II onto the reflex-based System I.
Issues arise when System I thinks too fast and therefore introduces errors into our thinking process. Biases such as the halo effect, availability bias, anchoring, and a plethora of other experimentally observed biases have been explained by the two-system model. Kahneman’s and Tversky’s theories expand models of cognition and are able to explain otherwise unexplainable anomalies in decision theory and behavioral economics.
Kahneman makes an important clarification: the System I processes of our mind should not be confused with instincts. Instincts are behaviors that do not need to be learned by the individual. Once a skill has been acquired, such as riding a bicycle or cooking a meal, System I takes control and performs the task as if it were completely natural to the individual; but it must be noted that this skill was at some point learned and integrated into System I through cognitive effort applied by System II. People who are good at what they do seem to act instinctively, but this is fundamentally not a case of biological instinct. The only behavior that does seem instinctive is the ability to use System II to learn skills that later allow us to interact with the world through System I. Experiments on the active learning process of babies clearly demonstrate that the drive to constantly interact with the world and learn from it is instinctive.
Although certain parallels exist between the two phases of training and inference in machine learning models and the two-system model, neither Kahneman nor Fridman is convinced that current deep learning methods are actually the path to general intelligence. It could be argued that training a machine learning model with gradient descent is the effortful process of learning to perform a task, much like System II applies cognitive load to get neurons to wire and fire together. It could similarly be argued that the inference, or prediction, phase of a machine learning model corresponds to the indeliberate and effortless thinking that System I conducts.
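To make the analogy concrete, here is a minimal sketch of the split, assuming scikit-learn and its toy digits dataset (the model and dataset choices are my own illustrative assumptions, nothing from Kahneman's work): the fit step is slow, iterative, and effortful, like System II acquiring a skill, while the predict step is a cheap, near-automatic mapping, like System I exercising it.

    # Illustrative sketch of the training-vs-inference analogy (assumptions:
    # scikit-learn is installed; a toy digits dataset stands in for "a skill").
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "System II": slow, effortful, iterative learning via gradient descent.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    model.fit(X_train, y_train)

    # "System I": fast, near-automatic responses once the skill is acquired.
    predictions = model.predict(X_test)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

The analogy only goes so far, of course, which is exactly the point the following paragraphs make.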
While deep learning is successful at accurately predicting highly complex phenomena, it is not certain that these processes are analogous to genuine general human intelligence. Just because a model is accurate does not mean it is correct. On the other hand, it is also very possible that with more data and more advanced methods (such as multimodal models) we will be able to build highly intelligent systems that, on the whole, operate like a System II type of mechanism.
It seems relevant for AI researchers to put further focus on drawing parallels between accurate theories of human intelligence and artificial intelligence in more meaningful ways. Just as neural networks have had immense success, fundamentally new models of computation based on the dichotomous nature of intelligence might prove very important to future streams of AI research.
Why did the human cross the road
Once you pass the threshold from absolute novice to beginning practitioner of ML/AI, it might seem obvious that self-driving cars are a solvable problem. You have already trained dozens of regression and classification models, so you naturally progress to more complex tasks such as object detection or image segmentation. Maybe you took a more NLP-oriented path and focused on machine translation or text generation. No matter what direction you take, it is hard not to be mesmerized by the outstanding performance of deep learning. How hard can it be to train a model on data from some sensors and cameras to drive a car? In a controlled environment like a closed circuit, probably very doable; in the real world, however, the problem is extremely difficult. Driving is not like other solved problems in AI: it happens in the real world, with random fluctuations that do not occur in a closed-world scenario. Even the most complex computer games do not begin to compare with the amount of non-stationary volatility that exists in the real world. A large part of this volatility is caused by open-world interaction with humans. The world in which we live is not turbulent because of nature; it is turbulent because of humans.
Fatal outcomes can occur if self-driving cars are not able to accurately anticipate human decisions, and it seems that the complexity of human decision making is not yet accurately modeled by deep learning. Autonomous driving is a lot harder than people realize because it requires modeling the human mind with high accuracy. We are trying to solve the problem with deep learning, a technology we already know is not yet able to capture the complexity the problem requires. There are several human-machine interaction scenarios in which the machine must make non-trivial predictions about human behavior, such as pedestrians crossing the street or drivers changing lanes. An interesting, non-trivial fact about individuals crossing the street amidst traffic is that humans tend to communicate their commitment to crossing by a short observation of street conditions followed by a swift and committed walk across.
The mere act of knowing whether a human is going to cross the road requires predicting, or anticipating, human action. This anticipation can be produced by a model, but we know that the model is fundamentally wrong and does not capture the action accurately from a scientific realism** point of view.
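As a toy illustration of that instrumentalist point, consider a crossing-intent predictor built on the two cues mentioned above: how briefly the pedestrian scans the street and how briskly they step off the curb. The feature names, thresholds, synthetic data, and choice of logistic regression below are assumptions made purely for illustration; they are not drawn from any self-driving system or study.

    # Hypothetical crossing-intent sketch: synthetic data, assumed features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    scan_time = rng.uniform(0.0, 5.0, n)    # seconds spent observing traffic
    step_speed = rng.uniform(0.5, 2.0, n)   # speed of the first strides (m/s)
    # Assumed ground truth: a short scan plus a committed stride means crossing.
    crossing = ((scan_time < 2.0) & (step_speed > 1.2)).astype(int)

    X = np.column_stack([scan_time, step_speed])
    model = LogisticRegression().fit(X, crossing)

    # The model can predict well on data like this, yet it says nothing true
    # about the decision process inside the pedestrian's head.
    print(model.predict_proba([[1.0, 1.5]])[0, 1])   # brief scan, brisk stride
    print(model.predict_proba([[4.0, 0.8]])[0, 1])   # long scan, hesitant stride

Such a model can be useful in the instrumentalist sense while remaining wrong in the realist sense, which is exactly the gap the next paragraphs describe.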
AI researchers can create billion-scale datasets and report outstanding evaluation metrics on certain tasks involved in self-driving, but they cannot confidently say that their models accurately represent the human behavior they are predicting.
Just as we understand why airplanes fly, and that understanding is how we make them safe, as a society we cannot expect to make self-driving cars safe without understanding how humans will interact with other humans and machines on the road.
In order to accurately model human decision making using up-to-date models, such as those devised by Kahneman (and others), it seems that we need to further integrate the ideas of behavioral economics and psychology into AI. This would allow us to better predict human decision making with models we can observe to be more accurate in their predictive power. It must be noted that while Kahneman’s models are accurate from a scientific instrumentalist* point of view, they are not from a scientific realism point of view; however, this should not stop us from using them. “All models are wrong, but some are useful.”
-
*Scientific Instrumentalism is the philosophy that the role of science is to create useful knowledge for humans to prosper.
-
**Scientific Realism is the philosophy that the role of science is to understand the true nature of reality.