The arrival of ChatGPT brought new questions to the forefront of our thinking. Perhaps it is more accurate to say it reminded us of questions we have been unable to answer, some of them for a very long time. The hard problem of consciousness is one I have been thinking about.
We have also reopened the problem of defining intelligence. It makes sense that if we are going to call these systems “intelligent,” we must agree on what we mean; otherwise we may fool ourselves into thinking we have it when we don’t. A clear definition will help us quantify it, although we know Goodhart’s Law will surely apply: as soon as the measure becomes a target, it stops being meaningful, because we will improve the scores on the measurements rather than the performance of the systems. A clear definition will also support the qualitative understanding needed to differentiate, for example, artificial intelligence from natural intelligence (in its many forms–humans cannot claim to be the only creatures that demonstrate intelligence).
As a fan of Stephen Jay Gould, I am very familiar with the dubious history of intelligence testing and the equally dubious attempts to measure this multifaceted aspect of humans as a single number. I highly recommend his book The Mismeasure of Man.
I’ve been listening to Stuart Russell’s 2019 book Human Compatible on my walks this week. His definition of intelligence, which I paraphrase as “the ability to affect the world to reach your goals based on your perceptions,” seems to be one of the best I have encountered. Consider the implications of this definition (a toy sketch after the list tries to make it concrete):
- It is inherently subjective–Intelligence and goals cannot be separated, so we can only evaluate intelligence in terms of goals.
- It is more diverse than commonly held–What you are trying to achieve defines your intelligence. As an educator I have both seen this in my students and seen my colleagues reject this conclusion.
- Adaptability is a key characteristic–Our goals change. Our perceptions of the relevant factors change. If our intelligence cannot be updated to reflect these new realities, then we must be judged less intelligent.
- It is utilitarian–Absent the context of one’s goals, there seems to be no intelligence. We may be able to identify some abilities that apply to many goals (ostensibly these form the general education curriculum that comprises the school experience), but we must recognize the limited success we will have. We know far transfer is a great idea that just isn’t observed–teaching students to play chess helps them become better chess players, but they are unlikely to apply those strategies to other problems.
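To make Russell’s definition concrete, here is a minimal sketch, in Python, of a goal-directed agent on a number line. Everything in it (the act_toward and run_agent functions, the blocked cells, the fallback goal) is my own hypothetical illustration, not anything from Human Compatible: the agent acts on its perceptions to reach a goal, and revises the goal when perception shows the original one is unreachable.

```python
# A toy illustration of Russell-style intelligence: act on the world,
# guided by perceptions, to reach a goal, and revise the goal when
# perception says it cannot be reached. All names are hypothetical.

def act_toward(position: int, goal: int) -> int:
    """Take one step along the number line toward the goal."""
    if position < goal:
        return position + 1
    if position > goal:
        return position - 1
    return position

def run_agent(start: int, goal: int, blocked: set[int], fallback: int) -> int:
    """Pursue `goal`; if perception reveals the path is blocked, adopt `fallback`.

    The "intelligence" here is not the goal itself but the coupling of
    perception (is the next cell blocked?) to action (step forward, or
    update the goal).
    """
    position = start
    while position != goal:
        next_position = act_toward(position, goal)
        if next_position in blocked:  # perception: the path is closed
            goal = fallback           # adaptability: revise the goal
            continue
        position = next_position
    return position

if __name__ == "__main__":
    # The agent starts at 0, aims for 10, discovers cell 5 is blocked,
    # and settles on the reachable goal 4 instead.
    final = run_agent(start=0, goal=10, blocked={5}, fallback=4)
    print(f"Agent stopped at {final}")
```

The interesting branch is the last one: when perception contradicts the plan, updating the goal is itself an intelligent act, a thread I pick up again below.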
I am always skeptical of the goal-setting process. We have lots of advice on how to do it, but many of the acronym-based frameworks (SMART being the most frequently used in my circles) seem to ignore the fundamental reality that many of the goals we set cannot be achieved by our own efforts, or depend on the actions of others. We may all want to be hired as president of the college, but that is decided by the board of trustees; we may do what we can to be prepared, but we have no control over the many factors that affect the position becoming vacant, the alignment of our skills with those they believe they need, and the absence of bias that leads to someone’s friend being hired.
It seems that an intelligent agent that is prevented from reaching a goal could be judged less intelligent than one that does achieve its goals. Russell’s definition gives the intelligent agent an option, however: it can better understand the situation, update its goals, and go about reaching them. Being able to turn its own intelligence on its own situation seems a valuable characteristic of an intelligent agent; maybe the most important one.