I walked past the books in the discount retailer I was visiting to purchase storage supplies for my wife’s home office. When she arrived at the section she needed, she noticed I was not with her. When I finally caught up, she saw the book in my hands: Walter Isaacson’s 2014 The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, unread, with a little shelf wear and the remainder marks that characterize much of my recently downsized library.
As an instructional technology leader and professional whose office (until the campuses closed a few weeks ago) is the business studio, I found that the title and subtitle piqued several of my interests. It was the only purchase we made that day.
The book tells an interesting, if well-known, story: from Ada, Countess of Lovelace, through Jimmy Wales, and the discoveries they made that resulted in the digital tools that are the backbone of so much of our economy and educational systems. Contrary to the title, the only innovator mentioned in any chapter title is Ada. Most chapter titles are the names of innovations in hardware, software, or connectivity, and each chapter recounts the stories of the individuals who contributed to those innovations.
The inescapable theme of a book on innovation is the supremacy of collaboration, a theme familiar to those who have been paying attention to thinking about innovation in recent decades. The stories in the book were well-told and well-researched. I was entertained by the book, but I was becoming disappointed that there were to be no more important “takeaways” than “innovation requires collaboration.” Then I found the final chapter, the second to carry Ada’s name in its title.
In the final chapter, Isaacson drops us at the present (or at the present six years ago, which I believe is not much different from today’s present). At this point in the history of computing, it seems we can conclude:
- Artificial intelligence is about 20 years away from reaching the reality its greatest advocates predict. That 20-year window has remained consistently that far in the future since the field’s origins in the 1950s. The reason is that computers can easily do the things that are hard for humans, while the things that are easy for humans are hard for computers. This is sometimes called Moravec’s paradox.
- The most successful form of intelligence humans have discovered appears to be the symbiosis (although the biology student in me recoils at the use of the term) of humans and computers. Who wins chess tournaments? Not the computer programs. Not the grandmasters. The tournaments are won by players who integrate multiple computers into their play.
As I closed the book, I felt reassured. Humans do have a future. A wonderfully productive future. We do not have to wait 20 years for this future to arrive. What we need to do is get our public health crisis under control (which will include rebuilding health care systems and getting serious about… well… let me stop this rant and get back on topic).
Once we get back to sharing spaces with people and computers, we also need to get serious about understanding innovation. New ways of doing familiar things. New ways of doing unfamiliar things. Doing what we didn’t know we needed to do.
This will require that we jettison the old boundaries. Humanities faculty… you need to adopt technology. That is where data and human experience exist today. STEM faculty… you need to stop seeing your disciplines as information to be mastered… the knowledge one develops in your classrooms has no value if it is not used to solve human problems. This is the lesson we learn from the hackers, geniuses, and geeks who created the digital revolution.