Some Observations of Generative AI

Generative AI has been a “thing” for more than a year. My colleagues in education seem to be surviving despite warnings that “the sky is falling.” Some are working to integrate it into their instruction; some are avoiding it. I was a student when handheld calculators became ubiquitous, and I was a teacher when computers, Wikipedia, and most of the other digital technologies now common in schools arrived. The pattern of predicting extreme effects, then slowly adopting and adapting to the technology, is familiar. What we are seeing with generative AI is not unusual.

Actually, this is exactly what we would expect. Technology is socially constructed. Designers have their own concept of the problem they are solving, but once a technology is in the hands of users, it is the users who determine which problems it will solve, how those solutions are realized, and what is done with it that the designers never imagined.

While the pattern of how educators are responding to generative AI is familiar, other observations about how it is affecting us are less so.

Consider the influences we will see on employment. Jobs have been displaced by technology for centuries, but when we are in the middle of the change, no one can predict what the future will look like. Generative AI will come for some of our jobs. The best strategy for those who want to stay employed seems to be developing and refining the skills that AI cannot easily replicate. What exactly those skills are, and how they can be used to stay employed, we cannot yet say.

Generative AI is also making us question what we mean by intelligence. This, of course, is not a new problem. For a few decades, philosophers have pondered what they call “the hard problem.” If we reject Descartes’ dualism and conclude that human (or any other species’) consciousness arises from the observable electrical activity in brains, then we need to elucidate a mechanism. Perhaps, with these new models to explore, we can begin making progress on that problem.