6 Comments

Fascinating!

Mar 19, 2023 · Liked by Lee R. Nackman

Thanks Lee. Weizenbaum had just written his book when I was an undergraduate at MIT, and at the time many faculty and students felt he was over-reacting. However, he had supporters as well, primarily from the humanities and the social and behavioral sciences, who recognized the dangers of over-attributing intelligence to things. I also agree with Sridhar that "responsible actors" using general purpose technologies for humanity-level good, not bad, purposes is key. To better understand one approach to taming today's large language models, this paper is instructive: Bai et al. (2022), Constitutional AI: Harmlessness from AI Feedback, https://arxiv.org/abs/2212.08073 - search the paper for the word "Gandhi" for some especially interesting "prompt engineering" insights. In my short history of AI icons of progress, which includes the Dartmouth conference, Deep Blue, Watson Jeopardy!, AlphaGo/Fold, etc., I also include a few key papers, and just added "Constitutional AI" to that list.

Mar 18, 2023 · Liked by Lee R. Nackman

Lee - nice walk down the 'AI' memory lane, starting with classic Eliza from Feigenbaum's book. Without question, each wave of innovation has resulted in 'experts' over-promising and under-delivering - however, the march goes on, with AI becoming more and more pervasive in business after business (and of course for consumers, governments, and nation states). The key question continues to be, for each wave of innovation, tool, and technology: "Will we responsible humans use the technology responsibly, ethically, and kindly for the common good, or will we go the other way?" This is especially true for 'general purpose technologies' like electricity, software, and now AI.

Sridhar
