Brenna is obsessed with thinking about AI in education these days, so she’s joined by the only other paranoid weirdo she knows to talk about it.
Show Notes
The tool in question today is ChatGPT. A key programming note: when you play with this tool, you are making it stronger and your data is being used for OpenAI’s profit (they are not, in fact, anything like “open”). Brenna still thinks it’s important to understand the tool you’re trying to design learning around, but forewarned is forearmed, and you may choose to opt out.
Brenna talked briefly about next year’s Digital Detox — more on that soon. But you can catch up on last year’s Digital Detox in the meantime!
Brenna talked about Why They Can’t Write by John Warner, but you might also enjoy his recent “ChatGPT Can’t Kill Anything Worth Saving.”
Brian referenced Brenna’s open textbook project, Writing For Success: 1st Canadian H5P Edition.
Brenna referenced Cory Doctorow’s concept of the “shitty tech adoption curve,” which posits that if you want to know what horrible technology is coming to make your life worse, look at what is imposed on the most vulnerable people around you. He wrote a great short story to illustrate it called “Affordances.”
Brian referenced the obviously fair and even-handed “The College Essay Is Dead” from The Atlantic.
Transcripts
Available for download as a .docx file here.
Available for download as a .pdf file here.
4 Comments
Thanks for the podcast. I tried out the bot, asking it to opine on how AI-generated discourse might, over time, impact the way people communicate generally. Here’s the response. It took, perhaps, 5 seconds to generate. Impressive and disturbing tech:
“It’s difficult to predict exactly how the inherent artificiality of AI-generated writing will affect communication as AI becomes more integrated into word processing programs and everyday usage. However, it’s possible that the use of AI-generated writing could lead to a decrease in the value placed on human-generated content and potentially even cause confusion or misunderstandings if the artificial nature of the writing is not clearly disclosed.
“One potential impact of AI-generated writing on communication is that it could lead to a decrease in the demand for human writers, as AI algorithms may be able to produce content more quickly and efficiently than humans. This could have negative economic consequences for human writers and could also potentially lead to a reduction in the diversity of voices and perspectives represented in written communication.
“On the other hand, AI-generated writing could also have positive effects on communication. For example, AI algorithms may be able to analyze and summarize large amounts of information more quickly and accurately than humans, which could be useful for synthesizing and communicating complex ideas. Additionally, AI-generated writing may be able to assist humans with tasks such as translation and transcription, which could facilitate communication across language barriers.
“Overall, the impact of AI-generated writing on communication will depend on how it is used and the extent to which it is integrated into everyday usage. It will be important to carefully consider the potential consequences of using AI-generated writing and to ensure that the artificial nature of the writing is clearly disclosed in order to avoid confusion or misunderstandings.”
I love this, Will, and also am so troubled. There is something about “human writers” as a turn of phrase that does sound a bit like I am soon to be eaten by my robot overlord.
Thank you. I really enjoyed this conversation. Similar to Brian’s experience with Spotify, I have found that the algorithm constricts rather than expands my musical listening over time, and I think ChatGPT ultimately might be doing the same thing. It produces output that is a lesser version of our more complicated selves. Not that it isn’t impressive or can’t perform well at a designated task. I did some experimenting and found that it is great at writing organizational bafflegab like mission statements and strategic planning palaver. But isn’t a critical reader sceptical of that language to begin with? Like a lot of tech that makes its way into education, perhaps it will prompt us to further question our practices on issues like course content or assessment, in a way that helps us realize that the straightest line between two points is not necessarily the journey we’re looking for as educators.
Writing specialist Ken Macrorie called the kind of writing produced by ChatGPT “Engfish”: writing typical of first-year students bludgeoned by teachers into producing school-based, grammatically correct but tedious prose — a “feel-nothing, say-nothing language.” Has the five-paragraph essay come back, like the ghost of Christmas past, to haunt us?