Creative English Teacher.com

Can Stories Save Humanity from the Dangers of Artificial Intelligence?


By Zachary Hamby

Artificial Intelligence has arrived on the scene, and what was once science fiction has seemingly become science reality. While many are heralding the arrival of advanced A.I. with joy and wonder (teachers included), some are concerned about its rapid growth. What will the future hold if A.I.’s power to teach itself and expand its own boundaries increases? What unforeseen problems could arise from giving power over human life to something that does not have human values?

Conversations like this stir up images from The Terminator series and 2001: A Space Odyssey, where sentient A.I. threatens humanity. Some dismiss these warnings as alarmist while others insist we must be vigilant. Experts compare the arrival of intelligent A.I. to the discovery of fire or electricity. How will this change the world as we know it? Should this power be in the hands of entrepreneurial developers with little to no oversight? What can educators do to promote responsibility with this particular issue?

We need to be educated, and likewise, we need to educate our students. Rather than blindly accepting A.I. with a “Gee whiz! Isn’t that cool?” mentality (which I admit I have had at times), we have a duty to warn students of the potential dangers A.I. poses. Currently, ChatGPT can write our lesson plans, but we do not want it to write our future.

To educate my students on this issue, I created an article-analysis assignment. I began by curating a collection of seven articles that address potential problems with A.I., with publication dates ranging from 2015 to the present. I embedded links to each article in a Google Slides presentation, and as the students read each article, they responded to it in a companion document I created. Finally, we discussed their feelings about the future of A.I. and what safeguards (if any) they thought should be put in place to protect against the potential problems it poses.

This assignment fit nicely with the themes my seniors were already analyzing: scientific responsibility in Frankenstein and the abuse of power in 1984. As far as science and technology go, just because we can do something, does that mean we should? And since we know absolute power corrupts absolutely, should that power be placed in non-human hands? I even dubbed the activity “Fr-A.I.-nkenstein.” (I couldn’t help myself.) I also plan to use the activity with my sophomores, who will soon be reading Animal Farm.

Here is a breakdown of the seven articles. Click each title to make a copy in Google Drive.

Article 1: “Will Artificial Intelligence Destroy Humanity? Here Are Five Reasons Not to Worry” The first article, written in 2015, reassures readers there is little to worry about: A.I. is limited, it says, and does not pose a threat to society. (You will notice how this attitude changes in more recent articles.)

Article 2: “What’s the Deal with Artificial Intelligence Killing Humans?” This 2016 article has a great conversational style and addresses basic questions about what A.I. is and how science fiction has portrayed it as hostile to humanity.

Article 3: “A.I. Experts Are Increasingly Afraid of What They’re Creating” Jumping ahead six years, this 2022 analysis incorporates recent developments such as ChatGPT and presents a grimmer assessment of the risks associated with A.I.

Article 4: “Our Weird Robot Apocalypse: How Paper Clips Could Bring About the End of the World” This article does a great job of explaining Nick Bostrom’s thought experiment, in which a superintelligent A.I. system is given the simple task of producing as many paperclips as possible and ends up destroying the world.

Article 5: “Researchers Say Humans Would Not Be Able to Control Superintelligent A.I.” This article addresses the concern: What if A.I. becomes too intelligent? And by the time we discover it, will it be too late?

The first five articles, taken as a whole, present many potential problems with A.I., but I also wanted to give my students possible solutions. The final two articles propose safeguards that could be placed on A.I., including teaching it human values.

Article 6: “The Dangers of Not Aligning A.I. with Human Values” This article stresses the importance of imparting human values to A.I. and gives examples of how failing to do so could be disastrous.

Article 7: “Using Stories to Teach Human Values to Artificial Agents” How can A.I. learn human values? One team of researchers believes we must equip A.I. with the power to read and understand human stories, because our stories encapsulate human values and cultural norms. If A.I. can read and understand stories, those stories will teach it how to act and keep it from harming humanity. It’s an amazing idea. The system the researchers are developing is called Scheherazade, named after the storyteller of The Arabian Nights. As an English teacher, I always thought stories could save the world. Maybe I was right!

All of these materials are free to use in your own classroom, and I encourage you to do so. Add material to them as you see fit. This is an issue that needs to be analyzed and discussed by both young and old.

I cannot give my students the answers to all the questions that will arise with intelligent A.I.; I can only equip them with the power to think. The future is in their hands.

CLICK HERE TO MAKE A COPY OF THE PRESENTATION (INCLUDES LINKS TO PDFS OF ALL THE ARTICLES)

CLICK HERE TO COPY THE STUDENT NOTE SHEET


