The Human and the Machine


The second annual Washington College Presidential Symposium tackled artificial intelligence (AI) and its role in our future (or demise).

Students appear on stage during the Presidential Symposium’s first panel, which focused on artificial intelligence.

The symposium was a tremendous success when measured by interest, attendance, and audience participation. Three panels of experts from the Washington community discussed AI’s impact on everything from poetry and art to job interviews conducted by bots to the end of humanity. The keynote speech, delivered by College board member Ryder Daniels ’90, addressed AI’s already enormous impact on business and society and what we need to do going forward. 

AI is a mainstay villain of science fiction, with cyborgs and androids turning against their human masters or supercomputers harnessing robots and other machines to take over the planet. In reality, AI has quietly entered everyday life and is already so ubiquitous as to be almost invisible. When you enter a query into a search engine, talk to Siri or Alexa, or turn on subtitles to binge-watch a show on Netflix, you’re interacting with a limited form of AI. In the medical and research fields, AI has been used to identify cancerous cells in scans more efficiently than doctors. It has been able to search through massive databases to identify drugs that may treat diseases and conditions they were not developed to treat. A little more worrying is the rapid development of generative AI—a type of AI that can learn, change, and evolve with use—especially large language models. These can write a researched and well-written essay on Shakespeare for a college senior or generate fake news stories and flood social media with comments that appear to be from real people on a scale that would dwarf the Russian interference in the 2016 Presidential election. 

All the panelists agreed that AI is evolving and advancing rapidly and is so large a subject that it can’t easily be pigeonholed. The symposium’s central aims were to identify how AI is affecting specific spheres now, where it is leading us, and how rapidly and significantly it is about to alter our world.

Student Experience and Opinions of AI 

Poster for the symposium showing a butterfly landing on a robotic hand.

The first panel consisted of students who edit and write for student publications, students doing projects in the IDEAWORKS Innovation Center, and Brian Palmer, the center's director. The editors and writers felt that generative AI large language models, such as ChatGPT and Google’s Bard, are already problematic for education because they can write persuasive and natural-sounding papers. Natalie Martinaitis ’25 said that these models can only successfully produce papers up to about 12 pages before the writing becomes obviously machine-generated. They agreed that the creative arts were still somewhat immune because these AI tools were not yet creative enough to come up with original thoughts and forms of expression. After some pushback from the audience, including examples of AI-generated art winning international prizes, the panel acknowledged that the creative arts may not be immune to AI deception for much longer.

The panelists from the IDEAWORKS group took a very different tack. They have been experimenting with making the College’s electric-powered boat self-navigating. For that work, AI, paired with computers that can process enormous data sets in real time and make critical judgments about objects and threats, is invaluable. Their argument is that AI has enormous potential for good regarding the safety and efficiency of self-driving vehicles, whether in the water, on land, or in the air.

Faculty and Staff Research on AI

The second panel consisted of faculty and staff who have researched the role of AI in art, education, and the destruction of humanity—more on the latter later. Benjamin Tilghman, associate professor of art history, argued that art created by an inspired individual is a relatively modern concept. In the Middle Ages, art was often a collaborative affair “prompted” by a wealthy patron who commissioned a work, honed or embellished by a middleman who gave more specific instructions to the artist or artists, and further developed by the artists themselves. Tilghman wondered if this process was radically different from prompts given by a person to an AI tool to produce a work of art. Raven Bishop, assistant director of educational technology in the library, spoke about using AI as an educational tool. She gave examples of how virtual reality (experiencing a wholly created world as if entering the set of a movie) and augmented reality (where created characters, scenes, or information can be projected into the real world, like the creature in Pokémon Go) can be used to give students more profound experiences of topics they are studying.

Jordan Tirrell, a mathematics professor, was less sanguine about AI. He paraphrased AI researcher and philosopher Eliezer Yudkowsky: “AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Rather than AI producing a sentient and vengeful android or supercomputer hellbent on destroying humanity, Tirrell argued AI has the real potential to destroy humankind for more prosaic reasons. He said AI is very good at carrying out tasks and optimizing solutions. The danger is that without human control and an “off switch” to ensure control, AI could optimize an action indefinitely, to the detriment of the planet and its inhabitants. He gave the example of Mickey Mouse in Fantasia. Mickey acquires the magic hat and commands the brooms to pour buckets of water into the cauldron. The brooms continue carrying out the task even after the cauldron is full, and the place is flooded.

AI is similar: it will carry out a task without tiring. Of course, with a better command, the flood would be averted. The problem is, once you hand a task to AI, can you be sure you have anticipated everything that might go wrong and instructed it when to stop? The more complex the task or environment, the more chances there are for an error. And in the case of generative AI, the system can devise workarounds that may not be moral or what we intend. Tirrell concluded that with so many things that could go wrong, the more we depend on AI for tasks, the more likely we are to get unintended and even catastrophic results.

The Impact of AI on Education and Careers 

The third panel discussed AI’s impact on students, teaching, and careers. Panelists noted the difficulty of writing blanket or comprehensive rules for using AI in colleges. Autocorrect features in word-processing apps, tools few considered a threat to academic integrity, are evolving to suggest better phrasing. Does that mean it is no longer the student’s writing? And how long will it be before generic word-processing apps give advice on paragraph order, essay structure, or even concepts? There are similarly significant questions about digital research, with examples of AI-generated material citing fake stories or papers as factual.

The panel pointed out that generative AI learns from the data it is fed. If an AI app were only given Shakespearean texts to learn about the world, it would not know that you can fly on airplanes between destinations, might not know that America exists, and would be wholly ignorant about the peoples and cultures of much of the world. Because large language models need enormous amounts of data to learn how to write, they often learn from the vast amount of data stored on the internet. Given how much dubious scholarship, fake news, and offensive opinion is shared online, it’s not surprising that many of these systems give conspiracy theories or racist beliefs the same weight as better sources. And given the inequality of access to the internet, will these systems automatically be biased in favor of “first-world” morality and beliefs? Kyle Wilson, a professor of ethical data science, has a student working on teaching AI systems how to forget. It’s easy to get a system to forget a phone number and much more challenging to get it to forget an entire novel. Yet it would be important for AI systems to be able to forget incorrect information they have come across.

In an interesting aside, Georgina Bliss from the College’s Center for Career Development spoke of the increased use of AI in hiring. AI has been used for some time to evaluate letters of interest and resumes. Consequently, job seekers in the know have been using AI to write letters of interest and resumes specifically to appeal to AI readers. Not surprisingly, AI-generated letters do better with AI readers. Even more disturbing is the use of AI-proctored interviews. The interview takes place via a streaming app, and an AI system evaluates an applicant’s responses using not just the answers but how the person expresses themselves: their body language, eye movements, tone, and more. The question is whether this is helping pick the best candidates, ruling out suitable candidates for poor reasons, or creating a world where conformity of mannerisms and delivery is more important than ability and creativity.

The keynote speaker, Ryder Daniels '90, an entrepreneur in data analytics, said that AI was already playing a significant part in our lives, was not going away, and should not be underestimated. He gave examples of how people have always feared the worst when new technologies arrived, and some of the fears expressed about AI were similar to ungrounded fears once expressed about cars or TVs. He argued that AI was a potent tool that could radically change the world for the good, though it may displace some careers. However, he quickly pointed out that if left unchecked, AI poses a grave threat of disruption and even destruction to society. The speed of advances in technology and the processing speeds of computers make both threats and benefits challenging to assess and compare. Regardless, we can’t afford to bury our heads in the sand and hope it won’t affect us.

He thought that AI needed to be regulated. However, he wasn’t confident that most governments were motivated enough or capable of doing anything legislatively. In his opinion, the best hope was that the European Union might set up a legislative framework for limiting AI, and the rest of the world might adopt these rules. He ended on a high note, saying that AI is only as powerful or dangerous as the prompts it is given. A liberal arts education is the best education for an AI-driven future because the problem-solving, creative thinking, and communication skills it imbues in its graduates are precisely the skills needed to prompt and steer large generative AI models in the right direction. 

Washington College President Mike Sosulski concluded the symposium by thanking the organizers for choosing such a fascinating and timely subject. He agreed with the panelists that the College was preparing its students for a future in which AI machines played an ever more significant role in everyday life.  


Welcome and Opening Remarks
Kiho Kim, Provost and Dean of Washington College

Washington College Students on AI
Moderator: Elizabeth O’Connor, Associate Professor of English
Liv Barry ’24, The Elm
Sophie Foster ’24, Collegian
Natalie Martinaitis ’25, The Washington College Review
Delaney Runge ’24, The Pegasus
Brian Palmer, IDEAWORKS Innovation Center, Director of Digital Media Services
Matthew Hutter ’25
Douglas Hewes ’26

Faculty and Staff Research on AI: “Artificiality, the Human, and the Machine”
Moderator: Karen Manna, Assistant Professor of French
“Art without an Artist: Medieval Parallels with AI Images,” Benjamin Tilghman, Associate Professor of Art History
“Could AI End the World?,” Jordan Tirrell, Assistant Professor of Mathematics
“New Realities: Exploring Virtual and Augmented Reality in Liberal Arts Instruction,” Raven Bishop, Assistant Director of Educational Technology, Library and Academic Technology

AI’s Impact on the College and Beyond
Moderator: Sean Meehan, Co-Director, Cromwell Center for Teaching and Learning
Academic Integrity: Heather Fabritze ’25, Honor Board Chair
Information Literacy: Cori Lynn Arnold, Electronic Resources Librarian, Miller Library
Learning: Kyle Wilson, Allender Associate Professor of Ethical Data Science
Careers: Georgina Bliss, Assistant Director, Washington College Center for Career Development

AI and the Future of Work: Perspectives in Business and Beyond
Keynote Speaker: Ryder Daniels, Washington College Alumnus, Class of 1990 and member of the Board of Visitors and Governors
Moderator: Cori Crane, Associate Professor of German, University of Alabama

Concluding Remarks
Michael Sosulski, President of Washington College, Professor of German