Summer of AI 2025: Highlights

The Summer of AI 2025 series began with a Kick-off and Showcase event in May, which faculty and staff attended online or in person to learn how artificial intelligence (AI) is shaping the future of education and beyond.  Throughout June and July, the series offered 38 hybrid events: 11 hands-on workshops by CITL, 16 guest presentations led by campus partners, 3 panel discussions led by students and faculty, and 8 AI consulting sessions facilitated by CITL.  The Summer of AI was indeed a success!

Summer of AI 2025 - Showcase

Since summer is usually a busy time for many, we recorded several of the Summer of AI sessions and are highlighting the most viewed in the hope that you will find something of interest. 

Professor Bob Morrissey, Augmented Historian:  Applied AI in Historical Research 

Professor Bob Morrissey acknowledged the concerns of history faculty regarding generative AI, from hallucinations to less-than-scholarly outputs.  Professor Morrissey, however, was eager to share how he uses AI as a “research assistant” to data mine historical sources: 

The key insight that I want to share is that AI is a shortcut to datafying historical sources--quickly turning unstructured texts into structured data ready for powerful analysis.  For many types of quantitative and digital analysis [AI] can speed the workflow considerably.  For learning, for gleaning, and organizing and digesting information and texts, I believe it is a super useful tool. 

Professor Morrissey shared how he prompted ChatGPT to convert texts documenting George Rogers Clark’s journey through Illinois during the American Revolutionary War into a timeline with geographical locations, accelerating his research methodology as a social science historian: 

My own practice and workflows begin with a key intuition:  GPT becomes truly transformative for research when you use the training data . . . as a tool for analyzing and transforming primary and secondary sources.  This is where those metaphors about AI as research assistant make a lot of intuitive sense. To me, this research assistant reads really fast. This research assistant does not get bored. This research assistant knows a tremendous amount of coding.  

Professor Morrissey’s use case is applicable to any discipline where researchers would like to enhance and accelerate their analysis of large data sets.  We encourage you to review Professor Morrissey’s presentation.

Chuck Geigner, Chasing the Fantastical Promise of AI 

In a thoughtful presentation about data security in the age of AI, Chuck Geigner, Director of Information Security, made it clear that “Nobody gets our foil Charizard card for cheap!”  In this playful analogy, AI is what all the cool kids are doing now, and the Pokémon Charizard card is the valuable thing (our personal and institutional data) a kid has to surrender to join them.  This matters because AI is built on data, and lots of it. 

In a humorous and self-deprecating manner, Chuck noted his hope that we think critically about AI and our data stewardship without stopping or slowing our institutional commitments to research and innovation.  The important question to keep in mind:  “What is AI going to do for us?”  And what important information are we willing to trade (our foil Charizard cards): student data, financial records, research data, HR records, private/sensitive/high-risk information, or institutional records?  Methods for limiting risk around AI use include developing tools locally, working in sandboxed instances, and crafting careful legal agreements with technology company partners. 

AI-Resistant or AI-Enhanced Assignments?  Practical Strategies for Faculty 

Adam King, Director of Innovation and Transformation, Gies College of Business 

Jamie Nelson, Associate Director of Educational Technologies at Gies College of Business and Associate Director of Educational Innovation at CITL 

Committed to supporting all faculty in their teaching at Illinois, Jamie Nelson and Adam King offer a wide spectrum of approaches for our post-AI teaching environment, from instructors who would like to resist AI use in their courses to those who would productively incorporate it.  Adam and Jamie begin with the suggestion that all faculty run their course assignments and assessments through an AI tool to see how it responds.  For Adam, instructors might be “amazed at what AI can do,” with this assignment testing becoming “the first step to realizing I might need to change my assignments.” 

While neither Adam nor Jamie believes we are in an “AI apocalypse,” both strongly feel we are in a “post-AI environment” where instructors should ask, “How will you adapt your teaching and assessment strategy?”  In this new environment, even something as tried-and-true as Bloom’s Taxonomy should be reevaluated, perhaps with the assistance of the new frameworks Adam and Jamie endorse. 

For Adam and Jamie, there are three primary modalities instructors can adopt as they design and teach their courses in a “post-AI world”:  AI Proof, AI Resistant, and AI Enhanced.  AI Proof assignments range from proctored exams to in-class work and oral exams.  AI Resistant assignments do not outlaw AI but ask students to rely more on their own perspectives, local contexts, and peer discussion and teamwork.  AI Enhanced assignments intentionally encourage students to use AI tools to enhance their learning, develop digital literacy skills, build AI awareness and talent, and sharpen critical thinking. 

In general, Adam and Jamie encourage instructors to build trust with their students and to avoid policing cheating to the point that it creates an adversarial relationship.  At many points, instructors are encouraged to have conversations with students and to be transparent about working through course strategies regarding AI use.  Thinking about the range of stances an instructor or course might take toward AI is a valuable exercise.  Interestingly, any instructor, course, or even a major course project might incorporate a range of assessment strategies, from AI Proof to AI Enhanced. 

Summer of AI 2025 Session
Summer of AI 2025 Attendees

Talking AI:  Hope, Hesitations, and Higher Ed Panel Discussion 

Our Summer of AI panel discussions encouraged rich conversations; so much so that we wanted to go back and spotlight some of the major takeaways and intriguing digressions that developed out of these sessions.  Our title for this panel turned out to be very apt, with a number of AI hesitations and hopes brought to the foreground.  One significant hesitation was the sense that higher education and society are not keeping pace with generative AI, especially in our assessment methods.  For Jake Metz (Assistant Director of Infrastructure, IMMERSE:  Center for Immersive Computing; Siebel School of Computing and Data Science):

There is a downfall in our assessment systems [which are] easily hacked by large language models now, right?  So, I guess a negative impact could be that our assessment metrics and the way we teach people and the way we understand what they've learned may not be able to adapt all that quickly, and there's a danger that there's a current crop of students that's in the in-between, right? They're in this moment where we have not adapted our teaching practices, and I'm worried about what happens as those people go forward. 

For Miecen Sun (Assistant Professor; School of Information Sciences) the answer to students cheating with AI is not to use AI to police students:   

And I think it's just grossly irresponsible for the teacher or educator to use these detection tools to punish students just because it makes their jobs easier.  I think they should instead be having more facetime with the students. They should be having more direct critical thinking, encouragement, and mentoring with students, but of course that takes time. And I think because now we know there are huge false positives and negatives, which also has its own ethical implications on top of this kind of pedagogical laziness at its core.  

This crisis in academic assessments has led to a return to old-school methods, from in-class writing (blue books) to oral exams and performance-based evaluations.  For Clay Shirky, N.Y.U. Vice Provost, this transformation back to older-style assessments is broad and picking up momentum: 

Learning is a change in long-term memory; that’s the biological correlate of what we do in the classroom. Now that most mental effort tied to writing is optional, we need new ways to require the work necessary for learning. That means moving away from take-home assignments and essays and toward in-class blue book essays, oral examinations, required office hours and other assessments that call on students to demonstrate knowledge in real time. The shift is already happening: The Wall Street Journal reported on booming sales of blue books last school year. 

For Michael Curtin (Innovation Coordinator, Office of the CIO, Technology Services), generative AI has made it difficult to know exactly how to talk about AI with students, especially since instructors and students seem to live in slightly different worlds regarding AI: 

The frontline is in the classroom, in my opinion, because you have a tremendous amount of frustration on the part of students who are seeing the world as they understood it, yanked out from underneath them at a very tender time.  . . . They don't have security, they don't have career momentum, so they are starting out in a very gray area.  And so I have to be incredibly delicate about how I even talk about generative AI in the classroom, because some view it as an existential threat, and others view it as a necessity which they're going to need to embrace in order to survive in any job, in any field.   

The primary AI hesitation the panel discussed was the potential for AI to undercut the development of skills in students, from writing to problem-solving to creation and creativity.  The danger of students pressing the “easy button” when confronted with something hard that AI can do quickly was the major threat facing instructors.  Miecen Sun noted that every one of us at times will choose the quicker, easier path: 

That is what makes things really complicated, because where do you draw the line? And we are all humans, and we've all been students, and therefore, we all know too well that the tendency there is, because learning is hard, because thinking is hard, because creative, complex thinking is always the hardest thing, even though it's eventually rewarding, if you do the hard work.  But then at the get-go, the human tendency is always to relegate it to later, to relegate it to someone else, and now we have that someone else 24-7.  

The panel found hope in several aspects of generative AI, from the obvious advantages of AI helping with mundane and repetitive tasks, to its ability to level the playing field for learners and scholars for whom English is a second language.  Professors Miecen Sun and Mike Yao both spoke of the “confidence boosting” that AI could provide to students and faculty who were continuing to develop their English language skills.   

The panel explored a number of intriguing digressions worth noting.  Jake Metz emphasized the value of generative AI’s power to summarize and utilize massive amounts of information: 

We are reaching this point that we're in kind of the intellectual economy [where] there are ever more people participating, and there are ever more information and papers and studies being published, such that we reach a point where it is physically impossible for any human to actually read all of this, right?  And then, at some point, you reach a level where, well, you're never going to be able to read any more than you already can with the amount of time you have. And there's increasingly, exponentially more knowledge and information that you can access and leverage.  AI is a tool that's coming about at a time where it is also necessary to be able to make sense of the information overload that our world has created. 

For Michael Curtin, reflecting on the skills that were naturally acquired through a college education and professional work life just 5 to 10 years ago, and realizing that contemporary students can now bypass the development of those skills, raises significant educational and societal questions: 

Many of the skills we had to use in a normal life before generative AI came along were just ambient. Of course, you know how to write a letter. Of course, you know how to communicate effectively with text, because that's what you have to do to get almost anything done in school or in work.  Those are now becoming special, and this shift that generative AI has introduced into the workforce and education is allowing some people to say, this is something that I never even really understood. It was like air.  Suddenly, it's precious. Suddenly, I'm going to, with intentionality, invest my time to get good at something that I know that generative AI could just do for me. I could use an easy button, I've done it before, maybe I've tried it, and I understand the value that going through the hard work, making all of those granular decisions.  And it didn't used to be like that, because you just had to do stuff yourself. There was no one that was going to show up and save you, or help you, or smooth over the rough parts. Now, because that's possible, some people actually see the value of investing in themselves. 

Building on the potential of generative AI to prod higher education to reinvent itself, Mike Yao (Professor of Digital Media and Business Administration) painted a future where faculty and students would work together more closely as they collectively navigated the historic changes of generative AI: 

I think it's a moment to really rethink what we mean by education, and learning--in instruction and mentoring, and advising.  Many of us don't know what's ahead of us. We're walking that path, too.  And I would think that being able to convey and make the students see the responsibility is not so much we hold onto a set of values that we believe to be true, but really, explore that with the students, with the learners, to kind of help them discover, say, here is a whole new map.  That we need to kind of walk around and explore together, we happen to be a little bit more experienced.   

We encourage you to review this excellent panel discussion and to seek out these campus colleagues for further discussions. 

Mary Ton, Digital Humanities Librarian, AI From a Digital Humanities Perspective 

In her Summer of AI presentation, Mary Ton encouraged instructors to adopt a model of transparency and trust in working with students on the topic of AI.  Mary asked: “How does your stance towards AI build community with your students?”  She reminded attendees that “your classroom environment and your disciplinary norms impact your relationship with your students.”  Further, by focusing student work and assessment on process and on reflection about academic methods and values, the instructor can provide “a little bit more grace for the times where the outcome isn't spectacular, but the process is doing the work that you're expecting and demonstrating the skills that you want to see.” 

Given how fast AI is moving and that there is no one-size-fits-all course policy, Mary suggests instructors create assignments and assessments that “give students a little bit more freedom to fail, to fail productively and to understand and apply the kinds of skills that you want them to take away.” 

Like all good librarians, Mary singled out some excellent resources, including the MLA-CCCC Task Force white paper on Writing and AI, which urges instructors to avoid punitive AI policies and AI detection software in favor of collaborative reflection and conversation with students regarding AI.  The other resources come from Mary and our own campus librarians:  an Introduction to Generative AI libguide and the AI in This Course Canvas Module from the University Library, available to Illinois instructors through the Canvas Commons. 

CITL truly thanks every speaker, host, and panelist for participating in the Summer of AI 2025.  Each session was filled with great discussion, AI insights, and timely resources that faculty can use to enhance their courses.  You can view more recorded sessions on the Summer of AI video playlist.