Are you ready for the era of AI-generated classwork?

Since its launch in November 2022, ChatGPT and other large language models like it have upended our ideas about original thought and classwork. University faculty now face an entirely new challenge: the risk of AI-generated schoolwork, especially in essays and papers, is more present than ever.

The debate has raged since AI seeped into the fabric of university coursework: how exactly should colleges handle AI-generated work when assessing student progress? In this blog post, we explore the viewpoints of students and college faculty on the use of AI-generated coursework in the higher education setting.

This is a multi-faceted question to grapple with, one that has to include and value input from students, faculty, and even EdTech companies themselves. Each will, and should, have a critical say in how AI is integrated into coursework and factored into the grading process.

Some colleges are well ahead of this process, while others are still taking a wait-and-see approach. We’re not here to say which is better for you, but knowing the questions to ask, how others are handling it, and what options you have at your disposal is important before embarking on a comprehensive strategy around AI-generated coursework.

How are students using AI?

The rate at which college students are using AI in their coursework is staggering given the technology’s infancy. Such rapid early adoption means colleges need to be more wary than ever about whether submitted assignments are heavily assisted, or even completely written, by AI-powered writing tools like ChatGPT.

A Best Colleges survey completed in early 2023 found that a whopping 47% of college students are using AI software to complete the majority of their assignments. This includes those who use it to complete entire assignments as well as those who rely on it for large portions of their writing or research.

This same survey noted a couple of other interesting tidbits about the early state of AI tools in universities:

  • Only one-third of students surveyed noted that their school had a policy in place prohibiting the use of AI in their coursework.
  • Another 40% of students believe that using AI defeats the purpose of higher education.

It isn’t all worrying, though. Several students noted in these surveys that AI actually augmented their writing process, pulling threads on their ideas in directions they hadn’t yet considered or confirming the directions they were already planning. Students can also use AI to clean up and improve their writing style when they don’t consider themselves gifted writers.

Whether a university would consider these acceptable uses of AI remains a gray area for many right now. On one hand, it is vital for faculty to foster critical thinking skills in their student body and ensure students leave higher education with an improved ability to solve problems. AI can’t become a crutch that replaces a unique and original point of view or approach to a specific problem.

However, AI today is still very much in its infancy. The creators of ChatGPT and other tools are urging Congress to regulate and oversee the technology to keep it from becoming unwieldy.

So how are students supposed to judge the efficacy and pedagogical value of using AI in their coursework? The simple answer is that they can’t do so fully on their own; they need perspective from faculty to help shape how AI fits into the higher learning model.

Like it or not, AI is here to stay. And the faster and more comprehensively colleges define how students can use it while maintaining the integrity of higher education coursework, the less likely instances of outright cheating are to become national media fodder for universities.

The sentiments of university faculty on AI

The current tenor around AI among faculty across the university landscape appears to be mixed. Some see it as akin to the introduction of the calculator: originally feared as the ruin of math in college classrooms, that technology invariably enhanced the student experience and helped prepare students for the real world.

ChatGPT and similar tools can reasonably be viewed in the same light. Right now there are myriad ways to use the software in coursework, and it is worth pondering how the tool will evolve and what mass adoption and use will look like five or even ten years from now.

If the goal of the university is to prepare students as well as possible to function in the real world, then it stands to reason that teaching them to harness AI responsibly is in the best interest of both the student and the university.

This doesn’t mean outright plagiarism using AI should be acceptable. There is a real concern among faculty that ChatGPT and similar software will become smarter and harder to detect in coursework.

But college faculty tend to take either an optimistic or a pessimistic view of AI in the student experience. Either way, it is inevitable that the technology will become more and more mainstream.

The 47% of students using it right now have only just been introduced to this technology. Consider the ramifications for the next generation of college students, who will go through their entire secondary school experience with AI at their fingertips to help them with schoolwork.

These students will enter college with an intrinsic knowledge of how to use large language models. They should expect their university to have a code of conduct that protects the integrity of higher education coursework while allowing AI to be used as a tool, the same way students are expected to use the campus library or a peer-reviewed resource like JSTOR.

Teachers and students alike in secondary school are already using ChatGPT in the day-to-day. That will only increase over time.

The pessimistic view is to force critical thinking by limiting students’ access to AI or designing assignments in ways that make it useless as a tool. That could mean introducing revised or new ways to assess student progress, such as:

  • In-class essays
  • More nuanced prompts
  • Verbal exams

And these don’t even account for the hands-on experience many students in the life sciences must complete in the form of externships and the like.

The optimistic view sees AI as an emerging part of the student experience that is only going to grow over time. 

As such, universities should work with students to adopt policies around AI: establishing the limits within which AI can be used in coursework, drawing clear boundaries, and spelling out how faculty will detect and penalize work that is overly AI-generated.

This is a fine line that will no doubt move over time. But as of right now, the optimistic approach seems more in line with student expectations and better able to elevate the profile of universities looking to attract the best and brightest to their programs.

Balancing AI with free thought

AI usage in classwork is not going to recede. We are truly at the dawn of a new and uncertain era in higher education.

So how should universities think about AI policy as future classes of students matriculate to campus?

Boston University offers a fantastic starting point. Students and faculty within the computer science program worked together to create an AI policy that balances using tools like ChatGPT in the classroom with drawing distinct lines around where that use crosses boundaries. Work completed without AI assistance is weighted more heavily than work that used AI as part of the process.

Students are likewise required to credit large language models when they use them, explaining the purpose and highlighting the specific sections involved. When students are overly reliant on AI in the work they create and turn in, the grading becomes severely punitive but falls short of a plagiarism citation. This is an important distinction, as the bar for what constitutes plagiarism is bound to change thanks to AI.

Whether this strikes the right balance for integrating AI tools like ChatGPT into higher education programs remains to be seen.

But we are only a few semesters into seeing how ChatGPT will affect work in higher education moving forward. Every tool and discussion should be on the table.

It’s critical to include a broad stakeholder discussion to produce a fair and scalable policy that can stand the test of time.
