
Get Familiar with AI at Ohio State

What Is Generative AI and How Does It Work?  

Generative Artificial Intelligence is fundamentally a very large computer program that produces verbal and visual materials (“media”) that simulate the kinds of verbal and visual communications that people produce. Based on intensely complex analyses of existing media, Generative AI programs compile a probabilistic model of human communication patterns, primarily based on the likelihood that one word or segment of an image will follow another word or segment. When given a prompt, that model is used to predict what humans would produce if provided with the same prompt.
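
To make the idea of a probabilistic model concrete, the toy Python sketch below builds a table of which words follow which from a tiny sample text and then extends a prompt by repeatedly choosing a likely next word. It is only an illustration of the underlying idea: real Generative AI systems operate on sub-word tokens, use neural networks with billions of parameters, and are trained on vastly larger collections of media. The sample text and prompt are invented for this example.

    import random
    from collections import defaultdict, Counter

    # Tiny sample "training" text, invented for illustration.
    corpus = ("the students write essays and the students revise essays "
              "and the instructor reads the essays").split()

    # Count how often each word follows each other word.
    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    def generate(prompt_word, length=8):
        """Extend a one-word prompt by sampling likely next words."""
        output = [prompt_word]
        for _ in range(length):
            candidates = next_words.get(output[-1])
            if not candidates:
                break
            words, counts = zip(*candidates.items())
            output.append(random.choices(words, weights=counts)[0])
        return " ".join(output)

    print(generate("the"))  # e.g. "the students revise essays and the instructor reads the"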

Since Generative AI has recently become more widely available for use by the general public, it is important to understand what Generative AI can and cannot do so that you can make decisions about how it can best be used and when use should be avoided in the teaching and learning context.

What Can Generative AI Do? 

The full range of tasks that AI can perform, and the level at which it can perform them, remains a subject of intense research. New versions of AI and new approaches to prompting continue to reveal new abilities. In brief, it seems likely that for any kind of digital media that humans can produce, AI can generate some approximation. Some of the key functions that have received the most attention and seem most relevant for education include: 

With Text

  • Generating human-readable text: As widely advertised, Generative AI can produce sequences of words very similar to the texts humans might write in response to a similar prompt. 
  • Modifying and revising text: Given an existing text, AI can generate grammatical, stylistic, and other suggestions for revision based on its perceptions of language norms. 
  • Transforming and translating text: Given a text in one language or style, AI can approximate what a speaker of another language or someone skilled in the alternative style might produce. 
  • Speech to text and text to speech: Provided with audio of someone speaking (such as the audio track of a video), Generative AI can produce a transcript. Conversely, given a script, or prompted to generate one, Generative AI can produce audio that sounds like human speech. 
  • Conversations: Unlike search engines, AI can recall parameters from one prompt and apply them to later prompts, creating the structure of a conversation (see the sketch after this list). 
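
The conversational behavior described in the last bullet works because chat tools keep a running history of the exchange and re-send it to the model with each new prompt, so details from earlier turns remain available. The minimal Python sketch below illustrates that bookkeeping; the model itself is stubbed out as a placeholder function, and none of the names reflect any particular product's API.

    def model_reply(messages):
        # Placeholder for the remote model: a real chat tool would send the
        # entire message history to the service and return its generated reply.
        return f"(reply informed by {len(messages)} earlier messages)"

    history = []  # the running conversation

    def ask(user_prompt):
        history.append({"role": "user", "content": user_prompt})
        reply = model_reply(history)  # the model sees every earlier turn
        history.append({"role": "assistant", "content": reply})
        return reply

    ask("Summarize this syllabus in three bullet points: ...")
    ask("Now rewrite that summary for first-year students.")
    # The second prompt's "that summary" still resolves, because the first
    # exchange is included in the history sent along with it.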

With Images

  • Generating human-perceivable images: AI can produce a digital image in response to any description of a visible object. 
  • Modifying images: Given an existing digital image, AI can produce a derivative image that differs from the original as directed, such as removing specific people or components. 

What Can Generative AI Not Do? 

The precise contours of the limits (what the technology is unable to do) and limitations (what developers have restricted the technology from doing) of Generative AI continue to be explored. There are rich conversations among AI users who share and explain challenges they have encountered and attempt to understand what these experiences reveal.

Some of the key limits and limitations that have currently been identified include: 

  • Truth: The text and images that Generative AI produces may or may not align with the world or with other human-generated text or images. AI predicts what people would produce, but it cannot predict whether those communications would be deemed true. This is most evident when AI is presented with mathematical prompts: it often generates responses that follow the form of mathematical thinking but include numerical quantities that do not make sense (for example, a multi-step calculation whose steps look plausible but whose final total is wrong). Similarly, Generative AI sometimes produces citations to articles that do not exist. 
  • Current Knowledge: The text and images produced by AI are limited to derivations of the material it has been trained on. Given the expense of training large models, updates are infrequent. Thus, for example, if given a hyperlink and asked to summarize an article, a model without live access to the web cannot base its summary on the actual text of the article; instead it produces what its model indicates a person would write given the same prompt. 
  • Hallucinations: The models that Generative AI develops are imperfect – the rules it derives do not always correspond to our own observations – and so the text and images it produces may include patterns that are not possible in the non-digital world. This is most evident with images of people or architectural features not possible in physical space. 
  • Consistency: Generative AI may or may not generate the same output given the same prompt. Because these tools sample from a probability distribution rather than retrieving a single stored answer, repeated prompts often produce different responses, and outputs can also shift with model updates, account and device settings, and the surrounding conversation. 
  • Bias: Generative AI is prone to both historical and algorithmic biases. For the former, AI is trained on existing text and images. Whatever biases are present in those datasets (racial, gender, etc.) will be reflected to some degree in the models the AI derives. In addition, there is algorithmic bias, since the developers necessarily must design a process by which AI seeks and responds to patterns in the text and images it is given. 
  • Detectability: There is not currently a systematic way to determine whether a given text or image was produced by AI or by a person. Sometimes features make it obvious that an object was AI-produced (such as hallucinations, or text in which the AI identifies itself); however, attempts to design tools that can make this determination reliably have not yet succeeded. See the article "Coping with AI Advancements and Availability" for more detail about this limit. 

What Generative AI tools are available at Ohio State?

Ohio State has taken a cautious approach to adopting generative AI tools, primarily due to security and privacy concerns. Currently, there are two university-sanctioned options you can explore for investigating and incorporating generative AI in your class: the Microsoft Copilot chatbot and the Adobe Express AI image generator. 

Copilot is a chatbot developed by Microsoft that functions very similarly to the popular ChatGPT. Copilot is built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot can understand and communicate in many languages and dialects and can also recognize and produce images. 

For examples of how you can incorporate the Microsoft Copilot chatbot in your course, see the section below, “Ideas for incorporating student use of generative AI.” 

Tutorial 

ASC ODE has developed a tutorial that introduces how to access and use the Microsoft Copilot chatbot. This tutorial is intended for faculty, staff, and students. Explore the tutorial below and fill out the linked request form to receive a link and embed code to include in your Carmen course site.

 

Student Resource Tutorials Request Form

Another university-approved tool that supports generative AI and is available to faculty, staff, and students is the Adobe Express AI Image Generator. Users describe the images they wish to generate using text-to-image prompts and, within seconds, are given several AI-generated options from which to choose.  

For examples of how you can incorporate the Adobe Express AI Image Generator in your course, see the section below, “Ideas for incorporating student use of generative AI.” 

Tutorial 

ASC ODE has developed a tutorial that introduces how to access and use the Adobe Express AI Image Generator. This tutorial is intended for faculty, staff, and students. Explore the tutorial below and fill out the linked request form to receive a link and embed code to include in your Carmen course site.

 

Student Resource Tutorials Request Form

Approaching Other Generative AI Solutions

As you consider how the availability of Gen AI impacts your discipline and teaching, it is important to bear in mind the university's commitment to maintaining our standards for digital accessibility and protecting faculty, staff, student, and institutional data. If you encounter generative AI tools and solutions you are interested in exploring for research, teaching and learning, or another purpose, start a conversation with support professionals in ASC who can guide you through the process of determining how the tool might be approved for use. ASC ODE helps instructors with tools for teaching and learning, and we can direct you to the right resources for support in other areas. For more information about what the tech tool adoption process entails, see this guide created by ASC ODE.

We encourage you to also consult university guidance and policy about click-through agreements for unsupported tools at Ohio State. This resource, as well as one for cloud computing guidelines, details how to approach the use of applications and services available on the web as a member of the Ohio State community, even in the exploration stage. As you navigate compliance with these policies, if you encounter questions or concerns, we encourage you to reach out to ASC ODE for a brief consultation. We support you in reaching your pedagogical goals and are here to help with that process. 

How and why are instructors and students learning with Generative AI? 

As a follow-up study to the broader Time for Class Spring 2023 report, Tyton Partners administered a pulse survey in the Fall of 2023 that was completed by approximately 2,600 post-secondary faculty and students. The survey reinforces the goal of monitoring digital learning in higher education, focusing specifically on Generative AI (Gen AI) writing tools and how they are being used. The study ultimately found that, while faculty usage of Gen AI increased significantly between March 2023 and September 2023, student usage continues to outpace that of faculty at nearly twice the rate (49% of students compared to 22% of faculty). This gap in usage coincides with existing tensions between faculty and students’ perceptions about Gen AI’s ability to positively impact learning and the level at which Gen AI usage is acceptable.

As multiple articles suggest (see "Students: AI Is Part of Your World" and "Integrating Generative AI"), this is significant because use of Gen AI is likely to continue to grow and to impact the future of work, among many other facets of society. Gen AI already is, and continues to be, incorporated into products that we all use daily, so much so that, as Charles Hodges and Ceren Ocak suggest, “Generative AI integration may become so ubiquitous so quickly that students may not even realize the tools they use incorporate it” (see “Integrating Generative AI”).

In service of preparing our students to be leaders and engaged citizens, many educators at Ohio State believe they have an obligation to support the development of digital literacy skills that help students navigate the changing nature of these technologies as well as their social and ethical implications. Key steps in this pursuit are to understand how and why students are using Gen AI in their learning and to become familiar with the technology itself. From this place of understanding, educators can implement strategies that help to regulate its use effectively.

Student Usage of Generative AI

In order to support the development of students’ Gen AI literacy, start by exploring the most common ways that students are currently using these tools. Among the top ten student use cases of Gen AI according to the Time for Class report are the following:

  • Summarizing and paraphrasing text 
  • Understanding difficult concepts 
  • Assisting with writing assignments 
  • Generating practice materials for studying
  • Translating text into another language 
  • Analyzing and interpreting data 
  • Organizing schedules

In most cases, students are employing Gen AI to improve their own workflows and efficiency so that they can focus their efforts on the deeper aspects of learning. With a diverse student body that faces an increasing number of competing priorities, from academic to financial to family obligations, these tools provide an opportunity for learners to create customized learning experiences that better support their cognitive functions. Much as content and curriculum developers design “useful” learning experiences to support the cognitive dimension of learning, students are employing Gen AI to help them process information: avoiding cognitive overload, reducing extraneous processing by minimizing distractions, managing essential processing to understand new material, and fostering generative processing by constructing models or schemas (see User Experience Design for Learning: Useful).

While this research implies that students are primarily using Gen AI to improve their overall learning and time management skills, there are certainly instances in which students employ these tools in ways that could be detrimental to their education. Several strategies that instructors can use to discourage such uses are described below.

Instructor Usage of Generative AI 

In addition to understanding how and why students are employing Gen AI tools to assist in their education, becoming familiar with the tools themselves can further aid in an instructor’s ability to support these crucial digital literacy skills. One way to gain familiarity with the tools available at Ohio State is to practice with them first-hand in much the same way that students are, using an iterative process and some prompt engineering strategies to achieve the best results.

Below are a few activities you might experiment with to see how Gen AI can help to enhance your own workflows when it comes to curriculum development and assessment design. 

To begin experimenting with drafting course learning outcomes, consider starting with a detailed description of the course.  

  • Using Microsoft Copilot, describe the types of skills and ideas you want students to take away from the course, perhaps any important learning materials or topics you think might be included, and any other pertinent, specific information that will help the tool to better understand the conceptual nature of the course.  
  • Review the initial output and then, using the same conversation thread in Copilot, continue to add details that refine the originally generated outcomes. For example, you might return to the prompt and ask Copilot to take on a specific role, such as that of an educator in higher education, or ask it to incorporate a specific dimension from Bloom's Taxonomy (a sample exchange follows this list). 
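
As a concrete illustration, an initial prompt and a follow-up refinement in the same Copilot thread might look something like the following; the course details here are invented.

    Initial prompt: "Act as an instructional designer in higher education.
    I am building a 14-week introductory soil science course for second-year
    students; key topics include soil formation, classification, nutrient
    cycling, and land-use impacts. Draft five or six measurable learning
    outcomes for the course."

    Follow-up prompt, same thread: "Revise outcomes 3 and 4 so they target
    the Analyze and Evaluate levels of Bloom's Taxonomy, and add one outcome
    focused on collecting and interpreting field data."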

In utilizing Gen AI to draft an assessment, consider employing the TRACI model for prompt engineering, including details about the Task, Role, Audience, Create (referring to the format/medium), and Intent as outlined below.  

  • Using Microsoft Copilot, start by describing the task and clarifying the type of assessment you would like the tool to generate (e.g. a low-stakes practice assignment containing multiple-choice style questions, a scaffolded project consisting of multiple steps to be completed over time, a summative exam, etc.). 
  • Then, explain the role of the person conducting the assessment (e.g. an instructor of an intermediate biology course at a research university, an instructor of a post-secondary introductory geology course, etc.). 
  • Next, identify the audience that will be completing the assessment. What level of knowledge do the students have? At what point are they in the term? Are they all majors or minors in the field or do they come from diverse areas of study and experiences? Etc. 
  • Continue by adding Create instructions that specify the format or medium, such as how long the assessment should be or the amount of time it should take, as well as any other important formatting details you might wish to include. 
  • Finally, explain the intent of the assessment by including the learning goals and outcomes it will measure and, as with any generative AI prompt, continue to add details or refine your prompts to move closer to the desired result (a sample TRACI prompt follows this list). 
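
Put together, a single TRACI-structured prompt might read something like the following; the course and assessment details are invented for illustration.

    Task: Draft a low-stakes practice quiz of ten multiple-choice questions,
      each with a brief answer explanation.
    Role: You are the instructor of a post-secondary introductory geology
      course at a large research university.
    Audience: Second-year students from a mix of majors, in week six of the
      term, who have just finished a unit on plate tectonics.
    Create: Format the quiz as a numbered list; it should take students about
      twenty minutes to complete.
    Intent: The quiz should let students check their progress toward the
      outcome "explain how plate boundaries produce earthquakes and
      volcanoes" before the midterm exam.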

Following a framework that prioritizes the UXDL principle of creating desirable user experiences, instructors can lean on Gen AI to develop illustrative and relevant visuals that complement learning materials but that otherwise might be difficult to find.  

  • Using the Adobe Express AI Text to Image Generator, start with a concept, character, or location (real or fictitious) central to the topic or theme of the learning materials that you are developing. Provide as much detail as possible and experiment with different content types (photo, graphic, or art) and styles (such as kitschy, pixel art, fisheye, etc.); a sample prompt appears after this list. 
  • Choose one of the resulting outputs or select “load more” to view additional options and then experiment with changing styles and generating new options, or with altering your prompt slightly to include even more detail about the setting or characters that may need to be included. 
  • Once you have identified an image you would like to use, download the JPG image(s) to your device and upload them to your presentation slides or to the pages of your Carmen course.
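
For example, a first attempt for a literature course might describe "a narrow cobblestone street in Victorian London at dusk, gas lamps glowing through fog, viewed from a second-story window" (an invented prompt for illustration), generated first as a photo and then in a pixel art style to see which treatment best complements the course materials.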

How can you regulate AI usage? 

As you consider how Gen AI use affects your classes, you will likely identify several moments in which potential student usage of Gen AI crosses the boundary of useful efficiency and interesting exploration and instead disrupts the learning process. Thus, there may be instances in which you need to dissuade learners from relying on Gen AI to complete a task (e.g. a writing assignment that demonstrates close reading and interpretive skills). As AI tools evolve, you may likewise encounter new opportunities in which permitting or encouraging student use of Gen AI could enable important conversations surrounding the technology and its societal impacts or serve as a tool that supports student achievement of course objectives.

Below are a few strategies and ideas that can be implemented to address both the disadvantages and advantages of Gen AI usage in your classes. 

Immediate strategies to discourage student use of generative AI 

Dr. Mary-Ann Winkelmes’s research has demonstrated that providing greater assignment transparency results in gains in established predictors of student success, key among them academic confidence, sense of belonging, and awareness of improved capabilities in employer-valued skills. Using the Transparency in Learning and Teaching (TILT) framework, transparent assignments communicate clearly to students about the purpose of an assignment, the task, and the criteria you will use to evaluate their work before they begin it. 

  • Define the purpose of the assessment to highlight how the skills students develop as a result will benefit them beyond the classroom.

  • Be clear and explicit about the task you are asking students to perform in order to support their self-efficacy and avoid demotivation. 

  • Define the characteristics by which their work will be assessed. Consider utilizing a rubric that provides clear examples of how excellent work differs from adequate work. 

Along the lines of the transparency described above, providing a clearly defined and stated AI policy for the course, and restating that policy for individual assignments, can help students navigate the complexities of AI technology and avoid unnecessary anxieties about what may or may not be acceptable (for more on this topic, see ASC ODE's article on Crafting Policy for Student Use of Artificial Intelligence and the TLRC's resource AI Considerations for Teaching and Learning).

When asking students to discuss or reflect on a specific topic or idea, ask them to incorporate and make connections to a specific object, moment, or experience that they have already encountered in the course or from other elements of campus life. Gen AI, at least at present, will struggle to produce meaningful outputs when confronted with content and descriptions that do not appear in open-access sources. 

If you are concerned about students utilizing Gen AI to respond to an assignment prompt, test that prompt for yourself in Microsoft Copilot to see how well or poorly the tool responds. If the output from Copilot is acceptable, you may need to adjust your prompt to include more specificity, as described in the strategy above. 

You might also consider adding a reflective step to the assignment that asks students to explain their learning process and how they moved from point A to point B. If they are tasked with completing a mathematical equation, for example, ask them to describe their thought process and the steps they took to solve the problem. For larger assignments and projects, consider scaffolding components of the assignment so that they build upon one another, requiring students to make connections between each step as they go along.  

Ideas for incorporating student use of generative AI 

Ask students to take sections of a difficult or technically dense text being read in class and use Copilot to summarize or reword them to aid understanding. Have students reflect on what they learned from this process and what the experience illuminated. 

Ask students to use Copilot to help generate code snippets to use in a design and development assignment, then test the code and ask Copilot to help navigate any problems they encounter. Build this as an iterative assignment in which students can return to Copilot for multiple steps of the process. 
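
For instance, a student might ask Copilot for a small utility function, run a quick check on the result, and bring any failures back to Copilot in the same thread. The sketch below, written in Python around an invented task, shows the kind of snippet-plus-test loop involved; it is illustrative rather than an actual Copilot output.

    # Hypothetical request to Copilot: "Write a Python function that returns
    # the average word length in a sentence."
    def average_word_length(sentence):
        words = sentence.split()
        if not words:
            return 0.0
        return sum(len(word) for word in words) / len(words)

    # Quick check the student runs before trusting the code; if it fails,
    # the error message goes back to Copilot as the next prompt.
    assert round(average_word_length("Generative AI in class"), 2) == 4.75
    print("test passed")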

Ask students to utilize Adobe Express to generate an image of a literary character or setting and then reflect on how well or poorly the Gen AI output represents that person or place based on what they have learned from the lectures and other course activities. 

Ask students to screen record a written and/or spoken conversation with Microsoft Copilot in which Copilot takes on a unique personality (e.g. a waiter at a restaurant, a traveler on a train, etc.) and highlights any grammatical errors made by the student. Then, ask the student to reflect on their language experience conversing with a Gen AI chatbot, or on specific questions surrounding intercultural competency that generative AI might raise. Alternatively, you might ask the student to list the errors that Copilot identified and then to correct and explain the nature of those errors. 

Ask students to generate an image in Adobe Express in the style of an artist being studied. Then, have them compare the AI output with the artist's own work, considering how well AI captures nuance, texture, perspective, or other theoretical constructs. 

Additional Resources

Ohio State Resources: 

External Resources: