GenAI: Grad Students Engaging with Tomorrow’s Tech

Graduate students are embracing artificial intelligence in their research, projects, and areas of inquiry.

Oxford, MS – With the AI revolution upon us, graduate students are already deeply involved in its development and use. Several students from different disciplines are conducting research into artificial intelligence, evaluating the ethics surrounding it, or drawing inspiration from it for their creative work.

Artificial intelligence is reshaping nearly every aspect of daily life, from the way we communicate to the way we work, learn, and create. Yet much remains unknown and unexplored as the technology continues to evolve. Graduate students researching artificial intelligence are stepping into this new normal, navigating an ever-changing technological frontier. Their work examines what AI can do, where its limits lie, and how it can drive innovation in our world. We asked a few of our graduate students to discuss their research in this field and to share their thoughts on artificial intelligence in general.

Developing Gatekeeper Governance

Our first featured student researcher is Saeed Fanoodi. Originally from Iran, Fanoodi is pursuing a Ph.D. in Management at the University of Mississippi. His interest in AI stems from a lifelong passion for technology. “I’ve always been a bit of a tech nerd! Before starting my Ph.D. in Management at the university, I co-founded a tech startup, which really shaped my curiosity about emerging technologies. Back then, I knew AI existed but didn’t fully grasp its potential. When I began my Ph.D. in Spring 2023, it just so happened to coincide with the public release of OpenAI’s ChatGPT. So the timing was perfect. As I started learning more about large language models, I became increasingly fascinated. The more I explored, the more I realized just how powerful and groundbreaking these technologies are, specifically for business and society as a whole,” he explained.

His research approaches AI from two angles: “how it works, and how it shapes the world around us.” Fanoodi stated, “On the technical side, I study how new tools like large language models can help researchers analyze massive amounts of text. This opens up fresh ways to understand human behavior, organizations, and society. At the same time, I’m interested in the bigger picture of how AI is governed. For example, I’ve developed a ‘gatekeeper governance model’ to explain how certain powerful players in the AI ecosystem can influence not just innovation, but also accountability and ethics. My dissertation builds on this by exploring competition among AI firms, asking how the race to develop smarter systems affects strategy, markets, and ultimately, the future of technology.”
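The first angle Fanoodi describes, using large language models to analyze massive amounts of text, often comes down to a simple loop: send each document to a chat model with a fixed prompt and collect a structured label. The sketch below illustrates that pattern in Python, assuming access to OpenAI’s chat-completions client; the prompt, model name, categories, and example statements are illustrative placeholders, not details of Fanoodi’s actual studies.

```python
# Hypothetical sketch: labeling a batch of texts with a chat-completion model.
# The prompt, model name, and categories are placeholders, not Fanoodi's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Classify the tone of the following company statement as "
    "'optimistic', 'neutral', or 'cautious'. Reply with one word.\n\n{text}"
)

def label_text(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content.strip().lower()

statements = [
    "We expect strong growth across all segments next year.",
    "Results were in line with expectations despite market headwinds.",
]
labels = [label_text(s) for s in statements]
print(list(zip(statements, labels)))
```

Run over a large corpus, a pass like this turns unstructured text into labels a researcher can then analyze statistically, which is the kind of workflow Fanoodi alludes to.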

Creating Realistic Images with Generative Adversarial Networks

Another student contributing to AI research is MaTais ‘Taz’ Caldwell, a Ph.D. student in Computer and Information Science who hails from Alexander City, Alabama. His research examines deep learning and artificial intelligence. He was first drawn to AI during his undergraduate career. Caldwell said, “I joined the AI Club, where I had the opportunity to study various AI techniques and applications. Participating in club activities, such as coding challenges and hackathons, sparked my curiosity about the potential of AI to solve real-world problems.” He also took courses on machine learning and data science that facilitated his interest in research on AI, especially in the area of deep learning.

His current research into AI “explores new ways to make artificial intelligence models better at creating realistic data, such as images.” He explained, “A popular approach for this is called Generative Adversarial Networks (GANs), which work like a game between two players: one creates fake images, and the other tries to tell if they’re real or fake. While powerful, this process can be unstable and hard to train. My work introduces a new method we call EmbeddGAN, which changes how this ‘game’ is played. Instead of using a traditional judge that simply labels images as fake or real, I use an embedding network that looks at patterns in the data and measures how similar real and generated images are in a shared space. This approach uses a mathematical tool called Gini Distance Correlation to guide the training process.” Why does this matter, you may ask? Caldwell elaborated, “This matters because by focusing on relationships between data rather than a simple yes/no judgment, EmbeddGAN offers a fresh perspective on how AI can learn to generate realistic content. In our experiments, this method trained faster than traditional models and performed well on simpler datasets like handwritten digits. While it still faces challenges with more complex images, the results show promise for improving efficiency and stability in AI training. In the future, this approach could be extended beyond images to areas like text, audio, or even privacy-preserving data generation, opening new possibilities for research and real-world applications.”
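The structural difference Caldwell describes, a classic real-versus-fake “judge” versus an embedding network that compares whole batches in a shared feature space, can be sketched in a few lines of PyTorch. The code below is a simplified illustration of that contrast, not Caldwell’s EmbeddGAN: the distribution-similarity loss here is a generic energy-distance stand-in rather than the Gini Distance Correlation his work actually uses, and the network sizes and data are toy placeholders.

```python
# Toy illustration (not Caldwell's EmbeddGAN): contrasting a classic GAN discriminator
# with an embedding network that compares real vs. generated batches in feature space.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, EMBED_DIM = 16, 784, 32  # e.g., flattened 28x28 digit images

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Classic GAN judge: one real/fake score per image.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

# Embedding-style critic: maps images into a shared feature space instead of
# labeling them, so training can compare *distributions* of embeddings.
embedder = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, EMBED_DIM),
)

def energy_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Generic distribution-similarity loss (energy distance), used here only as a
    stand-in for the Gini Distance Correlation described in Caldwell's research."""
    d_xy = torch.cdist(x, y).mean()
    d_xx = torch.cdist(x, x).mean()
    d_yy = torch.cdist(y, y).mean()
    return 2 * d_xy - d_xx - d_yy

# One illustrative step with random stand-in data instead of a real dataset.
real = torch.rand(64, DATA_DIM) * 2 - 1        # placeholder "real" images
fake = generator(torch.randn(64, LATENT_DIM))  # generated images

# Classic route: binary real/fake decisions.
bce = nn.BCEWithLogitsLoss()
disc_loss = (
    bce(discriminator(real), torch.ones(64, 1))
    + bce(discriminator(fake.detach()), torch.zeros(64, 1))
)

# Embedding route: pull generated embeddings toward the real embedding distribution.
gen_loss = energy_distance(embedder(real).detach(), embedder(fake))
print(f"discriminator loss: {disc_loss.item():.3f}, embedding loss: {gen_loss.item():.3f}")
```

The point of the sketch is the shape of the two losses: the discriminator reduces each image to a yes/no score, while the embedding route measures how far apart the real and generated batches sit as distributions, which is the “relationships between data” framing Caldwell describes.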

Exploring AI in Literature and Film

Gabrielle Bowden, originally from Gulfport, MS, brings a different approach to her research on AI. She focuses on “English Literature, Media Studies, and Digital Humanities” as she pursues a Ph.D. in English. When asked how she became interested in AI, Bowden said, “I’ve always been interested in new technologies, mediums, and cultural productions. AI fascinates me because it’s language-based.” Her research explores what books and films can teach us about AI and its impact on society. “I am primarily interested in depictions of AI in literature and film, particularly instances in texts where artificial intelligence and data storage intersect. I approach these narratives as imaginative laboratories through which we can better understand the technological and ethical challenges that define our present and our future,” Bowden explained.

Proofreading Financial Errors with AI

Erika DeVore, a native of western Kentucky, explores the use of AI to enhance accuracy in the accounting field as she completes her Ph.D. in Accountancy. When asked about how she became interested in AI, she shared, “I’ve always been curious about how computers process information and how that differs from how humans do. Especially with AI, I enjoy staying up-to-date with the latest ways we can utilize computers’ capabilities to find answers to questions in financial analysis that we couldn’t answer before.” Her research involves teaching computers to serve as financial proofreaders. “Companies report their machine-readable financial data using digital ‘tags,’ like a tag for revenue or one for expenses. My work shows that these tags are often applied incorrectly. So, I’m using AI to scan a dataset of 32 million tags and automatically flag any that look questionable or misleading. This is really important because the tagged data is the base input for automated trading models and risk assessments that can impact the market,” she explained.
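At its core, the proofreading task DeVore describes is a flagging pass over a very large table of tagged values. The sketch below shows one generic way such a pass might look in Python with pandas; the column names, tag names, sanity rule, and scale check are illustrative assumptions for a toy table, not the method or data from her dissertation, which uses AI models trained on manually coded examples.

```python
# Hypothetical sketch of flagging questionable financial tags; the columns, tags,
# and rules below are illustrative placeholders, not DeVore's actual method or data.
import pandas as pd

# Toy stand-in for a table of tagged filings: one row per (company, tag, value).
filings = pd.DataFrame({
    "company": ["A", "A", "B", "B", "C", "C"],
    "tag": ["Revenues", "CostOfRevenue"] * 3,
    "value": [1_200_000, 800_000, 950_000, 640_000, 1_050_000, -5_000_000_000],
})

# Simple rule-based check: some tags should essentially never carry negative values.
NON_NEGATIVE_TAGS = {"Revenues", "CostOfRevenue"}
filings["rule_flag"] = filings["tag"].isin(NON_NEGATIVE_TAGS) & (filings["value"] < 0)

# Scale check: flag values wildly out of proportion with other filings of the same tag.
typical = filings.groupby("tag")["value"].transform(lambda v: v.abs().median())
filings["scale_flag"] = filings["value"].abs() > 10 * typical

# Rows flagged by either check would go to a human reviewer (or a trained model).
print(filings[filings["rule_flag"] | filings["scale_flag"]])
```

In practice, simple rules like these would only be a first filter over millions of tags; the harder, subtler errors are where a trained model and the researcher’s own expert coding come in.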

Each of our graduate students noted concerns despite their interest in AI. Fanoodi described himself as “pro-AI,” but he also expressed his reservations about the technology. He noted, “The biggest [concern] is the lack of global consensus on how to regulate AI and deal with the misinformation it can generate. Different countries take very different approaches to regulation, and so far, these perspectives haven’t converged. Current governance mechanisms are often impractical, and that’s one of the main reasons I became interested in studying AI governance in the first place. My worry is that without coordinated global frameworks, we could face consequences that are both severe and difficult to reverse.” Caldwell also noted concerns related to current government regulation. In addition, he raised concerns about AI’s potential to reshape the job market, exacerbate inequities, and endanger privacy. He said, “I am concerned about the ethical implications of AI, such as bias in algorithms that can perpetuate existing inequalities and stereotypes. Ensuring that AI systems are transparent and accountable is crucial. Data privacy is another major concern, as AI systems often rely on large amounts of personal data, raising questions about how this data is collected, stored, and used. Many of these big companies who train these large models do not have the best track record when it comes to data privacy and copyright issues.”

For DeVore, the biggest concern is trusting AI in lieu of experts. She stated, “In my own research, while I use AI to analyze data, I know my study extensively and manually code training data to train the model. As AI becomes more accessible, I worry people may implicitly trust its output without having the foundational knowledge to identify hallucinations. We need to emphasize that you are always first and foremost the expert.” DeVore said she thinks that in the future, “AI will mainly function as a sophisticated search engine, similar to how we use calculators for complex math problems.” Bowden, who also worries about AI’s effects on how we trust information, raised concerns about how AI may change the way we think. She stated, “I worry about the majority of the population losing metacognition and literacy skills. I worry that trust will disintegrate totally, that facts will be even more impossible to distinguish from fiction, and that we will rely on algorithms to parse through a bloated sea of information. Of course, that inherently means that the corporations running the algorithms will be able to determine reality.”

Beyond concerns about accuracy, regulations, and privacy, Bowden thinks AI might affect consumer preferences in the entertainment world. “I wonder if the introduction of AI to the media landscape will ultimately turn a lot of people away from digital media entertainment and towards live events. At the same time, a portion of the population may disconnect with the real world entirely. The question of whether we should incorporate AI into certain realms–national security, economics, education, healthcare, the justice system, etc.–will likely transcend current party affiliations and create unlikely political coalitions,” she said.

Despite their reservations, our graduate student scholars believe that research into and with AI is valuable, and they shared advice for fellow students. Bowden wants other graduate students conducting research on AI to remember, “Our cultural imagination has been contemplating artificial intelligence for well over a century. Start there!” On conducting research into AI, Caldwell suggested, “Conducting research in AI is challenging. It’s important to stay updated with the latest research by reading academic papers. Joining research groups or labs can provide valuable mentorship and collaboration opportunities. Lastly, be prepared for a lot of trial and error. Research often involves experimenting with different approaches and learning from failures. Persistence and curiosity are key to making progress in this field.”

DeVore and Fanoodi encouraged other students to remember that AI is a tool. Fanoodi said, “AI can be incredibly helpful, but it can also be confidently wrong, meaning sometimes the mistakes are so convincing that they slip by unnoticed. That’s dangerous in research, because AI-generated content is becoming almost indistinguishable from human writing. If we’re not careful, we risk building entire streams of literature on shaky or even false information.” DeVore added, “You are always the expert in the room and need to understand your study thoroughly before using any AI to help push the current limits of your field.”

More broadly, Fanoodi noted, “I would encourage students to embrace AI as much as possible, because it’s not going away, and the sooner you become AI-literate, the more beneficial it will be for your studies and career. Working with generative AI is very experiential; it requires patience, trial, and error. But once you learn how to use it responsibly, you’ll begin to see its real power and the opportunities it can unlock.” DeVore, who is optimistic about AI and its future, shared, “I believe [AI] will fundamentally accelerate the scientific process across numerous fields, enabling us to solve problems that were previously too complex or data-heavy. I’m excited to see the various ways it will help in advancing our knowledge and understanding of the world around us!”

For those students interested in learning more about AI, the university offers several resources. One example is the University Libraries’ portfolio of AI tools. Another is the AI Institute that UM offers each term. The institute is an initiative of the Academic Innovation Group (AIG), which is taking the lead on AI at the university. A regular presenter at the institute is Mr. Marc Watkins, the Assistant Director of AIG and Lecturer in Composition and Rhetoric. He teaches writing and critical AI literacy and researches “how students and faculty apply AI in learning and teaching spaces.” When asked what the future of AI looks like to him, he answered, “likely integrated in most of what we do daily—for better or for worse.” He advises that students “be open to new ideas, be aware that this technology is changing rapidly and what we know will likely change just as quickly.”

To close out, fittingly with a little help from AI: “one theme stands out across these students’ experiences: artificial intelligence is only as meaningful as the people who study it.” AI is “already sitting in the lab with us, annotating our datasets, rewriting our assumptions, and occasionally hallucinating its way into our drafts.” However, it’s the research of UM graduate students that is “pushing boundaries, questioning assumptions, and helping society navigate the promises and pitfalls of an increasingly automated world. If AI is the next frontier, then graduate students are the ones charting the map” (OpenAI, 2025).

 

OpenAI. (2025). ChatGPT (Dec 7, version 4.0) [Large language model].

By Emma Taylor

Published February 13, 2026
