Research Statement
My work explores how computational methods can enhance our understanding of cultural phenomena, with a particular focus on developing novel methodological approaches for analyzing media, texts, and visual culture at scale. I approach these methodologies critically, questioning their embedded biases and limitations, especially regarding race, gender, and class. My interests fall into four areas that reflect contemporary challenges in media studies, the digital humanities, and data science.
Computer Vision and Image Clustering
My research investigates how computer vision and machine learning techniques can revolutionize our understanding of visual culture and media history. By developing innovative methodologies that combine computational analysis with traditional humanities approaches, I explore how advanced algorithms can reveal previously hidden patterns in visual media and challenge established interpretative frameworks. Currently, I am leading three interdisciplinary projects that demonstrate the transformative potential of this approach.
The first project combines advanced machine learning techniques with cultural analysis to examine the visual rhetoric of Jack Chick’s influential yet controversial comic tracts. In collaboration with a colleague in religious studies, I employ convolutional neural networks to identify recurring visual patterns and motifs across Chick’s extensive body of work. Our methodology integrates UMAP (Uniform Manifold Approximation and Projection) for dimensionality reduction and HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) for pattern clustering, revealing how these publications systematically deployed specific visual strategies to spread religious extremism and discriminatory ideologies. This computational approach, combined with close reading and historical context, demonstrates how seemingly simple comic art served as a powerful vehicle for propagating misinformation and hate speech throughout the 20th century.
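To make the clustering stage concrete, the sketch below shows how it can be assembled in R with the uwot and dbscan packages. It assumes CNN embeddings have already been extracted for each panel; the file name, embedding dimensionality, and parameter values are illustrative stand-ins rather than the project’s exact configuration.

    # Illustrative clustering stage: CNN embeddings (assumed precomputed)
    # are reduced with UMAP, then clustered with HDBSCAN.
    library(uwot)    # UMAP for R
    library(dbscan)  # includes hdbscan()

    # Hypothetical input: one row per image, one column per embedding dimension
    embeddings <- as.matrix(read.csv("chick_tract_embeddings.csv"))

    # Project into a low-dimensional space that preserves local neighborhoods
    coords <- umap(embeddings, n_neighbors = 15, min_dist = 0.1, n_components = 2)

    # Density-based clustering; points in sparse regions are labeled 0 (noise)
    clusters <- hdbscan(coords, minPts = 25)
    table(clusters$cluster)  # cluster sizes, to be paired with close reading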
The second project examines the evolution of video game box art from the 1970s to the present, offering insights into changing cultural values, marketing strategies, and consumer expectations in gaming culture. Using Microsoft’s Florence-2, a state-of-the-art vision-language foundation model, I analyze an extensive dataset comprising over one hundred gigabytes of video game packaging imagery. The model generates detailed captions for each image, capturing nuanced visual elements from character positioning and color schemes to typography and artistic style. By applying topic modeling to these captions and clustering the images and captions jointly, I trace distinct patterns in visual presentation across decades, genres, and cultural contexts. The analysis reveals notable shifts in marketing approaches, gender representation, technological fetishism, and artistic influence.
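The caption-to-topic step of this pipeline can be sketched in R as follows. It assumes the Florence-2 captions have already been generated and saved alongside basic metadata; the file name, column names, and number of topics are hypothetical choices for illustration.

    library(dplyr)
    library(tidytext)
    library(topicmodels)

    # Hypothetical input: one generated caption per box-art image
    captions <- read.csv("boxart_captions.csv")  # columns: id, decade, caption

    # Tokenize captions, drop stop words, and build a document-term matrix
    dtm <- captions |>
      unnest_tokens(word, caption) |>
      anti_join(stop_words, by = "word") |>
      count(id, word) |>
      cast_dtm(document = id, term = word, value = n)

    # Fit a topic model; k is a tuning decision, not a given
    lda_fit <- LDA(dtm, k = 20, control = list(seed = 42))
    terms(lda_fit, 10)  # top terms per topic, joined back to decade/genre later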
The third project centers on how computer vision intersects with my notion of “filmic flow,” which posits that the objects and materials in a film frame can be understood as a network of interactions between objects and actors. Early work on this idea, examining how the films of Alfred Hitchcock circulate as memetic images, is published in the Journal of Open Humanities Data. I am writing a follow-up examining how backbone projection, a technique that uses Monte Carlo simulation to identify the statistically significant edges of a bipartite network’s projection, can provide a more nuanced understanding of this “flow.” This research develops a largely unexplored methodology for studying visual culture, one with the potential to challenge our understanding of cinematic narrative. Beyond its academic contributions, I am exploring its practical implications for fields such as media studies, marketing, and filmmaker training.
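A minimal sketch of the backbone step, using Zachary Neal’s backbone package for R, which implements the fixed degree sequence model (FDSM) via Monte Carlo resampling. The incidence matrix, significance level, and trial count below are illustrative assumptions, not the study’s actual settings.

    library(backbone)

    # Hypothetical input: binary incidence matrix with rows = detected
    # objects/actors and columns = film frames (1 = appears in that frame)
    B <- as.matrix(read.csv("frame_object_matrix.csv", row.names = 1))

    # Project the bipartite network onto the object side (rows), keeping only
    # edges whose co-occurrence counts are significant against the FDSM null
    bb <- fdsm(B, alpha = 0.05, trials = 1000)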
Machine Learning and Statistical Methodologies in the Humanities
A chief focus of my research is how machine learning techniques can be applied to humanities research to enhance our understanding of cultural phenomena. My book manuscript, Cultural Analytics in R: A Tidy Approach, offers a methodological framework for managing large-scale cultural data in the humanities. Specifically, it examines how scholars in the digital humanities and media studies can employ computational methodologies from data science, such as network analysis, multivariate regression, natural language processing, and neural networks, using the R programming language to investigate cultural phenomena where close qualitative reading might be impractical or misleading. A central theme of this work is the need for humanities scholars to consider the structure of complex datasets. The book is under contract with Springer in its series Quantitative Methods in the Humanities and Social Sciences; the manuscript has been peer-reviewed, is in the final stages of revision, and has an expected publication date of Fall 2025.
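A toy example in the book’s spirit: fit a multivariate regression on cultural metadata and return the coefficients as a tidy data frame, so model output flows back into the same dplyr pipelines used for data preparation. The dataset and variables here are hypothetical.

    library(dplyr)
    library(broom)

    # Hypothetical metadata: one row per film
    films <- read.csv("film_metadata.csv")  # columns: rating, year, genre, runtime

    fit <- lm(rating ~ year + genre + runtime, data = films)

    # Coefficients as rows in a data frame, ready for further dplyr work
    tidy(fit, conf.int = TRUE) |>
      filter(term != "(Intercept)")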
In addition, I am interested in the role of statistics in the humanities and in how best to teach these methodologies to students who may lack a background in quantitative analysis. For instance, in my article “Minimal Research Compendiums: An Approach to Advance Statistical Validity and Reproducibility in Digital Humanities Research,” published in the International Journal of Digital Humanities, I document the scarcity of statistical training in the humanities and the difficulties it creates. I reflect on how many humanities students hold deep misconceptions about what statistical analysis seeks to answer and how it can support their research. Consequently, I propose an approach to teaching statistics that emphasizes reproducibility and transparency in research and centers on the General Linear Model.
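A small classroom illustration of that emphasis: the two commands below, which students often treat as unrelated procedures, estimate exactly the same group difference, making the General Linear Model’s unifying role visible (the simulated scores are illustrative).

    # A two-group comparison framed two ways: the equal-variance t-test and
    # the regression coefficient for 'group' give identical t-statistics
    set.seed(1)
    group <- rep(c("A", "B"), each = 30)
    score <- c(rnorm(30, 100, 15), rnorm(30, 110, 15))

    t.test(score ~ group, var.equal = TRUE)  # classic two-sample t-test
    summary(lm(score ~ group))               # the same test as a linear model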
GIS and Large Language Models
A central thrust of my research agenda involves leveraging advanced computational methods—particularly geospatial analysis and large language models—to expose systemic inequities and challenge entrenched historical narratives. By combining critical theory with digital tools, I demonstrate how computational approaches can serve as powerful instruments for social justice research while remaining mindful of their limitations and potential biases. This methodological framework allows for both quantitative rigor and qualitative depth in examining complex social issues.
My award-winning project “Are We There Yet?” exemplifies this approach. Recognized with the First Runner-up Digital Humanities 2021 Award for Best Exploration of DH Failure/Limitations, this research combines computational linguistics, geocoding, and data journalism to create an innovative GIS time-lapse visualization. The project reveals how academic conferences function as sites of power concentration and exclusion, mapping both geographical and social barriers to participation. Building on this foundation, I am now expanding the analysis through the lens of critical feminist theory and machine learning to examine gender-based disparities in academic participation and recognition. This extended analysis, which has completed peer review and copy-editing, will appear in a forthcoming edited collection prior to the position’s start date.
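The geocoding step behind the time-lapse can be sketched as follows, using the tidygeocoder package’s OpenStreetMap backend. The venue file and address construction are hypothetical.

    library(dplyr)
    library(tidygeocoder)

    # Hypothetical input: conference venues scraped from historical programs
    venues <- read.csv("conference_venues.csv")  # columns: year, institution, city, country

    located <- venues |>
      mutate(address = paste(institution, city, country, sep = ", ")) |>
      geocode(address = address, method = "osm")  # adds lat/long columns

    # 'located' can now be mapped year by year to build the time-lapse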
In a related project examining racial justice, I am developing innovative methodologies to analyze patterns of police violence against African Americans in the United States. This research introduces a sophisticated application of hierarchical hexagonal spatial indexing (H3), advancing beyond traditional mapping approaches to enable more nuanced spatial analysis. The H3 system provides standardized data binning across multiple scales, facilitating precise comparative analysis and revealing previously obscured patterns of systemic racism. Supported by internal university funding, this project has garnered significant interest, prompting plans for expanded research through NEH funding.
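The binning step can be sketched with the h3jsr package, an R wrapper around Uber’s H3 library, where point_to_cell() assigns each point to a hexagonal cell index at a chosen resolution. The incident file and resolution below are illustrative assumptions.

    library(sf)
    library(h3jsr)
    library(dplyr)

    # Hypothetical input: one row per incident with coordinates
    incidents <- read.csv("incidents.csv")  # columns: lon, lat, year

    pts <- st_as_sf(incidents, coords = c("lon", "lat"), crs = 4326)

    # Assign each incident to an H3 cell; 'res' sets cell size, and the same
    # points can be re-binned at coarser or finer resolutions for comparison
    incidents$h3 <- point_to_cell(pts, res = 6)

    # Counts per cell become the standardized unit of spatial comparison
    count(incidents, h3, sort = TRUE)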
My most recent project introduces the concept of “deep mapping,” a methodology that harnesses large language models to uncover implicit geographical information within historical narratives. Applied to the “WPA Slave Narratives” collection, this technique reveals previously hidden patterns of movement and displacement among formerly enslaved individuals. Beyond its methodological contributions, the project showcases the sites of bondage and freedom for these individuals and demonstrates that most enslaved individuals left their areas of bondage after emancipation, dismantling the “Lost Cause” myth that a supposed lack of movement among formerly enslaved people reflected benign pre-Civil War conditions.
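The extraction step can be sketched as below. Here extract_places() is a hypothetical wrapper around an LLM chat endpoint; the endpoint, model name, and prompt shown are illustrative rather than the project’s actual configuration, and the geocoding again uses tidygeocoder.

    library(dplyr)
    library(tidygeocoder)

    # Hypothetical helper: ask a hosted LLM to list places named or implied
    # in a narrative; endpoint, model, and prompt are illustrative only
    extract_places <- function(narrative_text) {
      resp <- httr2::request("https://api.openai.com/v1/chat/completions") |>
        httr2::req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
        httr2::req_body_json(list(
          model = "gpt-4o-mini",
          messages = list(list(
            role = "user",
            content = paste("List every place named or implied in this text,",
                            "one per line:", narrative_text)
          ))
        )) |>
        httr2::req_perform()
      answer <- httr2::resp_body_json(resp)$choices[[1]]$message$content
      strsplit(answer, "\n")[[1]]
    }

    narratives <- read.csv("wpa_narratives.csv")  # hypothetical: id, text

    places <- narratives |>
      rowwise() |>
      mutate(place = list(extract_places(text))) |>
      ungroup() |>
      tidyr::unnest(place) |>
      geocode(address = place, method = "osm")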
History and Culture of Digital Media
My second monograph, The Computer Goes Home: A Failed Revolution, examines the computer’s domestication in America during the 1970s and 1980s. This project demonstrates my ability to combine traditional humanities methodologies with computational approaches, as I employ archival research and data mining techniques to analyze extensive collections of historical documents. In this work, based on my dissertation research, I examine the rhetoric surrounding the computer’s domestication in American society. In contrast to journalistic and hagiographic accounts focused on a select group of Silicon Valley “visionaries,” I rely on newspaper coverage, hobbyist magazines, popular media, and advertisements to read “against the grain” and recover overlooked voices. Using natural language processing and topic modeling, I analyze patterns across thousands of primary sources to reveal previously hidden narratives about computing’s social impact. By doing so, I underscore how the device created a media infrastructure reflecting a reactionary response to the political radicalism of the 1960s.
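One pattern-tracing step from this workflow, sketched with hypothetical inputs: tracking how the relative frequency of selected terms shifts across the dated corpus. The file, columns, and term list below are illustrative, not the book’s actual sources.

    library(dplyr)
    library(tidytext)
    library(ggplot2)

    # Hypothetical corpus: one row per article with its publication year
    corpus <- read.csv("home_computer_articles.csv")  # columns: id, year, text

    trends <- corpus |>
      unnest_tokens(word, text) |>
      count(year, word) |>
      group_by(year) |>
      mutate(share = n / sum(n)) |>  # each word's share of that year's tokens
      ungroup() |>
      filter(word %in% c("hobbyist", "revolution", "family", "office"))

    ggplot(trends, aes(year, share, color = word)) +
      geom_line()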