Research

My interdisciplinary research draws from political theory and interpretive methods to address the challenges and opportunities surrounding emerging technologies, particularly in the context of quantified self-tracking and climate change. 

Published Work

Billesbach, G. (Forthcoming March 2025).

The Politics of Predictive Technology in the Intergovernmental Panel on Climate Change.

In J. Gellers & H. Saetra (Eds.), Oxford Intersections: AI in Society, Environments. OUP.

The IPCC is a central node for a diverse group of actors interested in the politics of climate change. At the interface of science and policy, it is founded on three principles: being policy relevant but never policy prescriptive, enlisting geographically diverse participants, and being transparent about its procedures. Nonetheless, humanist critics of technology and technology enthusiasts alike critique the IPCC, noting the political implications of its predictive technology. For some, the IPCC’s use of machine learning algorithms and the outputs they produce rely on an underlying technocratic logic. As such, the IPCC’s supposed neutrality masks a universal framework that is harmful to democratic politics because it flattens regional variation and Indigenous knowledge and forecloses non-quantitative approaches to the world (e.g., poetry and narrative). For others, general circulation models are not technologically advanced enough and are thus blunt instruments in need of replacement by novel AI. On this account, the IPCC is marred by human flaws and does not defer to technology enough. This essay investigates underlying tensions in the IPCC concerning technology and politics. Specifically, it analyzes leadership, reports, and original interviews conducted with climate scientists. These sources illustrate how predictive algorithms and expert rule are prominent in the IPCC but also highlight meaningful attempts to incorporate regional differences and non-quantitative outputs. The climate scientists interviewed frequently acknowledge the tension between technocratic and humanistic approaches. Indeed, these scientists often think in humanistic or poetic ways, even as they display optimism about novel predictive technology. We should avoid rule-by-algorithm shortcuts in matters of governance, while remaining open to creative ways of wielding AI to inform sustainable practices.

Book Reviews and Public Writing

Constellations: An International Journal of Critical and Democratic Theory

Review Essay: Why AI Undermines Democracy and What to Do About It. By Mark Coeckelbergh. Polity, 2024; and Algorithmic Institutionalism. By Ricardo Mendonca, Virgilio Almeida, and Fernando Filgueiras. Oxford University Press, 2024. Constellations, Forthcoming 2024.

Radical Philosophy

Critical Inquiry

Florida Undergraduate Law Review

Works in Progress

Invocations of Freedom in the Context of Predictive Algorithms

This essay assesses the political implications of predictive technology. It develops two arguments. First, it shows that the traditions of negative, positive, and republican liberty are already invoked in critical algorithm studies. Second, it argues that algorithms pose challenges for two other conceptions of freedom, namely those developed by Charles Taylor and Hannah Arendt. I critique the behaviorist logic undergirding the deployment of algorithms for prediction, namely the assumption that greater prediction means greater freedom. At stake is the exercise of self-interpretive freedom and political action. Understanding freedom in these terms best illuminates the political risks of predictive algorithms and potential remedies. It also better situates us for the critical investigation of specific cases, such as self-tracking technologies and global climate change institutions.

Self-Interpretive Freedom and the Quantified Self

The Quantified Self (QS) movement, which encourages individuals to track and analyze their lives through digital technologies, challenges Charles Taylor's notion of self-interpretive freedom. Taylor argues that humans are unique as self-interpreting beings, shaping their identities through temporal, dialogical, and embodied exercises. QS technologies—like Fitbit, smartwatches, or mood-tracking apps—may undermine this process by reducing complex experiences to quantifiable data. These algorithms prioritize precision and prediction, often at the expense of narrative-based self-understanding, and as they predict behavior more accurately, individuals may become more predictable and manipulable. We must not relinquish creative, reflective practices, which are important for self-interpretation and political agency. While QS technologies promise self-knowledge, they risk limiting the interpretive freedom needed to construct meaningful identities. Relying heavily on algorithmic predictions may prioritize calculation and isolation over narrative richness and diverse experiences. Yet whether these practices truly narrow our capacity for self-authorship or political action remains open. QS technologies are not inherently detrimental; used reflexively, they can support self-awareness and contribute to personal growth, well-being, and community. The challenge is how to embrace technological advancements while sustaining our interpretive faculties.

AI and the Exercise of Freedom: The Politics of Personal & Planetary Prediction

The primary aim of this book project is to address the personal and political implications of algorithms as they become pervasive and increasingly accurate and efficient in making predictions, a trend driven by their effective pairing with ever larger volumes of data. I engage with the literature on algorithmic bias, algorithmic accountability, and overt political uses of algorithms because these are crucial illustrations of how algorithms affect our personal and political lives. My focus, however, is neither the injustices that occur when algorithms work poorly, in the sense of replicating bias, nor the injustices that occur when algorithms are deployed for political purposes that overtly infringe on rights or liberties. Rather, drawing from Hannah Arendt and Charles Taylor, I employ ideas about interpretive and political freedom to study algorithms as they are openly and successfully deployed for the purposes of prediction. Through interviews and conceptual analysis, I examine two cases in depth: the Quantified Self (QS) movement and the Intergovernmental Panel on Climate Change (IPCC). IPCC members deploy algorithms to create knowledge and make predictions about the planet; QS members use algorithms to create knowledge and make predictions about themselves. Members of both associate algorithms with improved personal and planetary welfare. The question this project addresses is whether that improved welfare undermines or enhances human freedom, and how examining these cases together can offer insights into both personal autonomy and broader issues of climate politics and democracy.