My research sits at the intersection of philosophy, technology, and ethics, with a particular focus on how artificial intelligence (AI) and autonomous systems challenge traditional understandings of key philosophical concepts and ethical practices.
The introduction of AI and autonomous weapon systems presents a fundamental challenge to military practice. My work examines how these technologies unsettle traditional understandings of the military virtues and of the ethical frameworks that shape armed conflict.
I'm particularly interested in how the introduction of these technologies can best be managed. Militaries should adapt their understanding of which virtues are relevant to which personnel, and of what those virtues involve: I have argued that traditional virtues such as courage remain relevant, but that broader accounts of what they require should be developed. Similarly, as AI and autonomous systems become more widely used, greater attention must be paid to epistemic and technomoral virtues.
Beyond military applications, I investigate broader questions about how we should live with AI and other emerging technologies. This work examines concepts like responsibility, agency, and harm in human-AI interactions, drawing on both analytic philosophy and practical ethics.
A central theme is the challenge that AI poses to our traditional philosophical concepts and ethical categories. I have recently argued that the peculiar status of AI systems (roughly, that they can be causally but not morally responsible for significant harms) creates a risk of hermeneutic harm, a concept I introduced in relation to the reactive attitudes in a 2024 paper and continue to develop in ongoing collaborations with Lode Lauwaert, Ann-Katrien Oimann, Fabio Tollon, and Sonja Spoerl. My current work in progress addresses AI assertion (which I think is possible, given a correct account of assertion) and artificial virtues (which I do not think are possible).
Drawing on my experience in the private sector (I have worked as a researcher for organisations in Italy, Denmark, and Norway, and set up consultancies in the UK and Belgium), I examine how ethical principles translate into practice in research and in the development and deployment of technology. My work has addressed biometric systems, security and surveillance technologies, emergency response and disaster management tools (especially in relation to CBRNE crises), and tools to ensure effective informed consent in clinical trials.
In this domain my research emphasises the importance of embedding sensitivity to ethical, legal, and societal considerations into design processes from the earliest possible stage. I'm especially interested in how the (often unhelpfully abstract) principles set out in ethics guidelines and legal frameworks can be operationalised in real-world technological systems.
I did my doctoral (DPhil) research at the University of Sussex, where I had the very good fortune to be supervised by Murali Ramachandran and Michael Morris, and examined by Peter Sullivan and Sarah Sawyer. My research focused on the origins of Bertrand Russell's Theory of Descriptions, which, although now primarily associated with the philosophy of language, was developed in the context of Russell's work on logicism, specifically his attempt to resolve Russell's Paradox. My dissertation includes a novel interpretation of the notorious 'Gray's Elegy Argument' in Russell's 'On Denoting'.
I maintain a keen interest in early analytic philosophy, as well as in philosophy in general.