
Current Research Projects

My primary research interests are in ethics, philosophy of action, and epistemology. Much of my work applies Anscombean and Kantian insights about action to puzzles and problems in ethical theory.

Moral Worth

In my dissertation, I develop an account of moral worth centered on the basic insight that moral worth is the practical corollary of knowledge.


Standard approaches to moral worth—whether they require that agents act for the right-making reasons or for reasons of rightness—secure moral worth by requiring that agents act for the correct end. I argue that all such approaches fail to explain the way that categoricity enters into morally worthy action. Categoricity cannot be secured by the content of an agent's ends. For it does not matter what I desire if I desire it in the way I might desire a mushroom omelet. Morally worthy action is distinctive, not just in what an agent cares about, but in how they care about these things.

 

To know a conclusion, it is not enough to reach it from the correct set of premises. For instance, you might still fail to know if you infer using a fallacious principle of reasoning or infer from true premises supplied by hallucination. Knowledge requires, not just that an agent reason from true premises that entail the conclusion she draws, but that truth play the right explanatory role in the formal structure that underpins her reasoning.

 

In the same way, I argue that to act with moral worth it is not enough to act for the correct reason. I might still act poorly if I act from a vicious disposition or without clear moral perception of my act's categorical necessity. Moral worth requires, not just that an agent have the right end, but also that this rightness play the right explanatory role in the formal structure that underpins an agent's practical reasoning.

 

Further developing the analogy between knowledge and moral worth, and using insights from Williamson's knowledge-first epistemology, I then argue for a "moral worth first" approach to rightness. Morally worthy action cannot be analyzed as right action plus certain independently specifiable features. Rather, just as knowledge is the most general factive state, so morally worthy action is the most general essentially rightive activity.

Constraints

In this work, I develop a distinctively Anscombean approach to deontological constraints. I argue that constraints do not provide considerations against an action, but rather structure which actions we consider in the first place. Moral constraints do not emerge from the moral evil of constraint violation (a murder need not be a worse outcome than an unintended human death), but rather from the formal structures of practical deliberation.


Consider, as an example, the relevance of knowledge to certain constraints. It is, ordinarily, wrong to punish someone unless you know they are guilty. But why? What a juror knows does not bear on whether a criminal deserves punishment. Deliberatively, jurors should care about whether the evidence reasonably demonstrates the guilt of the accused, not about their own mental states in reacting to that evidence. Similarly, it is, ordinarily, wrong to perform surgery unless you know the patient has consented. Yet a patient’s right to give informed consent is violated when surgery is performed without consent, not when it is performed without knowledge of consent. 
 

We’ve seen this oddity before in epistemology. If I say "it’s raining," a perfectly sensible reply is "you don’t know that." And yet that reply does not actually bear on what I asserted. My lack of knowledge is not strong evidence that the sun is shining. 
 

The solution in the moral case is, I argue, the same as in the epistemic case. In theoretical reasoning, knowledge shows up, not in the content of my belief, but in the form of what a belief is. My lack of knowledge bears on whether I reason well because of the formal knowledge norm governing belief. Similarly, in practical reasoning, I don't have reason to punish only those I know to be guilty. Rather, I have reason to punish only the guilty, where my practical reason is formally governed by the knowledge norm of action. 

 

I argue that something similar is true of other features of deontological constraints. For example, the intention/foresight distinction is explained, not by there being something particularly bad about intended killings, but by the formal differences in how intentions enter into practical deliberation.

Applied Ethics

This approach to constraints has important implications for applied ethics.  

In one manuscript, I show how the knowledge norm of justice helps explain the injustice of racial presumption. In another piece, coauthored with Tucker Sigourney, we show how a correct understanding of the constraint against theft implies a radical duty of charitable aid.


Ethics and Algorithms

I am particularly interested in what this knowledge norm means for computerized algorithms. There is a strong pragmatic reason to use such algorithms to decrease the noise in human decision making. For example, it is likely that in the near future lethal autonomous weapons will be more accurate than the average soldier at distinguishing combatants from non-combatants. However, the knowledge norm raises a moral puzzle, since it is unclear whether a probabilistic algorithm can ever know that someone is a combatant.

 

I argue that this knowledge problem best explains the persistent intuitive objection many have to certain uses of algorithmic decision making. The knowledge problem, in particular, helps explain why we are troubled by some uses of computer algorithms but not others. We are unsettled by the choice to let computers decide whether to target someone in war, whether to fire teachers for poor performance, or what sentence to give a criminal; in such cases, we think it is important that human judgment stay "in the loop." In contrast, we are perfectly comfortable using the Apgar score to decide which infants are taken to the NICU; in fact, we think it important that doctors not try to overrule the algorithm using their less reliable professional judgment. What unites the troubling cases, I argue, is that they are all cases where a knowledge norm is in play. You should know that someone is a combatant before targeting them, you should know that someone is a poor teacher before firing them, and you should know what a criminal deserves before delivering a sentence. Yet we are unwilling to attribute knowledge to the results of probabilistic algorithms. This explains, for example, the intuition that algorithmic decision making fails to treat people as individuals. It fails to treat them as individuals because you are not basing your decision on what you know about the individual, but rather on the generalizations you know about those like the individual.

 

We thus face a conflict between the demands of statistical accuracy and the demands of knowledge. On the one hand, humans are more likely than computers to make classificatory mistakes. On the other hand, when humans don't make mistakes, we are more willing to say that they know that their classification is correct. It is tempting to think we should, following someone like David Papineau, just give up on knowledge and go all in on statistical accuracy. After all, what we really care about is preventing civilian deaths. I argue that this would be a mistake: we cannot abandon the knowledge norm without bringing the whole edifice of justice down with it.

Virtue

How does a generous person's practical reasoning differ from a clever miser's? The two needn't differ in their beliefs about ways to help others. Rather, they differ in how they see opportunities to help. The generous see others' need as a generous person does: as essentially connected with practical inclinations. The clever miser, by contrast, sees that need merely as material for possible exploitation. In work in progress, I explore the action-theoretic foundations of this "connatural" knowledge of action as well as its parallels in epistemology.

 

New Creation
I'm particularly interested in how to understand virtue within a religious framework. If humans will ultimately live forever in God's perfected kingdom, that poses a challenge for understanding the value of the virtues. Most explanations of the value of the virtues cite the role virtues play in ameliorating evil conditions and maintaining good ones. Courage, for example, seems principally to help us deal with a world in which death is a reality. This poses a problem for the Christian ethicist: it seems that the virtues would lose their value once we enter the new creation. What use, for example, is courage when we will no longer confront death or pain?


In my paper "And All Shall be Changed"—forthcoming in Oxford Studies in Philosophy of Religion—I try to give a principled explanation of how our understanding of the new creation can fruitfully inform our understanding of virtue. I argue that the virtues dispose us to enjoy ideal conditions. In another manuscript, I show how this account can provide a non-retributive explanation of orthodox Christian eschatology. 
