I grew up in one of the suburbs of Salt Lake City, Utah. My mother worked as a piecemeal seamstress; my father worked driving a locomotive and painting military camouflage (yes, that's two 40-hour weeks, plus overtime). I read a lot of books, tossed a lot of boxes out of trailers, and even spent a bit of time building railroads. I bounced around the US for a few years, and eventually earned a Ph.D. in philosophy from The University of North Carolina - Chapel Hill. After that, I spent two years as a post-doc in the Cognitive Evolution Laboratory in the department of psychology at Harvard University; I spent one concurrent year as a postdoctoral associate in the Center for Cognitive Studies at Tufts University (I figured that since my father had held two jobs at once, I might as well too). Those two years have shaped my research most strongly, and the things that I learned by talking with Dan Dennett and Ray Jackendoff have had an incredibly strong impact on the way in which I approach philosophy. Since the fall of 2009, I have been an Assistant Professor in the department of philosophy at Georgetown University.
My research is incredibly interdisciplinary. I publish work in both philosophical and psychological venues, and I have a bit of a fetish for statistics and reinforcement learning. My research tends to focus on a cluster of interrelated issues in moral psychology and the (ruthlessly naturalized) metaphysics of mind. I have carried out a number of empirical studies on the cognitive strategies people employ in making moral judgments, though I have become skeptical of the possibility of using such results to uncover the computational principles of the moral mind. More recently, I have been thinking about the ways in which neuroscientific and computational research on learning and motivation might be able to fund a more plausible account of moral cognition. I hope, eventually, to be able to contribute to the understanding of how we make moral decisions in situ. I am also doing more theoretical research on the architecture of the mind, more broadly. Specifically, I am interested in the extent to which genuinely cognitive capacities can extend beyond the bounds of skin and skull. Are the people and objects in our environment scaffolding for the production of our mental states? Or, more intriguingly, do groups of people and their technological environment sometimes constitute extended cognitive systems? I tend to think that the more intriguing claim is true in some limited cases, and I have recently finished a book manuscript in which I argue that the tools of the cognitive sciences can be brought to bear on the purposeful behavior of groups--in some cases, I argue, groups have minds in precisely the same sense that individuals do!