Workshop: Deference and Technology
9-10:20am: Allan Hazlett (WashU), “Deference and Understanding.” It has been suggested that there are situations in which you ought not defer to someone because your deferring to them would prevent you from understanding something. The idea is that deference, in the relevant cases, results in the deferring subject ceasing to inquire about the topic at hand. The case of deferring to AI chatbots seems like a case in point: the whole point of these technologies is to free up the user’s time to do tasks other than inquiring. However, I will argue that it is never the case that you ought not defer to someone – or something – because your deferring to them would prevent you from understanding something. This is because of a necessary connection between “ought” and reasons, a necessary condition on reasons, and a limitation on our ability to suspend judgment.
10:30-11:50am: Maria Waggoner (Purdue University), “Socially Extended Moral Understanding and Outcome Homogenization in GenAI.” This paper argues that a society that heavily relies on the use of GenAI will be faced with outcome value homogenization, or a loss of diversity in our moral values. One plausible effect is that the diversity of moral minorities, or groups of people who hold differing moral values, will disappear. I argue that, at the very least, this will negatively affect one’s ability to internalize certain moral norms or values and thereby can negatively impact one’s moral understanding. Specifically, the worry is that it will threaten one’s ability to acquire the affective component of moral understanding, and so a new kind of affective injustice may surface.
1:40pm-3pm: Luis Rosa (WashU), “Can Computers Know?” In order for a hearer or reader to know some fact on the basis of what a testifier says or writes, the latter must have knowledge of that fact themselves. That is what an influential view in the literature on testimony says—the so-called Transmission View. The idea is that a testifier cannot give someone knowledge if that testifier doesn’t have that knowledge to begin with (nobody can give what they do not have). Now consider the question of whether we know things on the basis of what computers tell us—say, using a digital calculator, Google Maps, or ChatGPT. Assuming that the Transmission View is right, in order for us to acquire testimonial knowledge on the basis of what computers tell us, those computers must know what they are telling us. But can computers have knowledge? In this talk, I will argue that they can and that they actually do. That requires clarification about what exactly a computer is, however, as well as about the conditions under which the computer itself tells us something. Surprisingly, in explaining how computers know things, we will find that there are no counterexamples to the Transmission View about testimonial knowledge (for example, the famous case of the creationist teacher who informs their students about Darwinian evolution).
3:10-4:30pm: Graham Curtiss-Rowlands (WashU), “Reasoning with Robots about Politics.” Will LLMs, like ChatGPT and Gemini, enhance our autonomy, or merely compromise it? In this talk, I consider the potential of LLMs to enhance our autonomy in a domain where fears of compromised autonomy are especially pertinent: political reasoning. First, LLMs can provide helpful scaffolding for thinking through which candidates and policies will actually align with our values and priorities, so we may make more autonomous choices. Second, to the extent that being enmeshed in conspiracy theories, echo chambers, and epistemic bubbles compromises our capacity for autonomous political reasoning, recent research suggests that interactions with LLMs may help by counteracting these tendencies. With that said, we should be clear-eyed about the very real dangers of integrating LLMs into our political reasoning. To ensure that LLMs enhance rather than compromise our ability to reason about politics autonomously, safeguards must be in place to prevent capture of LLMs by partisan interests. Additionally, efforts must be made to ensure that LLMs maintain factual accuracy even if that may lead to perceptions of bias.
4:40-6pm: Robert Howell (Rice University), “AI and the Erosion of Virtue.” Reliance on AI, and deference to its processes, threatens our epistemic and moral virtues. While we should not neglect its potential, we must be careful about how and when we integrate it into our lives. The AI economy is currently picking low-hanging fruit, integrating AI into every app and utility without regard for the impact on the humans who use it, and often without clear use cases or partitions. From the perspective of developing human potential, this is precisely the wrong approach.
Sponsored by the Redefining Doctoral Education in the Humanities Initiative.