Introspection is assumed to provide us with reliably accurate data about conscious experience. Accordingly, we continue to take introspective reports at face value and then deploy them as unquestioned constraints on consciousness or as a starting point for substantial theorizing. But what if our data are not accurate? My concerns do not derive from a general skepticism about introspection. Indeed, I see introspection as no different from other discrimination and detection capacities such as perception: there are contexts in which such capacities are accurately deployed and others in which they are not. My concern is more specific. We do not know whether the data that have been central to philosophical work are generated in unreliable contexts. Indeed, I think we often generate introspective judgments in unreliable ways (though we need not). I have raised questions about introspective data as deployed in arguments for bodily ownership in somatosensory experience and in arguments for unconscious vision. Here, I focus on a case that philosophers have confidently invoked in discussing the metaphysics of consciousness: visual blur. I will argue that introspection of blur is accurate for foveal vision. However, philosophers have used their judgments about peripheral vision to test metaphysical accounts of consciousness. For that case, I argue that we do not have good data, and that generating good data requires experimental rigor. Indeed, it might be that computational models do better than introspection in giving us tools for describing the phenomenology of peripheral vision. As part of this package, I present simple psychological models of introspection that clarify the role of attention in it.