
dc.contributor.advisor: Symons, John
dc.contributor.author: Alvarado, Ramon
dc.date.accessioned: 2021-07-20T20:02:49Z
dc.date.available: 2021-07-20T20:02:49Z
dc.date.issued: 2020-05-31
dc.date.submitted: 2020
dc.identifier.other: http://dissertations.umi.com/ku:17063
dc.identifier.uri: http://hdl.handle.net/1808/31732
dc.description.abstract: Although now ubiquitous in scientific inquiry, the role and significance of computer simulations in science are still widely debated. Questions that arise in the philosophical literature on computer simulations include how they relate to other elements of scientific inquiry, what they do in relation to other methods, practices, and devices, and how we can justifiably trust them. Recent debates in the epistemology of computer simulations have attempted to answer these questions by starting from an assumed dichotomy that characterizes simulations either as extensions of formal methods, given the mathematical content they manipulate and yield, or as special forms of experiments (and their associated practices) capable of yielding empirical information about the systems they simulate. In this dissertation I contend that computer simulations are neither. I argue that these views fail to capture the independence, and hence the distinctiveness, of computer simulations: their functions are not identical to those of the elements of scientific inquiry under which they are conventionally subsumed. In other words, I will argue that computer simulations are not the formal methods they implement, they are not the experiments they can simulate, and they are not the practices associated with them either. Rather, they are technical artifacts, of the kind we usually call instruments, with which the above tasks and practices can be carried out. This is the 'instrument view' of computer simulations. Philosophical discussions of the nature of computer simulations have successfully, and in my opinion correctly, moved away from a purely formal treatment of them. Computer simulations, in short, are now acknowledged to be more than the equations and theories on which they are based or which they represent. As I will note, these formal methods must undergo significant transformations in the construction of practical computer simulations, transformations that generally detach simulations from formal principles and introduce practical constraints demanding independent epistemic assessment of their reliability. In other words, once the formal methods that underlie a computer simulation are implemented, they are no longer its only epistemically relevant constituents. Rather, they are but one of the many considerations required to understand what a computer simulation is, how it works, and when we can or should trust it in scientific contexts. Furthermore, this detachment implies that computer simulations are not, strictly speaking, identical to the formal methods from which their construction is derived. As I will argue in detail in this dissertation, if they were, we would not need them. Understanding and treating them purely as formal methods, or as derivative processes not significantly distinct from them, risks missing the broader, and more accurate, picture of their functioning and their role in scientific inquiry. The acknowledgement of extra-formal elements, that is, elements that are neither mathematical, logical, strictly theoretical, nor symbolic, as constitutive of computer simulations has moved the current discourse on the role and nature of computer simulations in scientific inquiry toward a broader understanding of them as closer to experimental practice.
The assumption is that if computer simulations are not like their formal constituents, and if they have other empirically relevant aspects, then they must be closer to the other branch of scientific inquiry: namely, experiments. Hence, throughout the first decade of the 21st century researchers sought to understand computer simulations in light of their similarities and dissimilarities to empirical experiments. Some researchers sought to equate the epistemic status of computer simulations with that of conventional empirical experiments on the grounds, they argued, that empirical, previously unknown, knowledge about the subject of inquiry can be attained through the simulation process. Some sought to exclude them from the category of experiments because they lack causal intervention and access to the material aspects of the phenomenon under inquiry. Others tried to tie their necessary material implementation to an a posteriori status and hence to a status closer to that of experiments. More recent accounts within the broader view of computer simulations as experiments, however, have reached interesting conclusions working under the assumption that computer simulations are, if not a conventionally understood empirical practice, then at least a special kind of experiment: one whose main interventions focus on, and whose results derive from, the manipulation of numerical values and metrics rather than of physical phenomena themselves. Both approaches, simulations as formal methods and simulations as experiments, have contributed greatly to the understanding of computer simulations and their relevant constitutive and epistemic elements: models and equations are indeed at the core of many simulation efforts, and experimental aims and empirical knowledge are often the driving force behind computer simulations. Understanding both as relevant epistemic elements of computer simulations is important for understanding what computer simulations are and what they do in scientific inquiry. What these approaches miss, however, is that computer simulations are neither formal methods nor experiments in and of themselves. Nor are they closer to one or the other. Rather, as I will argue, they are instruments, that is, technical artifacts, and like other instruments, computer simulations do not fit neatly into either the category of formal methods or the category of empirical experiment because they are neither. The status, place, and role of computer simulations as scientific instruments, however, are far from obvious. This is partly due to the narrow applications of their early development as electromechanical extensions of formal analysis and solution. But it is also due to the fact that instruments seldom figure as an independent and relevant branch of inquiry in the philosophy of science. Rather, arguments must be mustered to show how computer simulations fit this category, how they manifest their nature as instruments in the way they function in scientific inquiry, and how we can sanction and rely on them as such. This is the main goal of this dissertation. I suggest that computer simulations are technical artifacts whose functioning is distinct from that of the formal methods they emulate and the formal content they manipulate. I suggest that they are also functionally distinct from the experiments in which they are involved. That is, they do things that neither of these elements of inquiry can do.
They are, in short, the things, the technical artifacts, the instruments, with which both formal analysis and experiments can be enhanced or carried out. That they are instruments, however, does not settle the matter of their role and status in scientific inquiry; rather, this fact is but the starting point. Computer simulations are also distinct from other instruments in significant ways, for even within this category they exhibit a form of hybridity that makes sorting them into their proper place a non-trivial matter. It is in this sense that computer simulations can be understood as novel additions to scientific inquiry: not, as some suggest, as a sui generis way of doing science, but as a special kind of instrument that brings the relationship between theoretical and empirical elements of inquiry to the fore in ways other instruments have not. In this dissertation I will also consider an important objection to my view of computer simulations as instruments. This objection comes from recent developments in the philosophy of computer simulation and takes the form of an argument, which I call the 'argument from heterogeneity', according to which computer simulations cannot be understood through so limited a concept as 'tool' or 'instrument', given the many different stages required to construct them, the many different fields of expertise involved in their construction, the many different components that constitute them, and the many different domains in which they prove significantly disruptive. Rather, the argument goes, computer simulations ought to be understood as a practice akin to engineering or medicine: a separate, heterogeneous set of skills, aims, communities, and so on, that together may well constitute a novel way of doing science. In this light, the notion of 'instrument' seems particularly limited. In this dissertation I show that this is not the case and that, in fact, definitive exemplars of instruments are products of the same kind of heterogeneity, reach, and disruptive power that this camp takes to lie beyond objects of this kind. To exclude computer simulations from being understood as instruments solely on the basis of the heterogeneity argument would mean that paradigmatic instances of instruments, such as the telescope, would have to be excluded too. This would be an undesirable consequence of the position. Furthermore, I show that instruments, and computer simulations in particular, can be heterogeneous in all the ways discussed. In short, I show that the argument from heterogeneity is not strong enough to dismiss the instrument view of computer simulations. As I will explain in detail below, the shift toward understanding computer simulations as more closely related to experiments, and away from seeing them as mere extensions of formal methods, has in the last few years gone beyond direct analogies and comparisons to experiments themselves. Early in this shift, the main strategy philosophers used was to provide examples in which computer simulations could function like experiments and to make the case that computer simulations could in fact provide empirical knowledge of the subject of inquiry.
Besides the obvious objections from those who consider them mere extensions of formal a priori methods, one of the main obstacles to this view of computer simulations is the fact that computer simulations do not seem to interact materially and directly with the phenomenon of interest. In response, philosophers pointed to the necessary material implementation of computer simulations, as well as to the similarity relation between a simulation's specifications and mathematical structure and those of the phenomenon of interest, in order to satisfy this condition. As I will argue in this dissertation, what is wrong with these strategies is that each, in its own way, fails to properly capture the artifactual nature of computer simulations. As I will show here, while it is true that computer simulations are not mere extensions of formal methods, and true that the work they do essentially involves extra-formal elements (elements beyond the mathematical, theoretical, logical, and so on), this is not because they are like broader experimental practices, but rather because they are instruments: technical artifacts designed, developed, and deployed in scientific inquiry. Finally, I argue that understanding computer simulations as instruments has significant repercussions for the epistemology of computer simulations. In particular, I argue that, contrary to recent approaches in the epistemology of computer simulations, computer simulations, like other instruments, cannot be deemed trustworthy merely on the basis of non-evidential warrants such as epistemic entitlements: epistemic warrants that give one the right to hold a belief to be true, in the absence of reasonable doubt, without thorough epistemic diligence. Epistemic entitlements are the kind of epistemic warrants deployed to ground the pragmatic constraints and considerations of everyday epistemic practices. One may say, for example, that we are entitled to believe that the driver on the intersecting road ahead of us will stop if the light is red for them and green for us. We are entitled to expect this without knowing much about that particular person, their car, or that specific stoplight. If, in this context, there were an accident involving a car that ran the red light and someone asked why we chose to accelerate on our green light, we could reasonably say that we believed the car facing the red light would stop. Without an appeal to evidence to support our belief, we can reasonably state that we were entitled to believe that they would stop. This is particularly so in the absence of evidence to the contrary. If we saw the oncoming car behaving erratically, if we knew its driver was coming out of a bar, or if its speed provided detectable evidence against the assumption, then this entitlement would be broken. The gist of this epistemic position is that, in the absence of such evidence, we are not to burden the epistemic agent with any further justificatory requirements, such as evidential warrants (explicit justification, proof, empirical access, etc.), to back up their reliance on the way stoplights and drivers are supposed to work at an intersection. In this dissertation I argue that Burge-style epistemic entitlements such as the ones described above have never been adequate grounds for admitting technical artifacts into the epistemically elevated status of scientific instruments.
I conclude the dissertation by contending that, in justifying our use of computer simulations in scientific inquiry, we must distinguish our own reasons for accepting scientific claims, results, and artifacts from the kinds of considerations that a group of oracle worshippers would deploy as they attempt to justify the use of their similarly epistemically opaque device. The superior epistemic norms of scientific inquiry must be upheld as we consider the status of computer simulations. As instruments, rather than experiments, practices, or methods, computer simulations must be assessed against such demanding norms and standards. Computer simulations can serve as scientific instruments only in those circumstances in which they are designed, developed, and deployed in adherence to strong theoretical principles, in which properly curated data are used in their implementation, and in which proper empirical evidence backs their results. Hence, their widespread use may not be as easily justified.
dc.format.extent: 259 pages
dc.language.iso: en
dc.publisher: University of Kansas
dc.rights: Copyright held by the author.
dc.subject: Philosophy
dc.subject: Philosophy of science
dc.subject: Epistemology
dc.subject: Computational methods
dc.subject: Computer
dc.subject: Science
dc.subject: Scientific Instruments
dc.subject: Simulation
dc.title: Computer Simulations as Scientific Instruments
dc.type: Dissertation
dc.contributor.cmtemember: Schulz, Armin
dc.contributor.cmtemember: Maley, Corey
dc.contributor.cmtemember: Alexander, Perry
dc.contributor.cmtemember: Humphreys, Paul
dc.thesis.degreeDiscipline: Philosophy
dc.thesis.degreeLevel: Ph.D.
dc.identifier.orcid: https://orcid.org/0000-0003-0113-4812
dc.rights.accessrights: openAccess

