PSA2014 Contributed Paper Abstracts
The following are the titles and abstracts of accepted contributed papers as they were originally submitted, listed alphabetically by the first author’s last name.
Abrams, Marshall: Coherence, Muller's Ratchet, and the Maintenance of Culture
I investigate the structure of an argument that culture cannot be maintained in a population if each individual acquires each cultural variant from a single person. I note two puzzling consequences of the argument. I resolve the first by showing that one of the models central to the argument is conceptually analogous and mathematically equivalent to a model used to investigate the biological evolution of sexual reproduction. I resolve the second by arguing that probabilistic models of epistemological coherence can be reinterpreted as models of mutual support between cultural variants. I develop a model of cultural transmission illustrating this idea.
Akagi, Mikio: Going against the Grain: Functionalism and Generalization in Cognitive Science
Functionalism is widely regarded as the central doctrine in the philosophy of cognitive science, and is invoked by philosophers of cognitive science to settle disputes over methodology and other puzzles. I describe a recent dispute over extended cognition in which many commentators appeal to functionalism. I then raise an objection to functionalism as it figures in this dispute, targeting the assumption that generality and abstraction are tightly correlated. Finally, I argue that the new mechanist framework offers more realistic resources for understanding cognitive science, and hence is a better source of appeal for resolving disagreement in philosophy of science.
Alexander, J.: Cheap Talk, Reinforcement Learning and the Emergence of Cooperation
Cheap talk has often been thought incapable of supporting the emergence of cooperation because costless signals, easily faked, are unlikely to be reliable (Zahavi and Zahavi, 1997). I show how, in a social network model of cheap talk with reinforcement learning, cheap talk does enable the emergence of cooperation, provided that individuals also temporally discount the past. This establishes one mechanism that suffices for moving a population of initially uncooperative individuals to a state of mutually beneficial cooperation even in the absence of formal institutions.
Autzen, Bengt: The Star Tree Paradox in Bayesian Phylogenetics
The 'star tree paradox' in Bayesian phylogenetics refers to the phenomenon that a particular binary phylogenetic tree sometimes has a very high posterior probability even though a star tree generates the data. In this paper I discuss two proposals of how to solve the star tree paradox. In particular, I defend Lewis, Holder, and Holsinger's polytomy prior against some objections found in the biological literature and argue that it is preferable to Yang's data-size dependent prior from a methodological perspective.
Baetu, Tudor: The Completeness of Mechanistic Explanations
The purpose of the paper is to provide methodological guidelines for evaluating mechanistic explanations, meant to complement previously elaborated interventionist norms. According to current accounts, a satisfactory mechanistic explanation should include all of the relevant features of the mechanism, its component entities and activities, their properties and their organization, as well as exhibit productive continuity. It is not specified, however, how this kind of mechanistic completeness can be demonstrated. I argue that parameter sufficiency inferences based on mathematical model simulations of known mechanisms are used to determine whether a mechanism capable of producing the phenomenon of interest can be constructed from mechanistic components organized, acting, and having the properties described in the mechanistic explanation.
Bain, Jonathan: Pragmatists and Purists on CPT Invariance in Relativistic Quantum Field Theories
Greenberg (2002) claims that the violation of CPT invariance in an interacting RQFT entails the violation of Lorentz invariance. This claim is surprising since standard proofs of the CPT theorem require more assumptions than Lorentz invariance, and are restricted to non-interacting, or at best, unrealistic interacting theories. This essay analyzes Greenberg's claim in the context of the debate between pragmatist approaches to RQFTs, which trade mathematical rigor for the ability to derive predictions from realistic interacting theories, and purist approaches, which trade the ability to formulate realistic interacting theories for mathematical rigor.
Banville, Frédéric-I: Accounting for the Dynamics of Inquiry in Neuroscience
In this paper I demonstrate that Bechtel and Richardson's (2010) account of heuristics in science is too narrow to capture the range of problems involved in inquiry. This is a problem insofar as Bechtel and Richardson are concerned with descriptive accuracy. Recognizing distinct kinds of problems and heuristics enables an appreciation of the dynamics of scientific inquiry that is more descriptively accurate. Through a case study of the discovery of place cells (O'Keefe and Dostrovsky 1971) and the general framework that made this discovery possible (Tolman 1948), I show that conceptual problems determine which empirical problems constitute research agendas.
Barwich, Ann-Sophie: A Fine Nose for Timeliness: The Discovery of the Olfactory Receptors and the Question of Novelty
What characterizes novelty in a scientific discovery? I present a case study that has not been dealt with in philosophical debate and revisit the notion of scientific discovery. Analyzing the historical trajectory that led to the discovery of the olfactory receptors, and focusing on the unprecedented application of degenerate Polymerase Chain Reaction, I develop an alternative notion of discovery that pertains to the development of laboratory practices. It concerns the ways in which advancing techniques become integrated into an evolving experimental context by bridging the gap between the requirements of standardization and the idiosyncrasy of research materials.
Bechtel, William: Biological Mechanisms Don't Exist Except as Theoretical Posits
I argue that biological mechanisms exist as constitutionally and dynamically distinct entities only as posited in mechanistic explanations. Evidence is growing that what are taken to be constituents of biological mechanisms often interact as much with entities outside the putative mechanism as with each other. Moreover, the activities of a mechanism are often modulated by its own activity in the distant past. Some would take these findings as an impetus to holism and abandonment of the pursuit of mechanistic explanation. I argue instead for a more nuanced understanding of mechanistic explanation, in which the delineation of mechanisms is recognized as based on the epistemic aims of scientists.
Benétreau-Dupin, Yann: Blurring Out Cosmic Puzzles
The Doomsday argument and anthropic arguments are illustrations of a paradox. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, the paradox constitutes a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of the paradox cannot be dissolved within the framework of orthodox Bayesianism. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance in Bayesian reasoning, and explains away these puzzles.
Bewersdorf, Benjamin: Total Evidence, Uncertainty and A Priori Beliefs
Defining the rational belief state of an agent in terms of an a priori, hypothetical or initial belief state as well as the agent's total evidence can help to address a number of interesting philosophical problems. In this paper, I discuss how this strategy can be applied to cases in which evidence is uncertain. I also argue that taking evidence to be uncertain allows us to uniquely determine the subjective a priori belief state of an agent from her present belief state and her total evidence, given that evidence is understood in terms of update factors.
Bigaj, Tomasz: Quantum particles, individual properties, and discernibility
The paper discusses how to formally represent properties characterizing individual components of quantum systems containing many particles of the same type. It is argued that this can be done using only fully symmetric projection operators. An appropriate interpretation is proposed and scrutinized, and its consequences related to the notion of quantum entanglement and the issue of discernibility and individuality of quantum particles are ascertained.
Bokulich, Alisa: Frankenmodels, Or a Cautionary Tale of Coupled Models in the Earth Sciences
In recent decades a new breed of simulation models has emerged in the Earth sciences known as coupled models, which involve a suite of independently developed component modules integrated in a software framework. Such models purport to offer 'plug and play' capabilities, allowing researchers to easily combine and swap out different component models in order to build larger, more complex models and facilitate inter-model comparisons. I examine such coupled models in the context of geomorphology and argue that although advances in software programming mean these models can be coupled from a 'technological' standpoint, they are not typically adequately coupled from a scientific standpoint, leading to what I call 'Frankenmodels.' I highlight a number of conceptual challenges that the coupled-model approach in geomorphology will need to overcome in order to succeed.
Brigandt, Ingo: Social Values Influence the Adequacy Conditions of Scientific Theories: Beyond Inductive Risk
Many philosophers who maintain that social and other non-epistemic values may influence theory acceptance do so based on the idea that when the social consequences of erroneously accepting a theory would be severe, a higher evidential threshold has to obtain. While an implication of this position is that increasing evidence makes the impact of social values converge to zero, I argue for a stronger role for social values, according to which social values (together with epistemic values) determine a theory's conditions of adequacy, e.g., what makes a scientific account complete and unbiased.
Brown, Matthew, and Havstad, Joyce: The Disconnect Problem in Science and Policy
We diagnose a new problem for philosophers of science to engage with: the disconnect problem. Instances of the disconnect problem arise wherever there is ongoing and severe discordance between the scientific assessment of a politically relevant issue, and the politics and legislation of said issue. Issues which currently display a persistent disconnect problem include both biological education about evolution and anthropogenic global climate change. Here we use the case of climate change first to diagnose the disconnect problem -- uniting scattered philosophical critiques of various science-policy interactions in the process -- and then to solve it, by proposing an alternative framework for thinking about science-policy interaction. We sketch this framework, which draws on feminist and pragmatist sources, in general and in application to the increasingly serious problem of climate change.
Bueno, Otavio: What Does a Mathematical Proof Really Prove?
Jody Azzouni has argued that underlying the practice of creating mathematical proofs there is a very specific norm: to each proof there should be a corresponding algorithmic derivation, a derivation in an algorithmic system. In this paper, I take issue with this proposal, and provide a framework to classify and assess mathematical proofs. I argue that there is a plurality of kinds of proofs in mathematics and a plurality of roles these proofs play. In the end, mathematical practice is far less unified than Azzouni's view recommends.
Bueter, Anke: Is it Time for an Etiological Revolution in Psychiatric Classification?
Current psychiatric classification (as exemplified by the DSMs) is based on phenomenology - a fact that many critics hold accountable for its problems, such as heterogeneity, comorbidity, or lack of predictive success. Therefore, these critics urge that it is time for psychiatric classification to move on to an etiology-based system. Yet, most of the arguments brought forward for such a change rely on unwarranted epistemological, ontological, or empirical assumptions. Others raise valid points about problems with the current DSM classification; however, those more successful arguments do not establish the legitimacy of an etiological revolution either, but instead call for greater pluralism.
Callender, Craig, and Wüthrich, Christian: What Becomes of a Causal Set
Contemporary physics is notoriously hostile to an A-theoretic metaphysics of time. A recent approach to quantum gravity promises to reverse that verdict: advocates of causal set theory have argued that their framework is at least consistent with a fundamental notion of `becoming'. In this paper, after presenting some new twists and challenges, we show that a novel and exotic notion of becoming is compatible with causal sets.
Cao, Rosa: Where Information Fades Away: Some Limitations of Informational Explanations in Neuroscience
Should the ubiquitous talk of information and coding in neuroscience be taken at face value? I will examine cases in sensory neurophysiology where spike trains in single neurons, or spiking activity in small groups of neurons, are said to encode information about distal stimuli. Given the explanatory goals in those cases, I will argue that ascribing informational content to patterns of spike activity in these cases is not doing explanatory work - spikes carry information for the scientist, rather than for the system itself. The activity of large populations of cells, by contrast, is a better candidate for informational ascriptions.
Chambliss, Bryan: Optimality and Bayesian Perceptual Systems
Bayesian models of perception are criticized, ranging from being untestable (because unrelated to neurophysiology), to being false (because committed to optimal behavior). Consequently, Bayesian models seem futile if understood as being more than merely predictive. Careful inspection of the explanatory purport of Bayesian models splits this dilemma. Structural similarities between Bayesian models of perception and optimality modeling in biology suggest that we export lessons concerning biological optimality models to the context of Bayesian modeling. Upon properly appreciating (a) the operative level of analysis and (b) the explanatory force of Bayesian models, we see their explanatory purport and understand their limitations.
Clatterbuck, Hayley: Contingency and the Origin of Life
Recently, several philosophers of science (White 2007, Nagel 2012) have argued that naturalistic scientific theories cannot adequately explain the origin of life because they show that life's emergence was highly contingent on precise and improbable initial conditions. Bayesian confirmation theory offers an analysis of when and how contingency presents impediments to scientific explanation. These impediments can be overcome, so non-contingency is not a necessary condition for good scientific explanation. Further, alternative, non-naturalistic theories do no better than naturalistic theories in securing the virtues of non-contingent explanations when applied to the case of life's origin.
Coffey, Kevin: Quine-Duhem through a Bayesian Lens
One virtue attributed to Bayesian confirmation theory is its ability to solve the Quine-Duhem problem---the problem of distributing blame between conjuncts in light of evidence refuting their conjunction. By requiring rational agents to update their (partial) beliefs by conditionalizing on new evidence, Bayesianism seems to provide a well-defined method for assessing the impact of empirical evidence on our beliefs. Recently, however, Michael Strevens has criticized the standard Bayesian solution. By appealing to the concept of a Newstein effect, he argues that the standard `blame measure' fails to be sufficiently objective and to accurately reflect the impact of a conjunction's falsification on its individual conjuncts. Strevens then proposes an alternative Bayesian solution that purports to avoid these difficulties. This paper assesses Strevens' claims in light of a more detailed analysis of Newstein effects. I conclude that (1) his alternative blame measure is equally susceptible to Newstein effects, but (2) Newstein effects are actually unproblematic features of Bayesian epistemology, already implicit in the Bayesian framework, and (3) Strevens' alternative blame measure fails as a general solution to the Quine-Duhem problem. The upshot is that the standard Bayesian solution to the Quine-Duhem problem is superior to Strevens' alternative.
Cordero-Lecca, Alberto: Where's the Beef? Selective Realism and Truth-Content Identification
Selective realism seeks to identify theory-parts with high truth-content. One recent strategy, led by Juha Saatsi, Peter Vickers, and Ioannis Votsis, focuses on the analysis of derivations of impressive predictions from a theory. This line greatly clarifies and refines the notions of "theory-part" and "truth-content," but also leads to a "bare-bones" version of realism that stops surprisingly short of target. Sections 1 and 2 discuss the derivational approach in progress. Sections 3 and 4 trace the noted shortcoming to an excessive concentration on single-case derivations and minimalist interpretation. Sections 5 and 6 suggest adjustments that keep the focus on truth-content but expand the assessment of theory-parts to include both their overall track-record and external support for them. The resulting criterion, I argue, yields a "beefier" version of selective realism that is reasonably in tune with the array of theory-parts deemed successful and beyond reasonable doubt (based on standard confirmational practices in the natural sciences).
Cuffaro, Michael: How-Possibly Explanations in Quantum Computer Science
A primary goal of quantum computer science is to find an explanation for the fact that quantum computers are more powerful than classical computers. In this paper I argue that to answer this question is to compare algorithmic processes of various kinds, and in so doing to describe the possibility spaces associated with these processes. By doing this we explain how it is possible for one process to outperform its rival. Further, in this and similar examples little is gained in subsequently asking a how-actually question. Once one has explained how-possibly there is little left to do.
Danks, David: The Mathematics of Causal Capacities
Models based on causal capacities, or independent causal influences/mechanisms, are widespread in the sciences. This paper develops a natural mathematical framework for representing such capacities by extending and generalizing previous results in cognitive psychology and machine learning, based on observations and arguments from prior philosophical debates. In addition to its substantial generality, the resulting framework provides a theoretical unification of the widely-used noisy-OR/AND and linear models, thereby showing how they are complementary rather than competing. This unification helps to explain many of the shared cognitive and mathematical properties of those models.
Del Pinal, Guillermo, and Nathan, Marco: Bridge Laws and the Psycho-Neural Interface
Recent advancements in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation. These findings have revived a longstanding debate regarding the relation between scientific fields: while many authors now claim that neuroscientific data can be used to advance our theories of higher cognition, others defend the so-called `autonomy' of psychology. Settling this question requires understanding the nature of the bridge laws used at the psycho-neural interface. While bridge laws have been the topic of longstanding philosophical discussions, philosophers have mostly focused on a particular type of bridge laws, namely, reductive bridge laws. The aim of this article is to present and provide a systematic analysis of links of a different kind---associative bridge laws---that play a central role in current scientific practice, but whose role has been often overlooked in methodological and metaphysical discussions in philosophy of mind and science.
Dougherty, John: A few points on gunky space
Arntzenius (2012) proposed a mathematical model for gunky space, motivated in part by philosophical considerations, in part by considerations from quantum mechanics. After recasting this model in perspicuous terms, I answer an unresolved technical question. Turning to quantum mechanics, I argue that the motivations for this model disregard important parts of the quantum formalism: the observables. When these are taken into account, gunk appears incompatible with quantum mechanics. This conclusion poses a dilemma for the reconciliation of gunk with modern physics. Either one must reject the physical possibility of gunk, or one must find a systematic way of eliminating points from physics.
Duwell, Armond, and Le Bihan, Soazig: Enlightening falsehoods: A modal view of scientific understanding
A new concept of understanding is articulated, modal understanding, which is characterized as follows: one has some modal understanding of some phenomena if and only if one knows how to navigate some of the possibility space associated with these phenomena, where by "possibility space" is meant the set of possible dependency structures that give rise to all subsets of the phenomena and the relations between those structures. When fully articulated, the notion of modal understanding serves as a suitable concept of understanding that is appropriately neutral with respect to the debate on scientific realism and helps explain the modeling practices of scientists.
Feest, Uljana: Physicalism, Introspection, and Psychophysics: The Carnap/Duncker Exchange
In 1932, Rudolf Carnap published his article "Psychology in a Physical Language." The article prompted a critical response by the Gestalt psychologist Karl Duncker. The exchange is marked by mutual lack of comprehension. In this paper I will provide a contextualized explication of the exchange. I will show that Carnap's physicalism was deeply rooted in the psychophysical tradition that also informed Gestalt psychological research. By failing to acknowledge this, Carnap missed out on the possibility to enter into a serious debate and to forge an alliance with a like-minded psychologist at the time.
Fenton-Glynn, Luke: Ceteris Paribus Laws and Minutis Rectis Laws
Special science generalizations admit of exceptions. Among the class of non-exceptionless special science generalizations, I distinguish (what I will call) *minutis rectis* (*mr*) generalizations from the more familiar category of *ceteris paribus* (*cp*) generalizations. I argue that the challenges involved in showing that *mr* generalizations can play the law role are underappreciated, and quite different from those involved in showing that *cp* generalizations can do so. I outline some potential strategies for meeting the challenges posed by *mr* generalizations.
Fillion, Nicolas, and Bangu, Sorin: Solutions in the Mathematical Sciences & Epistemic Hierarchies
Modern mathematical sciences are hard to imagine without the involvement of computers, or more generally, without appeal to numerical methods. Interesting conceptual problems arise from this interaction, but philosophers of science have yet to catch up with these developments. This paper sketches and examines one such problem, a tension between two types of epistemic contexts, one in which exact solutions can be found, and one in which they can't. Against this background, an investigation of some intriguing computational (a)symmetries is undertaken.
Forster, Malcolm: How the Quantum Sorites Phenomenon Strengthens Bell's Argument
A recent theorem by Colbeck and Renner (2011) leads to a significantly stronger theorem than Bell's famous theorem; the new theorem does not assume Outcome Independence. It is suggested that the reason that the stronger theorem was not discovered earlier is that it exploits a very strong prediction of quantum mechanics by applying some mathematical "tricks" to bypass the need to use Outcome Independence. The very strong prediction of quantum mechanics is aptly described as the "quantum Sorites" phenomenon.
Frost-Arnold, Greg: Should a Historically Motivated Anti-Realist be a Stanfordite?
Suppose one holds that the historical record of discarded scientific theories provides good evidence against scientific realism. Should one adopt Kyle Stanford's specific critique of realism? I present reasons for answering this question in the negative. In particular, Stanford's challenge, based on the problem of unconceived alternatives, cannot use many of the prima facie strongest pieces of historical evidence against realism: (i) superseded theories whose successors were explicitly conceived, and (ii) superseded theories that were not the result of elimination-of-alternatives inferences.
Fuller, Jonathan: The Confounding Question of Confounding Causes in Randomized Trials
Comparative group studies, including randomized trials, control for confounding variables. Some epidemiologists praise randomized trials for controlling for all confounding causes, while some philosophers praise the assumption that all confounding causes are controlled for, on the grounds that it supports sound causal inference. Both views are problematic. Exposing the problems clears the way for an alternate assumption that can better guide causal inference in group studies.
Fumagalli, Roberto: No Learning from Minimal Models
This paper examines the issue whether consideration of so-called minimal models can prompt learning about real-world targets. Using a widely cited example as a test case, I argue against the increasingly popular view that consideration of minimal models can prompt learning about such targets. In particular, I criticize the proponents of this view for failing to explicate in virtue of what properties or features minimal models supposedly prompt learning and for substantially overstating the epistemic import of minimal models. I then consider and rebut three arguments one might develop to defend the claim that consideration of minimal models can prompt learning about real-world targets. In doing so, I illustrate the implications of my critique for the wider debate on the epistemology of scientific modelling.
Gandenberger, Greg: Why I Am Not a Methodological Likelihoodist
Methodological likelihoodism is the view that it is possible to provide an adequate self-contained methodology for science on the basis of likelihood functions alone. I argue that methodological likelihoodism is false because an adequate self-contained methodology for science provides good norms of commitment vis-a-vis hypotheses and no purely likelihood-based norm meets this standard.
Garson, Justin: Why (a Form of) Function Indeterminacy is Still a Problem for Biomedicine, and How Seeing Functional Items as Components of Mechanisms Can Solve it
During the 1990s, many philosophers wrestled with the problem of function indeterminacy. Although interest in the problem has waned, I argue that solving the problem is of value for biomedical research and practice. This is because a solution to the problem is required in order to specify rigorously the conditions under which a given item is "dysfunctional." In the following I revisit a solution developed originally by Neander (1995), which uses functional analysis to solve the problem. I situate her solution in the framework of mechanistic explanation and suggest two improvements.
Genin, Konstantin, Kelly, Kevin, and Lin, Hanti: A Topological Theory of Empirical Simplicity
We propose an account of empirical simplicity relative to an empirical problem, which specifies a question to be answered and the potential information states the scientist might encounter. The motivating idea is that empirical simplicity reflects iterated empirical underdetermination in the given problem, which agrees closely with Popper's ideas about falsifiability. The idea is to collapse distinctions in the problem by means of structure-preserving transformations until the simplicity order emerges. Such a simplicity concept serves as a genuinely epistemic road map for a learner seeking the true answer to an empirical problem.
Gomori, Marton, and Szabo, Laszlo: How to Move an Electromagnetic Field?
The special relativity principle presupposes that the states of the physical system concerned can be meaningfully characterized, at least locally, as states in which the system is at rest or in motion with some velocity relative to an arbitrary frame of reference. In the first part of the paper we show that electrodynamic systems, in general, do not satisfy this condition. In the second part of the paper we argue that exactly the same condition serves as a necessary condition for the persistence of an extended physical object. As a consequence, we argue, electromagnetic field strengths cannot be the individuating properties of the electromagnetic field---contrary to the standard realistic interpretation of CED. In other words, CED is ontologically incomplete.
Grüne-Yanoff, Till: Why Behavioural Policy Needs Mechanistic Evidence
Proponents seek to justify behavioural policies as "evidence-based". Yet they typically fail to show through which mechanisms these policies operate. This paper shows, by means of examples from economics, psychology and biology, that without sufficient mechanistic evidence, one often cannot determine whether a given policy in its target environment will be efficacious, robust, persistent or welfare-improving. Because these properties are important for justification, policies that lack support from mechanistic evidence should not be called "evidence-based".
Harinen, Totte: Normal Causes for Normal Effects
Halpern and Hitchcock have used normality considerations in order to provide an analysis of actual causation. Their methodology is that of taking a set of causal scenarios and showing how their account of actual causation accords with typical judgments about those scenarios. Consequently, Halpern and Hitchcock have recently demonstrated that their theory deals with an impressive number of problem cases discussed in the literature. However, in this paper I first show that the way in which they rule out certain cases of bogus prevention leaves their account susceptible to counterexamples. I then sketch an alternative approach to prevention scenarios which draws on the observation that, in addition to abnormal causes, people usually tend to focus on abnormal effects.
Hartmann, Stephan: A New Solution to the Problem of Old Evidence
The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have issues and none of them is considered to be the definitive solution. In this short note, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.
Heilmann, Conrad: A New Interpretation of the Representational Theory of Measurement
On the received interpretation, the Representational Theory of Measurement (RTM) depicts measurement as numerical representation of empirical relations. This account of measurement has been widely criticised. In this paper, I provide a new interpretation of RTM that sidesteps these criticisms. Instead of assessing it as a candidate for a full-fledged theory of measurement, I propose to view RTM as a library of theorems that investigate the numerical representability of qualitative relations. Such theorems are useful tools for concept formation for a broad range of important applications in linguistics, rational choice, metaphysics, economics, and the social sciences.
Henderson, Leah: Should the debate over scientific realism go local?
There have been suggestions that 'going local' may be the most productive approach for one or both sides in the debate over scientific realism. I distinguish between two different things that might be meant by going local. I argue that in neither case is it clear that localizing is either necessary or helpful. In fact, it may even be a distraction from a proper investigation of the empirical question about the history of science which emerges from the traditional debate.
Hey, Spencer: Theory Testing and Implication in Clinical Trials
John Worrall (2010) and Nancy Cartwright (2011) argue that randomized controlled trials (RCTs) are "testing the wrong theory." RCTs are designed to test inferences about the causal relationships in the study population, but this does not guarantee a justified inference about the causal relationships in the more diverse population in clinical practice. In this essay, I argue that the epistemology of theory testing in trials is more complicated than either Worrall's or Cartwright's accounts suggest. I illustrate this more complex theoretical structure with case-studies in medical theory testing from (1) Alzheimer's research and (2) anti-cancer drugs in personalized medicine.
Hicks, Michael: Solving the Coordination Problem
Discussions of the relationship between physics and the special sciences focus on the question of which sciences reduce to physics and how reduction is to be understood. This focus distracts from crucial features of the relationship between sciences. I identify four features of the relationship between physics and the special sciences and argue that no current view of their relationship adequately explains these four features. I then present a new view and argue that it explains these aspects of the relationship between distinct sciences.
Hicks, Daniel: Genetically Modified Crops and the Underdetermination of Evidence by Epistemology
The underdetermination of theory by evidence has been discussed widely. In this paper, I discuss a distinct kind of underdetermination: the underdetermination of evidence by epistemology. I examine the controversy over the yields of genetically modified [GM] crops, and show that proponents and opponents of GM crops cite, as evidence, two rival sets of claims. By the lights of what I call classical experimental epistemology, one set of claims counts as evidence, but the other does not. However, Nancy Cartwright's evidence for use arrives at exactly the opposite conclusion: the latter counts as evidence, and the former does not.
Hofer-Szabó, Gábor, and Vecsernyés, Péter: Bell's local causality for philosophers
This paper is the philosopher-friendly version of our more technical work ( ). It aims to give a clear-cut definition of Bell's notion of local causality. Having provided a framework, called local physical theory, which integrates probabilistic and spatiotemporal concepts, we formulate the notion of local causality and relate it to other locality and causality concepts. Then we compare Bell's local causality with Reichenbach's Common Cause Principle and relate both to the Bell inequalities. We find a nice parallelism: both local causality and the Common Cause Principle are more general notions than what is captured by the Bell inequalities. Namely, the Bell inequalities can be derived neither from local causality nor from a common cause unless the local physical theory is classical or the common cause is commuting, respectively.
Holman, Bennett: Why Most Sugar Pills are not Placebos
The standard philosophical definition of placebos offered by Grünbaum is incompatible with the experimental role they must play in randomized clinical trials as articulated by Cartwright. I offer a modified account of placebos that respects this role and clarifies why many current medical trials fail to warrant the conclusions they are typically seen as yielding. I then consider recent changes to guidelines for reporting medical trials and show that pessimism about parsing out the causal contribution of "unblinding" is premature. Specifically, using a trial of antidepressants, I show how more sophisticated statistical analyses can parse out the source of such effects and serve as an alternative to placebo control.
Holman, Bennett, and Bruner, Justin: The Problem of Intransigently Biased Agents
In recent years the social nature of scientific inquiry has generated considerable interest. We examine the effect of an epistemically impure agent on a community of honest truth-seekers. Extending a formal model of network epistemology pioneered by Zollman, we conclude that an intransigently biased agent prevents the community from ever converging to the truth. We explore two solutions to this problem, including a novel procedure for endogenous network formation in which agents choose whom to trust. We contend that our model nicely captures aspects of current problems in medical research, and gesture at some morals for medical epistemology more generally.
Holter, Brandon: Rudner's Challenge and the Epistemic Significance of Inductive Risk
Richard Rudner argues that the traditional view of scientific justification, which is supposed to be free of moral and practical influence, cannot explain why different scientific contexts require different standards of sufficient evidence. Traditionalists distinguish the epistemic justification of theories from the practical justification of actions, arguing that Rudner's examples are merely practical. This response is inadequate, however, because it does not secure a value-free epistemology. It neither explains epistemic variation nor asserts that there is one universal epistemic standard of evidential sufficiency. Only the epistemic influence of values, I argue, can account for the evidential standards of science.
Howick, Jeremy, and Worrall, John: What counts as a placebo is relative to a target disorder and therapeutic theory: defending a modified version of Grünbaum's scheme
There is currently no widely accepted definition of 'placebos'. Yet debates about the ethics of placebo use (in routine practice or clinical trials) and the magnitude (if any!) of 'placebo' effects continue to rage. Even if not formally required, a definition of the 'placebo' concept could inform these debates. Grünbaum's 1981/1986 characterization of the 'placebo' has been cited as the best attempt thus far, but has not been widely accepted. Here we argue that criticisms of Grünbaum's scheme are exaggerated, unfounded, or based on misunderstandings. We propose that, with three modifications, Grünbaum's scheme can be defended. Grünbaum argues that all interventions can be classified by a therapeutic theory into 'incidental' and 'characteristic' features. 'Placebos', then, are treatments whose characteristic features do not have effects on the target disorder. For Grünbaum, whether a treatment counts as a placebo is relative to a target disorder and a therapeutic theory. We modify Grünbaum's scheme in the following way. First, we add 'harmful intervention' and 'nocebo' categories; second, we insist that what counts as a 'placebo' (or nonplacebo) be relativized to patients; and third, we issue a clarification about the overall classification of an intervention. We argue that our modified version of Grünbaum's scheme resists published criticisms. Our work warrants a re-examination of current policies on the ethics of placebos in both clinical practice and clinical trials, and a revised empirical estimation of 'placebo' effects, both in the context of clinical trials and clinical practice.
Huggett, Nick, and Vistarini, Tiziana: Deriving General Relativity from String Theory
Weyl symmetry of the classical bosonic string is a mathematical consequence of its Lagrangian. However, quantization breaks it, with profound consequences that we describe here, along with a relatively straightforward review of string theory suitable for philosophers of physics. First, reimposing the symmetry requires that spacetime have 26 dimensions. Moreover, it requires that the background spacetime satisfy the equations of general relativity; it follows in turn that general relativity, hence classical spacetime as we know it, arises from string theory. In conclusion we argue that Weyl symmetry is not an independent postulate, but is required in quantum string theory.
Imbert, Cyrille: Getting the advantages of theft without honest toil: Realism about the complexity of (some) physical systems without realist commitments to their scientific representations
This paper shows that, under certain reasonable conditions, if the investigation of the behavior of a physical system is difficult, no scientific change can make it significantly easier. This impossibility result implies that complexity is then a necessary feature of models which truly represent the target system, and of all models rich enough to capture its behavior; it is therefore an inevitable element of any possible science in which this behavior is accounted for. I finally argue that complexity can then be seen as representing an intrinsic feature of the system itself.
Jansson, Lina: Making Room For Explanatory Fictions Within Realism
There have been several recent challenges to the idea that accurate representation is a necessary condition for successful model explanation. In this paper I look in detail at the case of wavefunction scarring that Bokulich presents. I argue that cases like this one push us towards recognising ways of mediating between what is being explained and what is doing the explaining that depart radically from the traditional accounts. However, the commitment to accurate representation of what is doing the explaining and what is being explained remains unshaken by these examples.
Jantzen, Benjamin: Why Talk about 'Non-Individuals' Is Meaningless
It has been suggested that puzzles in the interpretation of quantum mechanics motivate consideration of non-individuals, entities that are numerically distinct but do not stand in a relation of identity with themselves or non-identity with others. I argue that talk about non-individuals is either meaningless or not about non-individuals. It is meaningless insofar as we attempt to take the foregoing characterization literally. It is meaningful, however, if talk about non-individuals is taken as elliptical for either nominal or predicative use of a special class of mass-terms.
Kao, Molly: Unification in the Old Quantum Theory
In this paper, I consider some of the central developments of old quantum theory between the years 1900 and 1913 and provide an analysis of the nature of the unificatory power of the hypothesis of quantization of energy in a Bayesian framework. I argue that the best way to understand the unification here is in terms of informational relevance: on the assumption of the quantum hypothesis, phenomena that were previously thought to be unrelated turned out to yield information about one another based on agreeing measurements of the numerical value of Planck's constant.
Kennedy, Ashley, and Jebeile, Julie: Idealization in the Process of Model Explanation
In this paper we argue that model explanation is a process in which idealization plays an important and active role. Via examination of a case study from contemporary astrophysics, we show that a) idealizations can, in some cases, make for better model explanations and that b) they do this by creating comparison cases which serve to highlight important explanatory components in the modeled target system. Thus our view is that the role of idealization in scientific model explanation goes beyond both simplification and isolation, which are the roles that have been previously described in the literature.
Keren, Arnon: Science and Informed, Counterfactual, Democratic Consent
On many science-related policy questions, the public is unable to make informed decisions, because of its inability to make use of knowledge and information obtained by scientists. Philip Kitcher and James Fishkin have both suggested therefore that on certain science-related issues, public policy should not be decided upon by actual democratic vote, but should instead conform to the public's Counterfactual Informed Democratic Decision (CIDD). Indeed, this suggestion underlies Kitcher's specification of an ideal of a well-ordered science. The paper argues that this suggestion misconstrues the normative significance of CIDDs. At most, CIDDs might have epistemic significance, but no authority or legitimizing force.
Keskin, Emre: Collective Success of Cosmological Simulations
I argue that cosmological simulations that aim to model the time-evolution of large-scale structure formation in the universe yield, in addition to their numerical results, more than what Wendy Parker calls "adequacy-for-purpose." I claim that it is possible to obtain evidence from simulations confirming or disconfirming the underlying fundamental theories. Especially in the context of cosmology, the current examples of simulations yield such evidence. This shows that simulations in cosmology have a particular fundamental strength in addition to producing numerical output, one which is not apparent in climate modeling.
Kincaid, Harold: Open Empirical and Methodological Issues in the Individualism-Holism Debate
I briefly argue that some issues in the individualism-holism debate have been fairly clearly settled and that others are still plagued by unclarity. The main argument of the paper is that there is a set of clear empirical issues around the holism-individualism debate that are central problems in current social science research. These include questions about when we can be holist and how individualist we can be in social explanation.
King, Martin: Idealization and Explanation in Physics
Many recent accounts of explanation acknowledge the importance of idealization. Alisa Bokulich offers her structural model account of explanation to allow for non-causal idealized models. However, the problem with opening explanation to idealizations is that many models may be counted as explanatory when they are only heuristic. The trajectories of semiclassical mechanics are literally false of quantum systems, but can be surprisingly effective in prediction. This paper takes a look at the difference between heuristic and explanatory power and argues that Bokulich seems to have conflated the two in her argument that semiclassical mechanics can be explanatory of quantum phenomena.
Klein, Colin: Brain Regions as Difference-Makers
It is common to speak of brain regions for particular cognitive functions: regions for reading, for seeing faces, or for doing math. This suggests that brain regions have a single function which they always and uniquely perform. Advances in neuroscience show that this simple picture cannot be correct. I suggest that we ought instead to read 'region for' as designating brain areas that make a difference to the performance of personal-level activities. I discuss how brain regions can have specific and systematic relationships to personal-level activities, and argue that this combination allows neuroimaging to constrain cognitive theories indirectly.
Knuuttila, Tarja Tellervo: Abstract and Concrete: Towards an Artifactual Theory of Fiction in Science
This paper presents a novel artifactual approach to fiction in science that addresses the shared features of models and fictions. It approaches both models and fictions as purposefully created entities, artifacts, which are constructed by making use of culturally established representational tools in their various modes and media. As intersubjectively available artifacts, models and fictions have both abstract and concrete dimensions. Three further features that models and fictions share are discussed: constructedness, partial autonomy, and incompleteness. The proposed account unifies different model types and circumvents some problems of those approaches that consider models as imagined systems.
Kovaka, Karen: Biological Individuality and Scientific Practice
An issue that is largely neglected by the literature on biological individuality concerns the methodological question of what sort of concept of individuality is needed for scientific practice. In this paper, I explore what characterizations of biological individuality are valuable for scientists engaged in empirical research. I consider two claims about the relationship between individuality and scientific practice. Against these two claims, I argue that the scientific value of any particular characterization of individuality lies in its ability to inform and direct future research, rather than its capacity to resolve practical problems.
Kronfeldner, Maria: When specificity trumps proximity
This paper analyzes the epistemic role of specificity and proximity in how scientists deal with causal complexity. I will, first, present evidence for the claim that scientists try to get rid of causal complexity by focusing on rather specific, ideally mono-causal relationships. This is likely to be uncontroversial, but it is rarely noticed explicitly in philosophy of science. I will, second, illustrate that specificity and proximity, understood as explanatory virtues, representation-dependent properties that a good explanation exhibits, can point in different directions. Even when specificity and proximity are explicitly addressed in the philosophical literature, this trade-off is ignored. The main claims that I will defend are that proximity and specificity are instrumental for stability, scope, and parsimony, and that, if in conflict, specificity regularly trumps proximity.
Lam, Vincent: In search of a primitive ontology for quantum field theory
Primitive ontology is a recently much discussed approach to the ontology of quantum theory according to which the theory is ultimately about entities in 3-dimensional space and their temporal evolution. This paper critically discusses the proposed primitive ontologies for quantum field theory in the light of the existence of unitarily inequivalent representations. These primitive ontologies rely either on a Fock space representation or a wave functional representation, which are strictly speaking unambiguously available only for free systems in flat spacetime. As a consequence, it is argued that they constitute only 'effective ontologies' and are hardly satisfying as a fundamental ontology for quantum field theory.
Lamb, Maurice, and Chemero, Anthony: Understanding Dynamical Explanation in Cognitive Science
Neo-mechanists argue that in order for a claim to be an explanation in cognitive science, it must reveal something about the mechanisms of a cognitive system. Recently, they have claimed that J. A. S. Kelso and colleagues have begun to favor mechanistic explanations of neuroscientific phenomena. We argue that this view results from a failure to understand dynamic systems explanations and the general structure of dynamic systems research. Further, we argue that the cited explanations by Kelso and colleagues are not mechanistic explanations and that neo-mechanists have misunderstood Kelso and colleagues' work, which blunts the force of one of the neo-mechanists' arguments.
Lee, Carole: Commensuration Bias in Peer Review
To arrive at their final evaluation of a manuscript or grant proposal, reviewers must convert a submission's strengths and weaknesses for heterogeneous peer review criteria into a single metric of quality or merit. I identify this process of commensuration as the locus for a new kind of peer review bias. Commensuration bias illuminates how the systematic prioritization of some peer review criteria over others permits and facilitates problematic patterns of publication and funding in science. Commensuration bias also foregrounds a range of structural strategies for realigning peer review practices and institutions with the aims of science.
Lehtinen, Aki: Derivational robustness and indirect confirmation
This paper examines the role of derivational robustness in confirmation, arguing that it allocates confirmation to individual assumptions, and thereby increases the degree to which various pieces of evidence indirectly confirm the robust result. If one can show that a result is robust, and that the various individual models used to derive it also have other confirmed results, these other results may indirectly confirm the robust result. Confirmation derives from the fact that data not known to bear on a result are shown to be relevant when it is shown to be robust.
Leonelli, Sabina: What Counts as Scientific Data? A Relational Framework
This paper proposes an account of scientific data that makes sense of recent debates on data-driven research, while also building on the history of data production and use particularly within biology. In this view, 'data' is a relational category applied to research outputs that are taken, at specific moments of inquiry, to provide evidence for knowledge claims of interest to the researchers involved. They do not have truth-value in and of themselves, nor can they be seen as straightforward representations of given phenomena. Rather, they are fungible objects defined by their portability and their prospective usefulness as evidence.
Li, Bihui: Coarse-Graining as a Route to Microscopic Physics: The Renormalization Group in Quantum Field Theory
The renormalization group (RG) has been characterized as merely a coarse-graining procedure that does not illuminate the microscopic content of quantum field theory (QFT), but simply gets us from that content, as given by axiomatic QFT, to macroscopic predictions. I argue that in the constructive field theory tradition, RG techniques do illuminate the microscopic dynamics of a QFT, which are not automatically given by axiomatic QFT. RG techniques in constructive field theory are also rigorous, so one cannot object to their foundational import on grounds of lack of rigor.
Linquist, Stefan: Against Lawton's contingency thesis, or, why the reported demise of community ecology is greatly exaggerated.
Lawton's contingency thesis (CT) states that there are no useful generalizations ("laws") at the level of ecological communities because these systems are especially prone to contingent historical events. I argue that this influential thesis has been grounded on the wrong kind of evidence. CT is best understood in Woodward's (2010) terms as a claim about the instability of certain causal dependencies across different background conditions. A recent distinction between evolution and ecology reveals what an adequate test of Lawton's thesis would look like. To date, CT remains untested. But developments in genome and molecular ecology point in a promising direction.
Lisciandra, Chiara: Robustness Analysis as a Non-empirical Confirmatory Practice
Robustness analysis is a method of testing whether the predictions of a model are the unintended effect of the unrealistic assumptions of the model. As such, the method resembles the analysis, conducted in the experimental sciences, that tests the effect of possible confounders on empirical results. The arguments in support of robustness analysis in non-experimental contexts, however, are often left implicit or are unreflectively imported from the experimental sciences. Throughout this paper, I defend the claim that comparing the results of models based on different tractability assumptions is in principle helpful, but that the method of conducting this sort of analysis is far from straightforward. I indicate some of the difficulties encountered in practice and suggest possible alternatives, focusing on a case study in economic geography.
Lombardi, Olimpia, Fortin, Sebastian, and Vanni, Leonardo: A pluralist view about information
Focusing on Shannon information, this article shows that, even on the basis of the same formalism, there may be different interpretations of the concept of information, and that disagreements may be deep enough to lead to very different conclusions about the informational characterization of certain physical situations. On this basis, a pluralist view is argued for, according to which the concept of information is primarily a formal concept that can adopt different interpretations that are not mutually exclusive, but each useful in a different specific context.
Magnus, P.D.: What the 19th century knew about taxonomy and the 20th forgot
The accepted narrative treats John Stuart Mill's Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill's 19th-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill's two notions offer separate answers to the two questions: natural groups for taxonomy, and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative which treats our natural kinds just as a development of Mill's Kinds.
Malinsky, Daniel: Hypothesis testing, "Dutch Book" arguments, and risk
Dutch Book arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to "classical" statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices (like fixed-level null hypothesis significance testing), partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists "don't bet". This paper examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting "Dutch Book"-type mathematical results with principles actually endorsed by practicing experimentalists.
Marcellesi, Alexandre: External Validity: Is There Still a Problem?
I first propose to distinguish between two kinds of external validity inferences, predictive and explanatory. I then argue that we have a satisfactory answer (formulated in slightly different ways by Cartwright and Hardie on the one hand and by Pearl and Bareinboim on the other) to the question of the conditions under which predictive external validity inferences are valid. If this claim is correct, then it has two immediate consequences. First, some external validity inferences are deductive, contrary to what is commonly assumed. Second, Steel's requirement that an account of external validity inference break what he calls the 'Extrapolator's Circle' is misplaced, at least when it comes to predictive external validity inferences.
Marcoci, Alexandru: Solving the Absentminded Driver Problem Through Deliberation
Piccione and Rubinstein have suggested a sequential decision problem with absentmindedness in which there seem to be two equally compelling, but divergent, routes to calculating the expected utility of an agent's actions. The first route corresponds to an ex ante calculation, the second to an ex interim calculation. Piccione and Rubinstein conclude that since the verdicts of the two calculations disagree, they lead to an inconsistency in rational decision theory. In this paper I first argue that the ex ante route to calculating expected utility is not available in decision problems such as that introduced by Piccione and Rubinstein. The second part of the paper explores the ex interim expected utility formula. This has been largely neglected in the literature and is always presented as offering the agent only a parametric optimal strategy in terms of his initial belief in being at the first decision node. I will argue that if we construe agents as maximising the ex interim expected utility in steps through a deliberative dynamics, then this formula can make a precise recommendation with regard to the driver's optimal strategy irrespective of his initial beliefs.
Martini, Carlo: The limits of trust in interdisciplinary science
In this paper I argue that lack of trust networks among researchers employed in interdisciplinary collaborations is potentially hampering successful interdisciplinary research. I use Hardwig's concept of epistemic dependence in order to explore the problem theoretically, and MacLeod and Nersessian's ethnographic studies in order to illustrate the problem from the viewpoint of concrete interdisciplinary science practice. I suggest that some possible solutions to the problem are in need of further exploration.
Matthews, Lucas: Embedded Mechanisms and Phylogenetics
The role and value of mechanisms in the process-oriented life sciences is quite clear (Machamer, Darden, and Craver 2000; Bechtel and Richardson 1997/2010; Darden 2006; Craver 2007; Bechtel 2008). Demonstrating the role and value of mechanisms in other domains of scientific investigation, however, remains an important challenge of scope for the new mechanistic account of explanation. This paper helps answer that challenge by demonstrating one valuable role mechanisms play in the pattern-oriented science of phylogenetics. Using the Transition-Transversion (ti/tv) rate parameter as an example, this paper argues that models embedded with mechanisms produce stronger phylogenetic tree hypotheses, as measured by Maximum Likelihood (ML) logL values. Two important implications for the new mechanistic account of explanation are considered.
Mayo-Wilson, Conor: Structural Chaos
Philosophers often distinguish between parameter error and model error. Frigg et al. (2014) argue that the distinction is important because although there are methods for making predictions given parameter error and chaos, there are no methods for dealing with model error and "structural chaos." However, Frigg et al. (2014) neither define "structural chaos" nor explain the relationship between it and chaos (simpliciter). I propose a definition of "structural chaos", and I explain two new theorems that show that if a set of models contains a chaotic function, then the set is structurally chaotic. Finally, I discuss the relationship between my results and structural stability.
McCaffrey, Joseph: Neural Multi-Functionality and Mechanistic Role Functions
Multi-functionality presents new challenges for mapping the brain's functional topography. Cathy Price and Karl Friston argue that brain areas have many functions at one level of description and a single function at another. Thus, researchers need to develop new cognitive ontologies to obtain robust functional mappings. Colin Klein counters that this strategy will produce uninformative mappings. Therefore, researchers should relativize functional mappings to particular contexts. Using Carl Craver's concept of "mechanistic role functions" to illustrate that mechanistic components can be multi-functional in different ways, I argue that both accounts mistakenly construe brain areas as multi-functional in a preferred way.
Meketa, Irina: Experiment and Animal Minds: Why Statistical Choices Matter
Comparative cognition is the interdisciplinary study of nonhuman animal cognition. It has been criticized for systematically underattributing sophisticated cognition to nonhuman animals, a problem that I refer to as the underattribution bias. In this paper, I show that philosophical treatments of this bias at the experimental level have emphasized one feature of the experimental-statistical methodology (the preferential guarding against false positives over false negatives) at the expense of neglecting another feature (the default, or null, hypothesis). In order to eliminate this bias, I propose a reformulation of the standard statistical framework in comparative cognition. My proposal identifies and removes a problematic reliance on the value of parsimony in the calibration of the null hypothesis, replacing it with relevant empirical and theoretical information. In so doing, I illustrate how epistemic and non-epistemic values can covertly enter scientific methodology through features of statistical models, potentially biasing the products of scientific research. Broadly construed, this paper calls for increased philosophical attention to the experimental methodology and statistical choices.
Miller, Michael: Haag's Theorem and Successful Applications of Scattering Theory
Earman and Fraser (2006) clarify how it is possible to give mathematically consistent calculations in scattering theory despite Haag's theorem. However, their analysis does not fully address the worry raised by the result. In particular, I argue that their approach fails to be a complete explanation of why Haag's theorem does not undermine claims about the empirical adequacy of particular quantum field theories. I then show that such empirical adequacy claims are protected from Haag's result by the techniques that are required to obtain theoretical predictions for realistic experimental observables. I conclude by advocating that Haag's theorem should be understood as providing information about the nature of the relation between the perturbative expansion and non-perturbative characterizations of quantum field theories.
Miller, Boaz: What is Hacking's Argument for Entity Realism Anyway?
According to Hacking's Entity Realism, unobservable entities that scientists carefully manipulate to study other phenomena are real. Although Hacking presents his case in an intuitive, attractive, and persuasive way, his argument remains elusive. I present five possible readings of Hacking's argument: a no-miracle argument, an indispensability argument, a transcendental argument, a Vichian argument, and a non-argument. I reconstruct Hacking's argument according to each reading, and review their prima facie strengths and weaknesses.
Miyake, Teru: Reference Models: Using Models to Turn Data into Evidence
Reference models of the earth's interior play an important role in the acquisition of knowledge about the earth's interior and the earth as a whole. Such models are used as a sort of standard reference against which data are compared. I argue that the use of reference models merits more attention than it has gotten so far in the literature on models, for it is an example of a method of doing science that has a long and significant history, and a study of reference models could increase our understanding of this methodology.
Muntean, Ioan: Genetic algorithms in scientific discovery: a new epistemology?
Based on a concrete case of scientific discovery presented by Schmidt and Lipson (2009), I argue that the evolutionary computation used has important consequences for computational epistemology and for the philosophy of simulations in science. The genetic algorithms illustrate an "upward epistemology" from data to theories and are relevant in the context of scientific discovery. I explore the epistemological richness and novelty of this specific case study and extend my analysis to the more general framework of computer-aided scientific discovery. Evolutionary computation has a reassuring epistemic status compared to previous attempts to use computers in scientific discovery and can build a bridge between AI and the practice of the biological sciences.
Nathan, Marco, and Love, Alan: The Idealization of Causation in Mechanistic Explanation
Causal relations among components and activities in mechanisms are intentionally misrepresented in the mechanistic explanations found routinely in the life sciences. Since these causal relations are the source of the explanatory power ascribed to descriptions of mechanisms, and advocates of mechanistic explanation explicitly recognize the importance of an accurate representation of actual causal relations, the reliance on these idealizations in explanatory practice conflicts with the stated rationale for mechanistic explanations. We argue that these idealizations signal an overlooked feature of reasoning in molecular and cell biology--mechanistic explanations do not occur in isolation--and suggest that explanatory practices within the mechanistic tradition share commonalities with the model-based science prevalent in population biology.
Nguyen, James: Why data models do not supply the target structure required by the structuralist account of scientific representation
Van Fraassen (2008) supplies an intriguing argument for the claim that data models provide the target-end structure required by structuralist accounts of scientific representation. This paper is a response to his argument. I first outline a variety of structuralist accounts, before turning to the question of target-end structure. I claim that if data models are invoked as supplying such structures, then it is unclear how scientific models represent physical targets. I argue that van Fraassen's answer to this question - that, pragmatically, for an individual scientist, representing a data model just is representing a target - is unsuccessful.
North, Jill: The Structure of Spacetime: a New Approach to the Spacetime Ontology Debate
I propose that we understand the debate about spacetime ontology as a debate about whether spatiotemporal structure is fundamental. This yields a novel argument for substantivalism. Even so, that conclusion can be overridden by future developments in physics. I conclude that the debate about spacetime ontology, properly understood, is a substantive dispute, which the substantivalist is currently winning.
Northcott, Robert: Opinion polling and election predictions
Election prediction by means of opinion polling is a rare empirical success story for social science, but one not previously considered by philosophers. I examine the details of a prominent case and draw two lessons of more general interest: 1) Methodology over metaphysics. Traditional metaphysical criteria were not a useful guide to whether successful prediction would be possible; instead, the crucial thing was selecting an effective methodology. 2) Which methodology? Success required sophisticated use of case-specific evidence from opinion polling. The pursuit of explanations via general theory or causal mechanisms, by contrast, turned out to be precisely the wrong path - contrary to much recent philosophy of social science.
Norton, Joshua: Weak Discernibility and Relations Between Quanta
Some authors (Muller and Saunders 2008; Huggett and Norton 2013) have attempted to defend Leibniz's identity of indiscernibles through weak discernibility. The idea is that if there is a symmetric, non-reflexive physical relation which holds between two particles, then those particles cannot be identical. In this paper I focus only on Muller and Saunders' account and argue that the means by which they achieve weak discernibility is not a physical observable but an alternate mathematical construction which is both unorthodox and incomplete. Muller and Saunders build a map from slot labels to a set of observables and out of this map construct a weakly discerning formal relation. What Muller and Saunders do not provide is a worked-out account of how such maps pick out physical relations between particles.
Nyrup, Rune: How Explanatory Reasoning Justifies Pursuit: A Peircean View of IBE
This paper defends an account of explanatory reasoning generally, and inference to the best explanation in particular, according to which it first and foremost justifies pursuing hypotheses rather than accepting them as true. This side-steps the problem of why better explanations should be more likely to be true. Furthermore, I argue that my account faces no analogous problems. I propose an account of justification for pursuit and show how this provides a simple and straightforward connection between explanatoriness and justification for pursuit.
O'Neill, Elizabeth: Which causes of moral beliefs matter?
I argue that the distal causes of moral beliefs, such as evolution, are only relevant for assessing the epistemic status of moral beliefs in cases where we cannot determine whether a given proximal cause is reliable just by looking at the properties of that proximal cause. This means that any investigation into the epistemic status of moral beliefs given their causes should start with proximal causes - not with evolution. I discuss two proximal psychological causes of moral beliefs - disgust and sympathy - to demonstrate the feasibility of drawing epistemic conclusions from an examination of proximal causes alone.
Overton, James: Explanation in Science
Practices of scientific explanation are diverse, and so are philosophical accounts of scientific explanation. In this paper I propose a general philosophical account designed to handle this diversity. Using a large set of small case studies drawn from the journal Science, I argue for a single explain-relation that builds on different counterfactual-supporting "core" relations in different cases. Five familiar categories help us to classify scientific explanations, discover their form, and better understand the core relations. I close by considering how existing philosophical accounts fit with my evidence base and my general account.
Park, Ilho: Conditionalization and Credal Conservatism
This paper is intended to show an epistemic trait of the Bayesian updating rule. In particular, I will show in this paper that Simple/Jeffrey/Adams Conditionalization is equivalent to what I call Credal Conservatism, which says that when we undergo a course of experience, our credences irrelevant to the experience should remain the same.
Pashby, Thomas: Quantum Mechanics for Event Ontologists
In an event ontology, matter is `made up of' events. This provides a distinctive foil to the standard view of a quantum state in terms of properties possessed by a system. Here I provide an argument against the standard view and suggest instead a way to conceive of quantum mechanics in terms of probabilities for the occurrence of events localized in space and time. To that end I construct an appropriate probability space for these events and give a way to calculate them as conditional probabilities. I suggest that these probabilities can be usefully thought of as Lewisian objective chances.
Pence, Charles, and Ramsey, Grant: Is Organismic Fitness at the Basis of Evolutionary Theory?
Fitness is a central theoretical concept in evolutionary theory. Despite its importance, much debate has occurred over how to conceptualize and formalize fitness. One point of debate concerns the roles of organismic and trait fitness. In a recent addition to this debate, Elliott Sober argues that trait fitness is the central fitness concept, and that organismic fitness is of little value. In this paper, by contrast, we argue that it is organismic fitness that lies at the basis of both the conceptual role of fitness and its role as a measure of evolutionary dynamics.
Perry, Zee: Intensive and Extensive Quantities
Quantities are properties and relations which exhibit "quantitative structure". For physical quantities, this structure can impact the non-quantitative world in different ways. In this paper I introduce and motivate a novel distinction between quantities based on the way their quantitative structure constrains the possible mereological structure of their instances. I borrow the terms 'extensive' and 'intensive' for these categories, though my use is substantially revisionary. I present and motivate this distinction using two case studies of successful physical measurements (of mass and length, respectively). I argue that the best explanation for the success of the length measurement requires us to adopt my notion of extensiveness, which is distinct from what's sometimes called "additivity". I further discuss this distinction and its consequences, sketching an application of extensiveness for the project of producing a non-mathematical and non-metrical reductive metaphysics of quantity.
Pietsch, Wolfgang: Aspects of theory-ladenness in data-intensive science
Recent claims, mainly from computer scientists, concerning a largely automated and model-free data-intensive science have been countered by critical reactions from a number of philosophers of science. The debate suffers from a lack of detail in two respects, regarding (i) the actual methods used in data-intensive science and (ii) the specific ways in which these methods presuppose theoretical assumptions. I examine two widely-used algorithms, classificatory trees and non-parametric regression, and argue that these are theory-laden in an external sense, regarding the framing of research questions, but not in an internal sense concerning the causal structure of the examined phenomenon. With respect to the novelty of data-intensive science, I draw an analogy to exploratory as opposed to theory-directed experimentation.
Pincock, Chris: Newton, Laplace and Salmon on Explaining the Tides
Salmon cites Newton's explanation of the tides in support of a causal account of scientific explanation. In this paper I reconsider the details of how Newton and his successors actually succeeded in explaining several key features of the tides. It turns out that these explanations depend on elements that are not easily interpreted in causal terms. Often an explanation is obtained even though there is a considerable gap between what the explanation says and the underlying causes of the phenomenon being explained. More work is needed to determine the admissible ways in which this gap can be filled. I use the explanations offered after Newton to indicate two different ways that non-causal factors can be significant for scientific explanation. In Newton's equilibrium explanation, only a few special features of the tides can be explained. A later explanation deploys a kind of harmonic analysis to provide an informative classification of the tides at different locations. I consider the options for making sense of these explanations.
Pitts, James Brian: Historical and Philosophical Insights about General Relativity and Space-time from Particle Physics
Historians recently rehabilitated Einstein's "physical strategy" for GR. Independently, particle physicists similarly re-derived Einstein's equations for a massless spin 2 field. But why not a light _massive_ spin 2, like Neumann-Seeliger? Massive gravities are bimetric, supporting conventionalism over geometric empiricism. Nonuniqueness lets one explain geometry via field equations, but not conversely. Massive gravity would have blocked Schlick's critique of Kant's synthetic a priori. Finally c. 1970 a dilemma appeared: massive spin 2 gravity was unstable or empirically falsified. GR was vindicated, but later and on better grounds. Recently dark energy and theoretical progress have made massive spin 2 gravity viable.
Povich, Mark: Mechanisms and Model-Based fMRI
Mechanistic explanations satisfy widely held norms of explanation: the ability to answer counterfactual questions and allowance for manipulation. A currently debated issue is whether any non-mechanistic explanations can satisfy these explanatory norms. Weiskopf (2011) argues that the models of object recognition and categorization, JIM, SUSTAIN, and ALCOVE, are not mechanistic, yet satisfy these norms of explanation. In this paper I will argue that these models are sketches of mechanisms. My argument will make use of model-based fMRI, a novel neuroimaging approach whose significance for current debates on psychological models and mechanistic explanation has yet to be explored.
Powers, John: Atrazine Research and Criteria of Characterizational Adequacy
The effects of atrazine on amphibians have been the subject of much research, requiring the input of many disciplines. Theory reductive accounts of the relationships among scientific disciplines do not seem to characterize well the ways that diverse disciplines interact in the context of addressing such complex scientific problems. "Problem agenda" accounts of localized scientific integrations seem to fare better. However, problem agenda accounts have tended to focus rather narrowly on scientific explanation. Attention to the details of atrazine research reveals that characterization deserves the sort of attention that problem agenda theorists have thus far reserved for explanation.
Richardson, Sarah: The Concept of Gender Bias in Science
This paper presents a definition and explication of "gender bias in science." Charges of gender bias in science are frequently misunderstood. When persuaded that a particular case of gender bias is also a case of bias by the standards internal to the field, scientific communities have often been responsive to such charges. However, many scientific researchers and research communities see gender bias in science as an extrascientific political, moral, or ethical problem rather than an epistemological failure. Among its merits, the proposed "contextualist-attributional" account of the concept of gender bias in science helps to explain why some scientific communities find such charges incoherent or irrelevant and points to more strategic approaches to engagement with scientific communities by feminist science analysts. This is shown by analysis of a case study of scientific response to a recent charge of gender bias in neuroendocrinology.
Rinard, Susanna: Imprecise Probability and Higher Order Vagueness
The tripartite model of belief (belief / disbelief / suspension of judgment) is accurate, but unspecific. The orthodox Bayesian (single-function) model is specific, but inaccurate (because we don't always have precise credences). The set of functions model is an improvement, but faces a problem analogous to higher order vagueness. Solving it, I argue, requires endorsing Insurmountable Unclassifiability, with a surprising consequence: no model can be both fully accurate and maximally specific. What we can do, though, is improve on existing models. I present a new model that is more specific than the tripartite model, and, unlike existing Bayesian models, perfectly accurate.
Robus, Olin: Does Science License Metaphysics?
Naturalized metaphysicians defend the thesis that science licenses metaphysics, such that only metaphysical results that are based on the best science are to be considered legitimate. This view is problematic, due to the fact that the reasons they identify for such license are apparently self-defeating. Chakravartty (2013) defends a revised approach to understanding the licensing relation. I argue that the proposed response is a step forward on behalf of naturalizing metaphysics, but still does not take seriously the contention that science involves, inextricably, a contribution from the a priori. I conclude by considering what options the aspiring naturalized metaphysician is left with.
Rohwer, Yasha: Iterated Theory of Mind and the Evolution of Human Intelligence
Did human intelligence evolve via an arms-race style competition between conspecifics (Flinn et al. 2005) or through collective action (Sterelny 2007, 2012)? I argue that to critically compare these two models it is necessary to focus on the nature of the particular cognitive capacities predicted by the unique selective pressure proposed by each model. Focusing on theory of mind, I conclude that the competitive model makes predictions unsupported by empirical evidence and that the cooperative model better accounts for our current theory of mind. This result has interesting implications for the evolution of prosocial behavior.
Romero, Felipe: Infectious Falsehoods
Published results influence subsequent research. False positives have a detrimental influence in the sense that they mislead scientists who rely on them. In some cases, false positives inspire large research programs, and have a systematic influence in the community. I call this phenomenon epistemic infection. In this paper (Section 1) I characterize the phenomenon, and (Section 2) study conditions that increase the risk of contagion in scientific communities. Then (Section 3) I argue that infections are an effect of contingent and defective incentive structures of contemporary science. As a case study, I discuss the recent controversies in the social priming research program in social psychology.
Rosaler, Joshua: Is de Broglie-Bohm Theory Specially Equipped to Recover Classical Behavior?
Supporters of de Broglie-Bohm (dBB) theory argue that because the theory, like classical mechanics, concerns the motions of point particles in 3D space, it is specially suited to recover classical behavior. I offer a novel account of classicality in dBB theory, if only to show that such an account falls out almost trivially from results developed in the context of decoherence theory. I then argue that this undermines any special claim that dBB theory is purported to have on the unification of the quantum and classical realms.
Roush, Sherrilyn: The Epistemic Superiority of Experiment to Simulation
This paper defends the naive thesis that the method of experiment is epistemically superior to simulation, other things equal, a view that has been resisted by some philosophers writing about simulation. There are three challenges in defending this thesis. One is to say how "other things equal" can be defined, another to identify and explain the source of the epistemic advantage of experiment in a hypothetical comparison so defined. Finally, I must explain why this comparison matters, since it is not the type of situation scientists can expect often to face when they choose an experiment or a computer simulation.
Rubin, Hannah: The Phenotypic Gambit
The 'phenotypic gambit,' the assumption that we can ignore genetics and look at the fitness of phenotypes to determine the expected evolutionary dynamics of a population, is often used in evolutionary game theory. However, as this paper will show, an overlooked genotype to phenotype map can qualitatively affect dynamical outcomes in ways the phenotypic approach cannot predict or explain.
Runhardt, Rosa: Evidence for causal mechanisms in social science: recommendations from Woodward's manipulability theory of causation
In a backlash against the prevalence of statistical methods, social scientists have recently focused more on studying causal mechanisms. They increasingly rely on a technique called process-tracing, which involves contrasting the observable implications of several alternative mechanisms. Problematically, process-tracers do not commit to a fundamental notion of causation, and therefore arguably they cannot discern between mere correlation between the links of their purported mechanisms and genuine causation. In this paper, I argue that committing to Woodward's interventionist notion of causation would solve this problem: process-tracers should take into account evidence for possible interventions on the mechanisms they study.
Ruphy, Stephanie: Which forms of limitation of the autonomy of science are epistemologically acceptable (and politically desirable)?
This paper will investigate whether constraints on possible forms of limitation of the autonomy of science can be derived from epistemological considerations. Proponents of the autonomy of science often link autonomy with virtues such as epistemic fecundity, capacity to generate technological innovations and capacity to produce neutral expertise. I will critically discuss several important epistemological assumptions underlying these links, in particular the "unpredictability argument". This will allow me to spell out conditions to be met by any form of limitation of the autonomy of science to be epistemologically acceptable. These conditions can then be used as a framework to evaluate possible or existing forms of limitations of the autonomy of science. It will turn out that the option of direct public participation (a lively option in philosophy of science today) might not be the best way to go to democratize the setting of research agendas.
Rusanen, Anna-Mari: On Relevance
Relevance is one of the key concepts in literature on modeling. However, the notion of relevance is often left unspecified. In this paper a sketch for a general notion of relevance is outlined. In addition, it will be argued that there are at least two interpretations of relevance available in the literature on modeling. The first one is ontic, and the second one is pragmatic.
Sample, Matthew: Stanford's Unconceived Alternatives from the Perspective of Epistemic Obligations
Kyle Stanford's reformulation of the problem of underdetermination has the potential to highlight the epistemic obligations of scientists. Stanford, however, has presented the phenomenon of unconceived alternatives as a problem for realists, despite his critics' insistence that we have contextual explanations for scientists' inability to conceive of their successors' theories. I propose that the concept of "role oughts," as discussed by Richard Feldman, can help pacify Stanford's critics and reveal the broader relevance of the "new induction." The possibility of unconceived alternatives pushes us to question our contemporary expectation for scientists to reason outside of their historical moment.
Savitt, Steven: I ♥ ♦s
Richard Arthur (2006) and Steven Savitt (2009) proposed that the present in (time-oriented) Minkowski spacetime should be thought of as a small causal diamond. That is, given two timelike separated events p and q, with p earlier than q, they suggest that the present (relative to those two events) is the set I+(p) ∩ I-(q). Mauro Dorato (2011) presents three criticisms of this proposal. I rebut all three and then offer two more plausible criticisms of the Arthur/Savitt proposal. I argue that these criticisms also fail.
Sebens, Charles: Killer Collapse: Empirically Probing the Philosophically Unsatisfactory Region of GRW
GRW theory offers precise laws for the collapse of the wave function. These collapses are characterized by two new constants, λ and σ. Recent work has put experimental upper bounds on the collapse rate, λ. Lower bounds on λ have been more controversial since GRW begins to take on a many-worlds character for small values of λ. Here I examine GRW in this odd region of parameter space where collapse events act as natural disasters that destroy branches of the wave function along with their occupants. Our continued survival provides evidence that we don't live in a universe like that.
Shavit, Ayelet: You Can't Go Home Again - or Can You? 'Replication' Indeterminacy and 'Location' Incommensurability in Three Biological Re-Surveys
Reproducing empirical results and repeating experimental processes is fundamental to science, but is of grave concern to scientists. Revisiting the same location is necessary for tracking biological processes, yet I argue that 'location' and 'replication' contain a basic ambiguity. The analysis of the practical meanings of 'replication' and 'location' will disentangle incommensurability from its common conflation with empirical equivalence, underdetermination, and indeterminacy of reference. In particular, I argue that three biodiversity re-surveys, conducted by the research institutions of Harvard, Berkeley, and Hamaarag, all reveal incommensurability without indeterminacy at the smallest spatial scale, and indeterminacy without incommensurability at higher scales.
Shen, Jian: Gradual Revelation: A Signaling Model
Most models discussed in the sender-receiver literature inspired by Lewis and Skyrms assume that the sender has unchanging knowledge of the world. This paper explores the consequence of dropping that assumption, and proposes a gradual revelation model in which the sender finds out about the world little by little. This model is then used to capture the distinction between indicative and imperative contents, which, I argue, it does better than traditional models.
Sheredos, Benjamin: Ontic accounts of explanation cannot support norms of generality and systematicity
Recent attempts to unify the ontic and epistemic approaches to (mechanistic) explanation propose that we simply pursue epistemic and ontic norms in tandem. I aim to upset this armistice. There are epistemic norms of attaining general/systematic explanations which we cannot fulfill if we are constrained always to fulfill ontic norms of explanation. Put another way, (some) epistemic norms are autonomous of and in tension with ontic norms. Put a third way, radically distinct forms of explanation are required to fulfill ontic and (some) epistemic norms. A result is that some central arguments put forth by ontic theorists against epistemic theorists are revealed as not only question-begging, but ultimately self-defeating.
Skillings, Derek: Mechanistic Explanation of Biological Processes
Biological processes are often explained by identifying the underlying mechanisms that generate a phenomenon of interest. I characterize a basic account of mechanistic explanation and then present three challenges to this account, illustrated with examples from molecular biology. The basic mechanistic account 1) is insufficient for explaining non-sequential and non-linear dynamic processes, 2) cannot accommodate the inherently stochastic nature of many biological mechanisms, and 3) fails to give a proper framework for analyzing organization. I suggest that biological processes are best approached as a multi-dimensional gradient--with some processes being paradigmatic cases of mechanisms and some processes being marginal cases.
Slater, Matthew: In Favor of the (Possible) Reality of Race
Disputes over the reality of race - in particular, over whether races are natural kinds - have often foundered on an inability to make clear sense of the way in which race can be biologically real and yet (also) socially constructed. I sketch an account of natural kinds that gives specific content to this possibility and argue for the tenability of treating races as genuine kinds on this account, even if they fail to be "biologically real".
Stanford, P. Kyle: Catastrophism, Uniformitarianism, and a Realism Dispute that Makes a Difference
In support of Stanford's problem of unconceived alternatives and against his critics, I argue that contemporary scientific communities may well be no better and are perhaps even substantially worse than their historical predecessors as vehicles for discovering, developing, and exploring fundamentally distinct alternatives to existing scientific theories. I then argue that even recognizing the need to confront this question invites us to reconceive what is most fundamentally at issue in the debate concerning scientific realism in a way that ensures that the position we take in that debate actually makes some difference to how we conduct the scientific enterprise itself.
Stevens, Syman: The Dynamical Approach as Practical Geometry
This essay introduces Harvey Brown and Oliver Pooley's 'dynamical approach' to special relativity and argues that it is best construed as a relationalist form of Einstein's 'practical geometry', according to which Minkowski geometrical structure supervenes upon the symmetries of the best-systems dynamical laws for a material world with primitive topological or differentiable structure. This construal of the dynamical approach is shown to be compatible with the related chapters of Brown's text, as well as recent descriptions of the dynamical approach by Pooley and others.
Stoeltzner, Michael: On Virtues and Vices of Axiomatic Quantum Field Theory
I analyze the recent debate between Fraser and Wallace over whether axiomatic approaches to quantum field theory (AQFT) represent the preferred starting point for philosophers or whether conventional, Lagrangian-based quantum field theory (CQFT) scores better, not least because this research program provides testable models for particle physics while AQFT only delivers general insights. I argue that Fraser's underdetermination argument does not provide a good defense for AQFT and compare both research programs within a suitably modified Lakatosian context. This, however, requires a more opportunistic attitude towards the axiomatic method that is in line with the actual developments within mathematical physics.
Tabb, Kathryn: After Psychiatric Kinds: Diagnosis Specificity and Progress in Psychiatric Research
The failure of psychiatry to validate its constructs is often attributed to the use of operational rather than etiological diagnostic criteria. Recently, however, the National Institute of Mental Health has proposed a new diagnosis: the problem is psychiatry's focus on validating psychiatric kinds rather than domains of functioning implicated in psychopathology. I support this view by arguing that behind psychiatry's failure to validate its nosology is not a metaphysical assumption but an epistemological one, which I call diagnosis specificity: the assumption that the diagnostic act correctly identifies the patient's condition as belonging to a homogeneous type that allows for ampliative inferences.
Theurer, Kari: More Information, Better Explanations: Reductionism in Biological Psychiatry
I introduce a model of reduction in neuroscience inspired by Kemeny and Oppenheim's (1956) account, which was quickly dismissed as being too weak to support actual reductions. This view has been unjustifiably overlooked. I sketch a mechanistic model of Kemeny and Oppenheim reduction, on which reduction requires demonstrating that the reducing mechanism has an explanatory scope at least as great as the phenomenon to be reduced. In order to demonstrate that this model is already hard at work in current neuroscience, I draw upon recent reductionistic research in biological psychiatry concerning the etiology of schizophrenia.
Tulodziecki, Dana: Realist continuity, approximate truth, and the pessimistic meta-induction
The pessimistic meta-induction (PMI) seeks to undercut the realist's alleged connection between success and (approximate) truth by arguing that highly successful yet wildly false theories are typical of the history of science. Realist responses to the PMI try to rehabilitate this connection by stressing various kinds of continuity between earlier and later theories. Here, I argue that these extant realist responses are inadequate, by showing, through the example of the nineteenth-century miasma theory of disease, that there are cases of genuinely successful yet false theories that do not exhibit any of the required realist continuities.
Vassend, Olav: Confirmation Measures and Sensitivity
Stevens (1946) draws a useful distinction between ordinal scales, interval scales, and ratio scales. Most recent discussions of confirmation measures have proceeded on the ordinal level of analysis. In this paper, I give a more quantitative analysis. In particular, I show that the requirement that our desired confirmation measure be at least an interval measure naturally yields necessary conditions that jointly entail the log-likelihood measure. Thus I conclude that the log-likelihood measure is the only good candidate interval measure.
Vorms, Marion: Spatial representations in science: towards a typology
We have an intuitive idea of the distinction between "images" and linguistic representations. Together with this intuitive distinction comes the (intuitive) claim that images are less "abstract", by enabling us to "visualize" objects, relations, or processes. Spelling out this distinction and the associated claims regarding abstractness and visualization, however, is far from a trivial task. Acknowledging that non-linguistic representations play an essential role in scientific theorizing, I aim to contribute to this enterprise by distinguishing two broad types of spatial representations, and highlighting the different sorts of theorizing and abstraction processes associated with these different types.
Wagenknecht, Susann: A double notion of knowing and knowledge
This paper addresses the conceptual gap between knowledge notions in general epistemology and philosophy of science. It suggests a double notion of individual knowing (as a form of believing) and collaboratively created knowledge (as discursive content), and highlights the dis-/continuity of knowing and knowledge thus understood. Thereby, it contributes to social epistemology's discussion of collective knowledge and generates novel questions in the analysis of collaborative scientific practice.
Walsh, Kirsten: Phenomena in Newton's Principia
Newton described his Principia as a work of 'experimental philosophy', where theories were deduced from phenomena. He introduced six 'phenomena': propositions describing patterns of motion, generalised from astronomical observations. However, these don't fit Newton's contemporaries' definitions of 'phenomenon'. Drawing on Bogen and Woodward's (1988) distinction between data, phenomena and theories, I argue that Newton's 'phenomena' were explanatory targets drawn from raw data. Viewed in this way, the phenomena of the Principia and the experiments from the Opticks were different routes to the same end: isolating explananda.
Walsh, Elena: Top-Down 'Causation' and Developmental Explanation
In recent years developmental psychologists have begun to describe intentional attitudes (such as emotions) as 'emergent properties' of complex systems. Emergent properties are often thought to exert a 'top-down' constraint or causal influence on their physical realisers. This paper argues that the notion of top-down constraint plays an essential role in developmental explanation. It also employs this notion to suggest that folk psychology and neuroscience should be viewed as dependent and complementary explanatory modes.
Werndl, Charlotte, and Frigg, Roman: Rethinking Boltzmannian Equilibrium
Boltzmannian statistical mechanics partitions the phase space into macroregions, and the largest of these is identified with equilibrium. What justifies this identification? Common answers focus on Boltzmann's combinatorial argument, the Maxwell-Boltzmann distribution, and maximum entropy considerations. We argue that they fail and present a new answer. We characterise equilibrium as the macrostate in which the system spends most of its time and prove a new theorem establishing that equilibrium thus defined corresponds to the largest macroregion. Our derivation is completely general in that it does not rely on assumptions about the system's dynamics or internal interactions.
Wiegman, Isaac: Evidential Criteria of Homology: Adjudicating Competing Homology Claims
While the homology concept has taken on importance in thinking about the nature of psychological kinds (e.g. Griffiths 1997), no one has shown how comparative psychological and behavioral evidence can distinguish between competing homology claims. I adapt the operational criteria of homology to accomplish this. I consider two competing homology claims that compare human anger with putative aggression systems of nonhuman animals, and demonstrate the effectiveness of the criteria in adjudicating between these claims.
Woody, Andrea: Re-orienting Discussions of Scientific Explanation: A Functional Perspective
Most literature on scientific explanation presumes proper analysis rests at the level of individual explanations, but there are other options. Shifting focus from explanations, as achievements, toward explaining, as a coordinated activity of communities, the functional perspective aims to reveal how the practice of explanatory discourse functions within scientific communities. Here I outline the functional perspective and argue that it reveals an important methodological role for explanation in science, which consequently provides resources for developing more adequate responses to traditional concerns, including how best to conceive of explanatory power as a theoretical virtue. Consideration of the ideal gas law grounds the discussion.
Wright, Jake: The Moral of the Story: What Does the Evolutionary Contingency Thesis Teach Us About Biological Laws?
Beatty's Evolutionary Contingency Thesis has generated a number of responses. I examine three (Brandon, Sober, and Mitchell) and present a synthesized response to Beatty based primarily on Mitchell. Like Mitchell, I argue Beatty's thesis shows we should view laws pragmatically. Contra Mitchell, I argue this pragmatic view must maintain distinctions between types of lawful generalization. My argument responds to Mitchell's concern that disciplines seeking naturally necessary generalizations will be privileged over disciplines that do not. Because different types of generalization aim at different goals, we will favor naturally necessary generalizations only if they can achieve goals no other generalization could achieve.
Wuethrich, Adrian: The Higgs Discovery as a Diagnostic Causal Inference
I describe how the discovery of elementary particles, such as the Higgs boson, is a case of causal inference. The case illustrates how Lipton's challenge of inferred differences can be met even in a paradigm case of "unobservable" causes whose mere existence has to be established. The view of the discovery of elementary particles as a causal inference has several likely consequences, most of them attractive, concerning the role of theory and the problems of selection bias and unconceived alternatives.
Zautra, Nicholas: Embodiment, Interaction, and Experience: Toward a Comprehensive Model in Addiction Science
Current models attempt to specify how addiction is developed, how it is maintained, and how people can recover from it. In this paper, I explain why none of these theories can be accepted as a comprehensive model of addiction. I argue that current models fail to account for differences in embodiment, interaction processes, and the experience of addiction. To redress these limiting factors, I design a proposal for an enactive account of addiction that complements the enactive model of autism proposed by Hanne De Jaegher.
Zednik, Carlos: Are Systems Neuroscience Explanations Mechanistic?
Whereas most branches of neuroscience are thought to provide mechanistic explanations, systems neuroscience is not. Two reasons are typically cited in support of this conclusion. First, systems neuroscientists rarely, if ever, rely on the dual strategies of decomposition and localization. Second, they typically emphasize organizational properties over the properties of individual components. In this paper, I argue that neither reason is conclusive: researchers might rely on alternative strategies for mechanism discovery, and focusing on organization is often appropriate and consistent with the norms of mechanistic explanation. Thus, many explanations in systems neuroscience can also be viewed as mechanistic explanations.
Zhang, Jiji: Likelihood and Consilience: On Forster's Counterexamples to the Likelihood Theory of Evidence
Forster presented some interesting examples having to do with distinguishing the direction of causal influence between two variables, which he argued are counterexamples to the likelihood theory of evidence (LTE). In this paper, we refute Forster's arguments by carefully examining one of the alleged counterexamples. We argue that the example is not convincing as it relies on dubious intuitions that likelihoodists have forcefully criticized. More importantly, we show that contrary to Forster's contention, the consilience-based methodology he favored is accountable within the framework of the LTE.
Zheng, Robin: Responsibility, Causality, and Social Inequality
I explore the intertwined philosophical and social scientific research on the fundamental attribution error and causal attributions for poverty in the United States. I expose the way in which what appear to be empirical disputes about causes turn out to be fundamentally political and moral disagreements based on normative expectations about the distribution of powers and social roles that could have prevented an event or state. Thus, moral philosophers who work to reshape normative expectations also play a role in restructuring causal explanations, and hence interventions, for problems like poverty and social inequality.
Zollman, Kevin: The handicap principle is an artifact
The handicap principle is one of the most influential ideas in evolutionary biology. It asserts that when there is conflict of interest in a signaling interaction, signals must be costly in order to be reliable. We show how the handicap principle is a limiting case of honest signaling, which can also be sustained by other mechanisms. This fact has gone unnoticed because in evolutionary biology it is a common practice to distinguish between cues, indexes, and fakable signals, where cues provide information but are not signals and indexes are signals that cannot be faked. We find that the dichotomy between indexes and fakable signals is an artifact of the existing signaling models. Our results suggest that one cannot adequately understand signaling behavior by focusing solely on cost. Under our reframing, cost becomes one of a collection of factors preventing deception, and probably not the most important.