Bergen Philosophy of Science Workshop (I)
On Friday 21 November 2025, the Department of Philosophy will host the 13th edition of the annual Bergen Philosophy of Science Workshop.
Program
9.45 Coffee & welcome
10.00-11.30 Jon Williamson (Manchester)
- Causal inference is not statistical inference: how Evidential Pluralism mitigates the replication crisis
11.30-13.00 Stephan Hartmann (MCMP/LMU Munich)
- Framework Fundamentality
13.05-14.15 Lunch
14.15-15.45 Vera Matarese (Perugia)
- NOISE: Noisy Ontology Informing Scientific rEalism
15.50-17.20 Jo Wolff (Edinburgh)
- Is ‘ground truthing’ always the best measurement?
17.30-19.00 Emiliano Ippoliti (Sapienza University of Rome), online
- From unreasonable effectiveness to reasonable ineffectiveness: the epistemic cycle of mathematics on Wall Street
19.30 Dinner
****************************
Abstracts
- Causal inference is not statistical inference: how Evidential Pluralism mitigates the replication crisis (Jon Williamson, Manchester)
I introduce two views about the connection between causal inference and statistical inference: a weak and a strong view. According to the weak view, statistical techniques are useful for causal enquiry. According to the strong view, causal enquiry is a purely statistical problem. I argue that there has been a trend from the weak view, which was advocated by R.A. Fisher and Austin Bradford Hill, for example, to the strong view. Indeed, methods for causal enquiry are now usually couched as purely statistical methods: e.g., the analysis of randomised controlled trials and observational studies, meta-analysis, and model-based approaches such as structural equation modelling and graphical causal modelling.
I suggest that this trend is pernicious because it has contributed to the replication crisis that is currently plaguing the health and social sciences. That observed associations are not replicated by subsequent studies is a part of normal science. A problem only arises when those associations are taken to establish causal claims: a science whose established causal claims are constantly overturned is indeed in crisis. The strong view leads to this problem because it tends to establish causal claims on the basis of associations of one sort or another.
I argue that Evidential Pluralism, an emerging philosophical account of causal enquiry, offers a way out of this crisis, by helping to avoid fallacious inferences from association to causation. According to Evidential Pluralism, causal inference requires a combination of statistical inference and mechanistic inference. Evidential Pluralism is thus allied to the weak view of the relationship between causal inference and statistical inference: statistical inference is important for causal enquiry, but not the whole story.
- Framework Fundamentality (Stephan Hartmann, MCMP/LMU Munich)
In the literature on scientific theories, one encounters three terms that are all too often not carefully distinguished from one another: ‘framework’, ‘theory’ and ‘model’. They are ordered here according to their generality, i.e. how ‘far away’ they are conceptually from the specific target systems. After discussing these three terms, I introduce three types of framework fundamentality: (i) ontic fundamentality, which refers to the objects described by a theoretical framework, (ii) epistemic fundamentality, which refers to our knowledge of these objects, and (iii) explanatory fundamentality. To illustrate these notions and to make them useful, I examine the physics of open quantum systems as a case study. Here we encounter two different frameworks – the standard quantum open systems theory (ST) and the general quantum theory of open systems (GT) – and I will argue that GT is more fundamental than ST. The talk is based on joint work with Mike Cuffaro.
- NOISE: Noisy Ontology Informing Scientific rEalism (Vera Matarese, Perugia)
Noise is typically considered a threat to scientific and measurement realism, since it impedes our ability to achieve a faithful representation of physical reality. In this talk, I present my next project ‘NOISE: Noisy Ontology Informing Scientific rEalism’, which aims to radically challenge this view. First, the project shows why and how we need to engineer a new concept of noise, one that is faithful both to scientific practice and to its physical features. Second, the project aims to reconceive both scientific and measurement realism in light of this new concept of noise. Indeed, only by recognizing noise as rooted in fundamental ontological features of our reality and as epistemically fruitful for scientific knowledge can one develop a more robust and science-informed philosophical account of scientific and measurement realism.
- Is ‘ground truthing’ always the best measurement? (Jo Wolff, Edinburgh)
Across the earth sciences, and increasingly also in fields like archaeology, remote sensing has become an important research tool. Yet, remote sensing techniques are typically thought to require validation and calibration through ground truthing (GISGeography 2023). Ground truthing is often understood literally here: as calibrating and validating the remote measurement against measurements taken on the ground, which are assumed to be the best available measurement. This conception of the ground measurement as the best available measurement has recently been challenged by Woodhouse (2021), who examines three different cases of remote and ground measurement to argue that ground truth is the best available measurement only in cases of nominal measurement (i.e., classification).
In this paper, I take up Woodhouse’s challenge and the question of what reasons we might have for taking the ground measurement to be the best available measurement.
My main thesis is that we should indeed not assume that on-the-ground measurements equate to the best available measurement in all cases. However, this is so for reasons other than the ones advanced by Woodhouse and, as a result, it is not the case that the preference for ground truthing is restricted to cases of nominal measurement.
- From unreasonable effectiveness to reasonable ineffectiveness: the epistemic cycle of mathematics on Wall Street (Emiliano Ippoliti, Sapienza University of Rome)
The question of mathematics’ effectiveness in describing and predicting phenomena in the natural and social sciences continues to provoke substantial philosophical debate. The arguments crystallize into three main positions: unreasonable effectiveness (Wigner 1960, Hamming 1980), reasonable effectiveness (Mac Lane 1990, Lakoff and Núñez 2000, and Longo 2005), and reasonable ineffectiveness (Cellucci 2015, 2022), each offering a distinct perspective on the role and limits of mathematics.
In this paper, I argue that the question of mathematical effectiveness takes on a distinctive form when applied to the modelling and prediction of Wall Street—intended as a symbol of the pulse of global finance—where epistemic tools have ontological effects. Since financial markets are performative and reflexive, the relationship between mathematics and finance cannot be understood merely as one of passive representation. Rather, mathematics in finance assumes an active and constitutive role—shaping the epistemic state of the markets and decisions, and then transforming market structures (ontology) through its very use. I argue that, viewed from this angle, the effectiveness of mathematics in finance follows a cyclical pattern.
It begins as unreasonably effective, since the success of mathematical models arises from their ability to formalize genuine patterns—statistical regularities, behavioral tendencies, and arbitrage mechanisms—that do exist within financial systems, even if they rest on contested, unreasonable assumptions. Their elegance and internal coherence further reinforce this impression, endowing them with an aura of universality, as though the equations reveal an underlying order hidden beneath the market’s apparent chaos. Then mathematics becomes reasonably effective (as models are institutionalized and agents coordinate around them) and ultimately turns reasonably ineffective (as recursive behavioral responses generate ontological systemic fragility).
I show that this epistemic cycle is vividly illustrated by the case of the mathematical model known as Value-at-Risk (VaR) during the 2008 global financial crisis. The widespread use of VaR as a quantitative risk model produced a systemic underestimation of tail risks, grounded in assumptions that failed under stress: efficient markets, Gaussian distributions of returns, rational behavior, and historical stationarity. These assumptions lent an initial illusion of precision that masked deeper radical uncertainty, thereby leading to epistemic failure emerging from initial success.
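To make the Gaussian assumption concrete, here is a minimal sketch of the textbook calculation, with the numbers chosen purely for illustration: if portfolio returns are assumed normally distributed with mean μ and standard deviation σ, the value-at-risk at confidence level α is
VaR_α = −(μ + σ · Φ⁻¹(1 − α)),
where Φ⁻¹ is the standard normal quantile function. With μ = 0 and a daily volatility of σ = 2%, the 99% one-day VaR comes out at roughly 2.33 × 2% ≈ 4.7% of portfolio value; it is precisely this reliance on the Gaussian quantile that understates losses once returns exhibit fat tails and the stationarity assumption breaks down.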
I conclude by suggesting that this cyclical pattern challenges the notion that Wall Street can be governed by universal or static mathematical laws, and that it calls for the development of an ethics of mathematics in finance—an awareness of the moral and epistemic consequences of modeling the market world.