
Philosophy of Science 101: Scientific Models


The Philosophy of Science series explores both general questions about the nature of science and specific foundational issues related to the individual sciences. When applied to such subject areas, philosophy is particularly good at illuminating our general understanding of the sciences. This 101 series will investigate what kinds of serious, often unanswered, questions a philosophical approach to science exposes through its heuristic lens. More specifically, the series will look throughout at the ‘Scientific Realism’ debate, which questions the very content of our best scientific theories and models.

Philosophy of Science 101 will be divided into the following chapters of content:

1. Philosophy of Science 101: The Relationship Between Philosophy and Science

2. Philosophy of Science 101: Scientific Realism

3. Philosophy of Science 101: Anti-Realism

4. Philosophy of Science 101: Realism and Anti-Realism ‘Compromise’

5. Philosophy of Science 101: Causation

6. Philosophy of Science 101: Scientific Models

7. Philosophy of Science 101: Models of Explanation

8. Philosophy of Science 101: Laws of Nature

9. Philosophy of Science 101: Science and Social Context

Philosophy of Science 101: Scientific Models

As discussed in the previous article of this series, the study of causation (i.e., what may account for causation and the relation between a cause c and an effect e) is central to the philosophy of science. There are many different accounts and analyses of causation, which also further explore various causal models. Many scientific models are in fact ‘representational’ models, since they (supposedly) represent a selected part or aspect of the world, such as causation, known as the model’s ‘target system’ (Frigg and Hartmann, 2006). How might a scientific model represent a target system, though? How does a scientific model make a particular part or feature of the world easier to understand, visualise, quantify, define, or simulate? Further, how can a scientific model reference existing and commonly accepted ‘knowledge’ of the given target system? (von Neumann, Taub and Taub, 1963). This article delves into these very questions, thus exploring another significant and influential part of the philosophy of science enterprise. To this end, this part of the 101 series will (i) introduce the philosophical questions surrounding scientific models and their aims, (ii) consider different types of models, and (iii) look at scientific representation in modelling. This is with a view to investigating models of explanation in the following article of the series.

Figure 1. Climate models divide Earth into a grid with vertical and horizontal intervals (Bonpote, 2022).

The Basic Idea: Models in Science Matter

Modelling is a crucial part of scientific practice: it requires selecting and identifying relevant target systems (from the real world) and then developing a model which replicates such a system and its features. Consider a few examples of models which highlight why and how this endeavour is an inseparable part of science:

  1. Bohr’s (1913) model of the atom

  2. The Lorenz (1967) system model of the atmosphere

  3. The billiard ball model of a gas (Egger and Carpi, 2008).

This list could be extended ad nauseam, providing cases in point of the central importance of models in many scientific contexts. This importance has increasingly been recognised by philosophers too, with philosophical literature on models growing rapidly over recent decades in line with the number of different types of models (Frigg and Hartmann, 2006). As mentioned, such models often represent their target system (i.e., a particular part or aspect of the world). The atom is the target system in Bohr’s (1913) model, for example, just as the atmosphere is the target system of the Lorenz (1967) model. The same goes for target systems on which scientists cannot perform experiments. Experiments may be ruled out because the target is too far away (e.g., stars), too large to intervene on (e.g., the solar system), ethically irresponsible to intervene on (e.g., heart function), or impossible to intervene on due to the nature of the system itself (e.g., the stock market). Target systems are often difficult or impossible to experiment on, hence the response is to build a model of the system and study that instead. Scientists may study the model in order to learn about the model’s target (whether the target is easily ‘intervenable’ or not). One can learn about the gravitational influence of the Sun by studying the Newtonian model of the solar system, for example, or about unpredictable weather via the Lorenz system model. Likewise, one may learn about predator-prey interaction from the Lotka-Volterra model (Zhu and Yin, 2009). Since models are representations of their target systems, one can learn from them, which gives rise to many philosophical questions about scientific models as such.
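The Lotka-Volterra case can be made concrete. Below is a minimal sketch of the predator-prey model, assuming simple Euler integration; all parameter values and starting populations are illustrative assumptions, not empirical estimates for any real ecosystem:

```python
# Minimal sketch of the Lotka-Volterra predator-prey model, integrated
# with simple Euler steps. All parameter values below are illustrative
# assumptions, not empirical estimates for any real population.

def lotka_volterra(prey, predators, alpha, beta, delta, gamma, dt, steps):
    """Integrate dx/dt = alpha*x - beta*x*y and dy/dt = delta*x*y - gamma*y."""
    history = [(prey, predators)]
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * predators
        dpred = delta * prey * predators - gamma * predators
        prey += dprey * dt
        predators += dpred * dt
        history.append((prey, predators))
    return history

# Hypothetical starting populations and rates (assumed, for illustration):
history = lotka_volterra(prey=10.0, predators=5.0,
                         alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                         dt=0.01, steps=2000)
```

Studying the model's oscillating trajectories is precisely how one learns about the target (cyclical predator-prey dynamics) without intervening on any real ecosystem.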

Major philosophical questions in this area concern a model’s ability to represent something and thereby yield knowledge. When model assumptions are false (and some of them are dramatically false), what lessons can one draw? A model built on false assumptions is not a description of the facts, yet models are supposed to tell us something about the world. Serious philosophical questions must therefore be asked about what exactly a model represents and how it does so (as this article will discuss later). Ontological questions also arise, since it is important to understand what a model is. For example, what is the famous Fibonacci model? What does it consist of? Is it the equation? Is it the sequence? Or is it the model assumptions? (Frigg and Hartmann, 2006). Somewhat similarly, what is it in a model that provides truth? The philosopher of science must ask what kind of internal structure can generate results in a model, especially when some claims in the same model are true and others false. Further epistemological questions arise here too, for the philosopher must work out how one learns about a model and about what is true in a model. Models are notably quite similar to theories in science, so the similarities and differences between the two must also be pointed out. To use Fibonacci as an example again, the model is independent from the theory (Frigg, 2002). Not all models are like this, however, and models are in fact often related to theories in several ways (Bailer-Jones, 2002). On top of ontological, epistemological, and semantic questions concerning scientific models, other topics in and around the philosophy of science crop up too. Questions concerning the explanatory power of models arise, for instance, as do questions on the use of models in the scientific realism debate (discussed in articles 2, 3, and 4 of the series).

The types of philosophical questions asked, and how they are answered, of course depend on the particular model in question in the first place. There are three ‘main’ types of models, as this article will explore before investigating the problem of representation specifically.
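The Fibonacci case makes the ontological question vivid: the recurrence F(n) = F(n-1) + F(n-2) (an equation) and the sequence it generates are distinguishable candidates for what ‘the model’ is. A purely illustrative sketch:

```python
# The Fibonacci recurrence F(n) = F(n-1) + F(n-2) versus the sequence
# it generates: two distinct candidate answers to the ontological
# question "what is the Fibonacci model?"

def fibonacci_sequence(n, f0=0, f1=1):
    """Apply the recurrence n times, returning the resulting sequence."""
    terms = []
    a, b = f0, f1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci_sequence(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The function (the recurrence rule) and the list it returns (the sequence) are different objects, and neither is obviously identical with the model's assumptions.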

Figure 2. Fibonacci retracement levels—stemming from the Fibonacci model—are horizontal lines on a stock chart that indicate where support and resistance are likely to occur (Jiang, 2021).

Three Fundamental Kinds of Models

Scientific models, roughly, can be understood as representations. There are three fundamental types of such models concerning what is being represented (Treagust, Chittleborough, and Thapelo, 2002):

  1. Models of phenomena

  2. Models of theory

  3. Models of data.

Models of phenomena have already been discussed, albeit briefly and without the ‘model of phenomena’ title. This type of model is a model of a selected part or aspect of the world – a phenomenon (also known as a ‘target system’). Bohr’s (1913) model of the atom is therefore a good example, with the atom being the particular aspect (or phenomenon) of the world that the model represents. By the same token, the atmosphere is the target system in the Lorenz model. Models of theory then differ, bringing logic into the picture. Whereas a theory is a set of sentences in a formal language, a model (of a theory) is a structure that makes all the sentences of the theory true (Frigg and Hartmann, 2006). Consider a simple example:

Theory T: ∀x (Fx → Gx)

- Let S = {s1, …, s100} be the set of all objects in a room.

- Let F be the predicate “is a wall”.

- Let G be the predicate “is painted white”.

Assuming every wall in the room is indeed painted white, the sentence ∀x (Fx → Gx) is true in S. Therefore, S is a model of the theory T.
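The toy example can be checked mechanically. A minimal sketch, where the domain and the extensions of F and G are stipulated assumptions mirroring the room example (including the assumption that every wall is painted white):

```python
# Checking mechanically whether a finite structure is a model of
# T: for all x (Fx -> Gx). The domain and the extensions of F and G
# below are stipulated assumptions mirroring the toy 'room' example.

domain = [f"object_{i}" for i in range(1, 101)]            # objects in the room
walls = {"object_1", "object_2", "object_3", "object_4"}   # extension of F
white_things = walls | {"object_5"}                        # extension of G

def is_model(domain, F, G):
    """The structure is a model of T iff every F-object is also a G-object."""
    return all((x not in F) or (x in G) for x in domain)

print(is_model(domain, walls, white_things))  # True, so S is a model of T
```

Note that making even one wall non-white would falsify the conditional and the structure would no longer be a model of T.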

Now consider a more serious example from Euclidean geometry, where the structures satisfying Euclid’s axioms are ‘models’ in the sense that the model is what the theory is about. Sometimes it is said that logical models are an interpretation of the theory, or that they satisfy the theory (Frigg, 2002). Yet such a model S is not itself about anything; it is just a set of objects. Hence, models in the logical sense are not ipso facto models of phenomena (Putnam, 1969). One could argue that models are multi-functional, however, since many models in science are models of both phenomena and theory in various respects. Consider the Newtonian model of the Sun-Earth system (Pal, Abouelmagd, and Kishor, 2021), which satisfies Newton’s theory of motion (making it a model of theory) whilst representing a target system, the Sun-Earth system (making it a model of phenomena). This is unfortunately untenable as a general account, since (i) it is not universal and (ii) relations between models (the type of fact that is true or false of two models jointly but not usually straightforward to identify) complicate it, but multi-functional models are nevertheless interesting to consider (and deserve attention elsewhere).

Figure 3. This illustration reveals ultrastructural morphology exhibited by coronaviruses. Disease modelling involves creating a biological system in the lab that shows the same disease processes (Eckert, 2020).

The final type is models of data. Empirical observations sometimes provide evidence in the form of data points, and raw data is of course corrected, rectified, and developed. Models of data are thus formed from data points and the patterns within them. A hypothetical example might be a model formed from data on Venetian sea levels (Sober, 2001). A certain ‘pattern’ identified in data on flooding might help predict when the next flood will occur, for example. Such a model may be formulated via linear regression, fitting a straight line through the data points. There is no ‘physical modelling’ involved here as such, but a model of data is the result, helping to predict future flooding (Sober, 2001). The number of different types of models recognised has increased as the study of scientific models has grown in the philosophy of science, and questions around model representation are of particular interest.
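The Venetian example can be sketched concretely. Below, an ordinary least-squares line is fitted through invented sea-level figures (the numbers are hypothetical, not real measurements), and it is the fitted line, the model of data, that yields the prediction:

```python
# Sketch of a model of data: a least-squares line through hypothetical
# annual Venetian sea-level readings. The numbers are invented for
# illustration, not real measurements.

years = [2000, 2005, 2010, 2015, 2020]
levels_cm = [23.0, 24.1, 25.3, 26.2, 27.4]   # hypothetical mean levels

def fit_line(xs, ys):
    """Ordinary least squares: returns slope and intercept of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line(years, levels_cm)
# It is the model (the fitted line), not the raw data, that yields this:
prediction_2025 = slope * 2025 + intercept
```

The data points themselves say nothing about 2025; only the model of data, the regression line abstracted from them, supports the prediction.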

Scientific Representation: The Problem

The problem in question is as follows: in virtue of what is a model a representation of something else? More formally, the problem is what fills the blank in:

M is a scientific representation of T iff (if and only if) ____________ where ‘M’ stands for ‘model’ and ‘T’ for ‘target system’ (Frigg and Nguyen, 2017).

There are some conditions of adequacy to consider briefly first, such as learning from models. Representation must be such that it allows one to derive claims about the target system from the model: learning about future flooding from a model of data on Venetian sea levels, for instance, as mentioned (Sober, 2001). Moreover, an account of representation must allow for misrepresentation. Directionality matters too, for there is an essential directionality to representation (a model represents its target, not vice versa) which any account has to explain (Frigg and Hartmann, 2006).

Figure 4. An alternative, more colourful rendering of the Lorenz (1967) attractor (Ghys, 2013).

On top of maintaining these conditions of adequacy, now also consider what representation is not. Representation, crucially, is not a mirror image of a target system. That proposed definition is both wrong and misleading, for a mirror image resembles its object in one fixed way, whereas there are many different representations of the same object which may warrant different inferences. Failure to take this into account can lead to serious mistakes, since there are so many different kinds of representations in the sciences. Models of strangelets (hypothetical objects composed of an exotic form of matter known as strange matter or quark matter) (Anissimov, 2023), for example, include the Liquid Drop Model and the Shell Model (Madsen, 1994). The two are vastly different from one another and showcase why models should not be understood as mirror images. Representation does not imply ‘mirror image’, just as science is not a copy of the world. This is what representation is not. But what is it? Various accounts of representation (by scientific models) are discussed next.

Scientific Representation: Similarity and Isomorphism Accounts

To reiterate, representations are not mirror images. Some accounts, however, hold that similarity and representation initially appear to be two closely related concepts (Frigg and Nguyen, 2017). Interestingly, this idea of similarity grounding representation has a philosophical lineage stretching back at least as far as Plato’s Republic (Allen, 2006). There are numerous versions of the similarity account of representation, such as:

  1. A model M is a scientific representation of a target system T iff M and T are similar.

  2. M is a scientific representation of T iff a model user provides a theoretical hypothesis H specifying that M and T are similar to one another in relevant respects and to relevant degrees (Giere, 2004).

Clearly, account (2) develops (1). Overall, similarity accounts work by exploiting similarities between a model and the aspect of the world it is being used to represent (Giere, 2004). The problem with account (1), of course, is that mere similarity is not enough to ground representation: everything is similar to everything else, because any two items share some property. Assume now that this problem could somehow be solved, for example by narrowing down the ‘allowable’ kinds of similarity. Then recall the conditions of adequacy: learning from models, misrepresentation, and directionality. The learning condition is met, because if one understands that M is similar to T, and M has a certain property P, then one can infer that T has a similar property. Account (1), however, does not meet the misrepresentation condition. If something misrepresents, then it fails to be similar; yet something that is not similar is not a representation at all according to (1). The third and final condition of adequacy is also unmet, since similarity is symmetrical: if A is similar to B, then B is similar to A. Representation is not symmetrical, though: if M represents T, then T does not (usually) represent M. This is why (1) fails to explain the directionality of representation, and why many argue for a more developed account like (2), with its inclusion of an intentional agent.

Figure 5. Two Graphs - Isomorphic examples (2021).

Ronald Giere’s (2004) similarity account of representation (account (2)) rethinks a simple similarity account like (1): a model user provides a theoretical hypothesis H specifying that M and T are similar to one another in relevant respects and to relevant degrees. Giere’s account is notably prominent amongst similarity accounts of representation. More recently, Giere (2010) has sought to defend the similarity account by explicitly invoking the role played by scientists, the model users, in using a scientific model (Toon, 2012). Appealing to agents and their representational capacities offers a promising way to defend the similarity account. Giere (2004), interestingly, proposes a shift away from a traditional focus on ‘representation’ to ‘the activity of representing’. As Adam Toon (2012) puts it:

“S uses X to represent W for purposes P, where S may be an individual scientist, a scientific group, or a larger scientific community, W is an aspect of the real world, and X is a representational device. While X might be a diagram, graph, or some other form of representational device, it is models that are primary (though by no means the only) representational tools in the sciences.” (p.246)

Giere’s proposal is therefore that models do not represent ‘on their own’ so to speak, but only because of what scientists do with them. Likewise, in assessing individual cases, one must ask not ‘is this object a model-representation?’ but ‘is this object-used-in-this-particular-way a model representation?’ (Toon, 2012). Giere, overall, thus offers an account which stresses the way in which scientists exploit similarities between models and the world. This use of an intentional agent (the scientist) is therefore a development of account (1) and overcomes problems concerning mere similarity. Consider how Giere (2004) introduces his account:

“…I am not saying that the model itself represents an aspect of the world because it is similar to that aspect. There is no such representational relationship. Anything is similar to anything else in countless respects, but not anything represents anything else. It is not the model that is doing the representing; it is the scientist using the model who is doing the representing.” (pp.747-8)

Mere similarity no longer proves to be the problem it was for account (1), especially since scientists pick out specific features of the model that they can claim are similar to features of the designated target system to some degree of fit (Giere, 2004). Indeed, Giere calls statements specifying the similarities between model and system theoretical hypotheses (Toon, 2012). For the Newtonian model of the solar system, for instance, the theoretical hypotheses concern the positions and velocities of the earth and moon in the earth-moon system, which are very close to those of a two-particle Newtonian model with an inverse square central force (Giere, 2004). A model therefore does not simply represent T because it is similar to T, as account (1) suggests; rather, scientists use the model to represent the system by exploiting similarities via theoretical hypotheses.

Figure 6. Newtonian model of the Solar System (Telescope, 1812).

It is beyond the scope of this article to investigate whether an account like Giere’s entirely overcomes the problems that other (oftentimes simpler) similarity accounts of representation encounter, though account (2) is certainly less naïve than (1). Moreover, it is not necessarily true that all forms of scientific representation involve similarity between M and T. One might adopt a different kind of account because of this, namely a structuralist account of representation. Though related to the similarity accounts above, the structuralist account construes ‘similarity’ somewhat differently. One might view the structuralist account either as (a) a special version of the similarity account (similarity with respect to structure), or as (b) an independent account. First consider what ‘structure’ refers to (where S is the structure, D is the domain of objects, and R is a set of relations on D):

S = < D, R >

Notably, different objects can have the same structure. Very loosely, if two objects share the same structure then they are isomorphic. This idea may be used to define an account of representation: M is a scientific representation of T iff M and T are isomorphic (Frigg, 2002). Indeed, the objects that serve as models belong to different ontological kinds and are often set-theoretic structures (Frigg and Hartmann, 2006), adding somewhat to the desirability and plausibility of the account. Regardless of ontology, however, the isomorphism account holds that it is a shared structure (between M and T) that accounts for representation. This is why an isomorphism account is rather like a similarity account of representation. Perhaps, though, similarity in general is irrelevant when a model represents a target system?
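The idea that different objects can share the same structure can be made precise with a brute-force sketch: two finite structures S = &lt; D, R &gt; are isomorphic iff some bijection between their domains maps one relation exactly onto the other. The toy structures below are invented for illustration:

```python
# Brute-force check that two finite structures <D1, R1> and <D2, R2>
# are isomorphic: is there a bijection between the domains that maps
# the relation of one exactly onto the relation of the other?
# The toy structures below are invented for illustration.
from itertools import permutations

def isomorphic(d1, r1, d2, r2):
    if len(d1) != len(d2) or len(r1) != len(r2):
        return False
    for perm in permutations(d2):
        f = dict(zip(d1, perm))          # candidate bijection D1 -> D2
        if {(f[a], f[b]) for a, b in r1} == set(r2):
            return True
    return False

# The same three-element 'cycle' structure realised by two different domains:
d1, r1 = ["a", "b", "c"], {("a", "b"), ("b", "c"), ("c", "a")}
d2, r2 = [1, 2, 3], {(2, 3), (3, 1), (1, 2)}
print(isomorphic(d1, r1, d2, r2))  # True: different objects, same structure
```

The two domains contain entirely different objects (letters versus numbers), yet the structures are isomorphic, which is exactly the sense in which the structuralist account locates representation in shared structure rather than shared objects.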

Scientific Representation: Representation-as by Goodman and Elgin

This article finishes by discussing a final, and quite promising, account of how to think about the representational relationship between models and the world. This ‘representation-as’ account emerges from the work of Nelson Goodman and Catherine Z. Elgin (Nguyen and Frigg, 2017). On Goodman and Elgin’s account, one can think of representation much like the way Margaret Thatcher is represented as a sand timer in her caricature on The Economist cover (figure 7 below). Scientific models represent, very roughly, in the same way. Figure 8 below, the Kendrew model of myoglobin, is another example of this kind of representation, whereby myoglobin is represented as a plasticine-type structure on sticks. Consider the notation below before introducing Elgin’s (2009) definition of representation:

X – the object that does the representing (for instance, the caricature drawing of Margaret Thatcher)

Y – the real-world target of the representation (Margaret Thatcher herself in this instance)

Z – the kind of representation (a sand timer in this instance).

Elgin’s (2009) definition is then as follows: when X represents Y as Z, it is because X is a Z-representation that denotes Y as it does. X does not merely denote Y and happen to be a Z-representation. Rather, in being a Z-representation, X exemplifies certain properties and imputes those properties, or related ones, to Y.

Figure 7. Margaret Thatcher caricature, rather like a model representation of the UK Prime Minister (Kallaugher, 2017).

Discussing further the representational relationship between models and their targets as one of representation-as requires adding specificity to the definitions above, namely to denotation and exemplification. First, denotation (broadly speaking) is a two-place relation between a symbol and the object to which it applies (Nguyen and Frigg, 2017). ‘NASA’, for example, stands for the National Aeronautics and Space Administration and denotes the U.S. federal government agency responsible for the civil space program, aeronautics research, and space research (Bilstein, 1996). Pictures, equations, charts, and graphs (the list could go on) are indeed representations of the things they denote. On this note, Goodman (1976) claims that we are often misled by ordinary language into believing that something is a representation only if there is something in the world that it represents. Distinguish between (1) pictures of a unicorn and (2) unicorn pictures, for example; more generally, distinguish between (1) pictures of a Z and (2) Z-pictures. One does not imply the other, argues Goodman (1976). In sum, a picture of a Z denotes a Z, but without necessarily showing a Z; a Z-picture shows a Z, but without necessarily denoting a Z. Hence, some Z-representations denote a Z and others do not, just as some pictures of a Z are Z-pictures and others are not. A picture of Europe, for instance, is a territory-representation, whereas a representation of a territory must refer to actual objects in the world.