Friday, December 19, 2014

Thoughts about the scientific method (II): goals, models and frameworks


Note: This is part (II). Part (I) can be found here.

One of the most confusing words in philosophical discussions of science is the word "theory". The general public usually does not know what it means, and even within the physics community there is no widespread agreement on its meaning. Compare the following phrases: quantum theory, density functional theory, Fermi liquid theory, BCS theory, big bang theory, theory of elasticity, gauge theory, quantum field theory. Any physicist would agree that the word "theory" is being used here with very different meanings. This makes the question of whether a given "theory" should be testable, or falsifiable, extremely muddy.

I will try my best to shine a little light into the mud (though I cannot guarantee the transmission or reflection of such light!). Let's consider three words that are related to "theory" but have far less ambiguous meanings: goal, model, and framework.

A goal, clearly, is something that one would like to understand. Some possible goals in physics are "superconductivity", "nanomechanics", "molecular motors", "quark-gluon plasma", "photonics" and "quantum gravity". These are sometimes phrased as theories, e.g. "theory of superconductivity", but as this example shows, such phrases are very unclear and it is better to think of them as goals. One advantage of focusing on goals is that each one typically has an experimental and a theoretical side: for example, an experimentalist may study the quark-gluon plasma at an accelerator, while a theorist could formulate a model to describe the transport properties of this curious state of matter. However, very often there is no symmetry between the experimental and theoretical sides. For example, photonics is largely an experimental effort to transmit, modulate, amplify and detect light. The theoretical basis for light-matter interactions, known as quantum electrodynamics, is very well understood and verified, and one does not aim to test or improve it using photonics (though there is still room for theorists to work out the properties of light in specific media). The important point is that photonics is driven by its potential applications in telecommunications, medicine, metrology and aviation, to name a few areas. As such, it is more experiment-driven than theory-driven.

Now let's move on to models. Some useful examples are the "nuclear shell model", "Hubbard model", "Yukawa model" and "dual resonance model". Each of these models is a specific attempt to understand particular physical phenomena: respectively, the energy spectra of nuclei, the metal-insulator transition, the nature of the strong interactions, and resonances in high-energy scattering. In each case the model is rather precisely aimed at a goal and is somewhat successful in describing the system in question. Also, in each of these cases the model has obvious limitations: the shell model fails to explain the multipole moments of nuclei, while the Yukawa model does not provide accurate numbers for scattering amplitudes. In fact, all the above models are "wrong", in the sense that each is contradicted by definite experimental measurements. In some cases they have been superseded by better models that also do not work completely: for example, the shell model was replaced by the collective model, which won Bohr, Mottelson and Rainwater a Nobel prize in 1975, but it too has had limited success. In spite of this, the models I've listed are all useful and continue to be used by scientists for various purposes. It is important to understand that though they were contradicted by specific experiments, the models were not thrown out.

I expect a layperson will be surprised to learn that physicists use models that actually disagree with some experiments. Physicists, on the other hand, are aware that scientific models can have limited domains of applicability. Though we routinely accept this, I sometimes wonder how honest we are being. We do not always decide in advance which model should apply to a given situation; rather, we often test a theoretical model against experiment, find that it doesn't work, and then, instead of saying "oh, this model is wrong", we say it is not applicable to that experimental situation for some reason. In other words, the model works only when it works. This is not a perfect way to do things, but it is what we can do, and we are all used to doing it.

Finally, some words about frameworks. These are less familiar to the general public than goals and models, because they are usually technical, even though they can embody profound physical concepts. A framework is not a model of a system but a way of thinking about large classes of systems. A very beautiful example, familiar to physicists across many (but not all) areas, is the renormalisation group. It teaches us how to follow the evolution of any microscopic system under a change in the effective length scale, and it introduces the notion of a "fixed point": a universal behaviour to which a wide class of systems converges. This work originated in particle physics in 1954 and was developed by Kadanoff and then Ken Wilson (who won the 1982 Nobel prize) in the 1960s and '70s in the context of statistical systems. Wikipedia tells us:
 "The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics, fluid mechanics, cosmology and even nanotechnology.
There is a profound lesson here. A framework like the renormalisation group, on its own, does not make any predictions about any system. It does not even know what system we are talking about! To be converted into a testable idea, it has to be incorporated into a model. When the renormalisation group was proposed in 1954, quarks had never been thought of and quantum chromodynamics did not exist. However, when experimental circumstances prompted the possibility of a theory of quarks, the notion of the renormalisation group was already around and could quickly be incorporated into the seminal work that won Gross, Politzer and Wilczek a Nobel prize in 2004.

Thus a framework is a theoretical concept, built out of past experience of physical reality, that takes our theoretical understanding to a higher level and can then be applied, in various contexts, to situations that may not even have been anticipated during its original development. For example, the 1954 work on the renormalisation group was developed for, and applied to, quantum electrodynamics; but even if that enterprise had failed its experimental tests, the framework would still have been lying around, waiting for the applications that eventually yielded the 1982 and 2004 Nobel prizes. From the standpoint of theory, frameworks are incredibly valuable things, often more valuable and durable than models.
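To make the fixed-point idea a little more concrete, here is a minimal numerical sketch, in Python, of the textbook decimation renormalisation group for the one-dimensional Ising chain. This is my own illustration, not anything from the 1954 work: summing out every other spin maps the dimensionless coupling K = J/kT to K' = (1/2) ln cosh(2K), and the function name rg_step below is just my label for that map.

    import math

    def rg_step(K):
        # One decimation step for the 1D Ising chain: summing out
        # every other spin renormalises the dimensionless coupling
        # K = J/kT according to K' = (1/2) * ln(cosh(2K)).
        return 0.5 * math.log(math.cosh(2.0 * K))

    # Follow the flow from a few different starting couplings.
    for K0 in (0.5, 1.0, 2.0):
        flow = [K0]
        for _ in range(10):
            flow.append(rg_step(flow[-1]))
        print([round(K, 4) for K in flow])

Whatever finite coupling one starts from, the iteration converges to the trivial fixed point K* = 0, which is the renormalisation-group way of saying that the one-dimensional Ising chain is disordered at any nonzero temperature. The same machinery, applied to more elaborate models, picks out the nontrivial fixed points that govern universal behaviour.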

(to be continued)

1 comment:

Rahul Siddharthan said...

I guess I should wait for part 3 before leaving more comments, but can't resist quoting George Box: "All models are wrong, but some are useful."