Tuesday 6 December 2016

8th Swedish Meeting for Mathematical Biology

Next week, on the 15th-16th of December, the Department of Mathematical Sciences at Chalmers/GU is hosting the 8th Swedish Meeting for Mathematical Biology. The first meeting was held in 2009, organised by David Sumpter at Uppsala University, and this is the second time the meeting is being held in Gothenburg (the last time was in 2010).

The purpose of the conference is to gather Swedish researchers who use mathematics to understand biological systems, e.g. in evolutionary biology, epidemiology, ecology and cancer research. The meeting spans two days and features two invited talks, by Ivana Gudelj and Luigi Preziosi. The remaining time is allocated to contributed talks with a typical duration of 20 minutes, and among these we give priority to PhD students and young researchers. In addition to the talks there is also a poster session.

For more information please have a look at our webpage. It is still possible to register for the meeting; you can do so by sending me an email.


Monday 14 November 2016

The impact of anticipation in dynamical systems

We have just submitted a manuscript that investigates the role of prediction in models of collective behaviour. The idea is quite simple: take a model where animals attract/repel each other based on a pairwise potential, and adjust it so that the animals act not on current positions, but on future positions (including their own). These anticipated or predicted positions are assumed to be simple linear extrapolations some time T into the future. In other words, instead of using the current positions x to calculate forces, we use x + T*v, where v is the velocity.

This seemingly simple modification changes the dynamics dramatically. For a typical interaction potential (e.g. the Morse potential), the case of no prediction yields no pattern formation: the particles simply attract and collide. But for an intermediate range of T we observe the rapid formation of a milling structure. This means that prediction induces pattern formation and stabilises the dynamics.
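
To make the modification concrete, here is a minimal sketch (in Python, and not the code behind the manuscript) of such a system: pairwise forces from a Morse-type potential are evaluated at the linearly extrapolated positions x + T*v rather than at the current positions x. The exact form of the potential and all parameter values are illustrative assumptions; setting tau = 0 recovers the ordinary, non-anticipating dynamics.

```python
# Minimal sketch of anticipation in a pairwise-potential model
# (illustrative parameters, not the manuscript's actual model or code).
import numpy as np

def morse_forces(pos, Cr=1.5, Ca=1.0, lr=0.5, la=2.0):
    """Forces from a Morse-type potential U(r) = Cr*exp(-r/lr) - Ca*exp(-r/la)."""
    diff = pos[:, None, :] - pos[None, :, :]          # displacement vectors x_i - x_j
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                       # exclude self-interaction
    dUdr = -Cr / lr * np.exp(-r / lr) + Ca / la * np.exp(-r / la)
    # force on particle i: -sum_j U'(r_ij) * (x_i - x_j) / r_ij
    return -np.sum((dUdr / r)[:, :, None] * diff, axis=1)

def simulate(n=50, tau=0.5, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    v = rng.uniform(-0.5, 0.5, size=(n, 2))
    for _ in range(steps):
        f = morse_forces(x + tau * v)                 # forces act on anticipated positions
        v += dt * f
        x += dt * v
    return x, v

x, v = simulate(tau=0.5)   # tau = 0.0 gives the usual, non-anticipating dynamics
```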

Abstract:
The flocking of animals is often modelled as a dynamical system, in which individuals are represented as particles whose interactions are determined by the current state of the system. Many animals, however, including humans, have predictive capabilities, and presumably base their behavioural decisions - at least partially - upon an anticipated state of their environment. We explore a minimal version of this idea in the context of particles that interact according to a pairwise potential. Anticipation enters the picture by calculating the interparticle forces from linear extrapolation of the positions some time $\tau$ into the future. Our analysis shows that for intermediate values of $\tau$ the particles rapidly form milling structures, induced by velocity alignment that emerges from the prediction. We also show that for $\tau > 0$, any dynamical system governed by an even potential becomes dissipative. These results suggest that anticipation could play an important role in collective behaviour, since it induces pattern formation and stabilises the dynamics of the system. 

arXiv: http://arxiv.org/abs/1611.03637


Thursday 13 October 2016

Copernicus was not right

During my parental leave I took the opportunity to learn more about areas that I normally don't have time to explore. One of these topics was the history of science, and in particular the changing world views that man has held throughout history.

Perhaps the largest shift in our view of the world happened when the geocentric world view was replaced by the heliocentric one. Although some ancient philosophers argued for a heliocentric world view (most notably Aristarchos of Samos), the general belief was that the earth was located at the centre of the universe and that the planets were carried round the earth on spheres, the outermost one holding the fixed stars. This framework was described in mathematical terms by Claudius Ptolemaeus in the 2nd century AD, in his astronomical work the Almagest. Ptolemaeus constructed a mathematical model in which all planets orbited the earth on circles, and in addition each planet travelled on a smaller circle, an epicycle, along its trajectory around the earth. The Ptolemaic system could predict the future positions of the planets with good accuracy, and in addition it harmonised well with the world view of Christianity. These two factors help explain why the Ptolemaic system remained dominant for over a thousand years.

The first serious attack on it was delivered by Nicolaus Copernicus, who in the book De revolutionibus orbium coelestium (1543) suggested a heliocentric system. The motivation for this was two-fold. Firstly, Copernicus did not like the fact that the ordering of the planets in the Ptolemaic system was arbitrary and simply a convention (since both the distance to the earth and the speed of each planet could be adjusted to fit the data, there was, in modern terms, a free parameter in the solution). Secondly, he disapproved of Ptolemaeus's use of an equant point in his system. The equant is the point from which the centre of each planet's epicycle is perceived to move with a uniform angular speed. In other words, to a hypothetical observer placed at the equant point, the centre of the epicycle would appear to move at a steady angular speed. However, in order to account for the retrograde motion of the planets, Ptolemaeus had to place the equant point next to the earth (and not at the centre of the universe). This meant that although the Ptolemaic system was constructed from circular motion, there was something asymmetric about it. In conclusion, Copernicus's critique was aesthetic in nature: it was not about having a good fit to the data, but about having an elegant model.

The point I want to make is that Copernicus was not driven by an urge to create a system that was more accurate at predicting planetary motion. In fact, the initial heliocentric model made predictions that were on par with those of the Ptolemaic system. In addition, Copernicus insisted that planetary orbits were circular (and he avoided the equant), and therefore he needed even more epicycles than the Ptolemaic system. Since the system was modified several times an exact number is difficult to pin down, but it is estimated that Copernicus initially used 48 epicycles.

This is in complete contrast with the folk-science story which claims that the Ptolemaic system had to be amended with more and more epicycles in order to explain data on planetary motion, and that along came Copernicus, who fixed the problem and got rid of all epicycles by proposing a heliocentric model.

No, Copernicus took a step in the right direction, but it was not until Johannes Kepler discovered in 1609 that planetary orbits are elliptical that epicycles could be discarded from the heliocentric model.

I'm not quite sure about the take-home message of this post. But one thing that I've learnt is that the scientists we most often associate with the scientific revolution (which by extension reduced the power of the Church) were deeply devout and held metaphysical beliefs similar to those of Aristotle and Plato. For example, Kepler was convinced that the radii of the planetary orbits could be explained by nesting the Platonic solids within one another. And as we have seen above, Copernicus thought that the equant point disturbed the circular symmetry and therefore suggested a model containing only circles.

So I guess my conclusion is this: Copernicus was not right, he was just less wrong. And I guess this applies to all scientists. We can never expect to be right, just less wrong than our predecessors.




Thursday 1 September 2016

Parameter variation or a take on interdisciplinary science

This text is written from a personal perspective and I'm not sure how well it applies to other scientists. If you agree or disagree please let me know.

A standard tenet of experimental science is that the number of parameters one varies in an experimental set-up should be kept to a minimum. This makes it possible to disentangle the effects of different variables on the outcome of the experiment. It has been claimed that the pace at which physics has moved forward in the last century (and molecular biology in the last half century) is due to the ability of physicists to isolate phenomena in strict experimental set-ups. In such a setting each variable can be varied individually, while all others are kept constant. This is in stark contrast to e.g. sociology, where controlled experiments are much harder to perform.

In a sense the process of doing science is similar to an experiment with a number of parameters. The 'experiment' corresponds to a specific scientific question and the 'parameters' correspond to different approaches to solving the problem. However, we do not know beforehand which approach will be successful; if we did, the work would not qualify as research.

Most approaches or methods are in fact aggregates of many sub-methods. To give an example, say that I would like to describe some biological system using ordinary differential equations. The equations I write down might be novel, but I rely on established methods for solving them. I describe the system by trying (varying) the equations that govern the dynamics until I find ones I'm happy with. In this sense we use both existing and new methods when trying to answer a scientific question. However, in order to actually make progress we often minimise the number of novel methods in our approach: if possible we vary only one method and keep all the others fixed.
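
As a toy illustration of this point (my own hypothetical example, not tied to any particular project), the model equation below is the 'novel' part that one might vary, whereas the numerical solver is an established method that is kept fixed:

```python
# Toy illustration: the model (here logistic growth) is the part we vary;
# the solver (SciPy's solve_ivp) is an established method we keep fixed.
import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, n, r=0.8, K=100.0):
    """Candidate model for population growth; r and K are the knobs we tune."""
    return r * n * (1 - n / K)

sol = solve_ivp(logistic, t_span=(0, 20), y0=[5.0],
                t_eval=np.linspace(0, 20, 50))
print(sol.y[0, -1])   # the population approaches the carrying capacity K
```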

The problem with interdisciplinary research is that it often calls for novelty on the part of all the involved disciplines. In the case of mathematical biology, for example, we are asked to invent new mathematics at the same time as we discover new biology. Maybe this is not always the case, but to a certain extent these expectations are always present: a mathematician is expected to develop new mathematical tools, while a biologist is expected to discover new things about biology.

If both parties enter a project with the ambition of advancing their own discipline, this might introduce too much uncertainty into the scientific work (we are now varying two "parameters" in the experiment), which could lead to little or no progress. If instead the mathematician stands back, new biology can be discovered using existing mathematical tools; conversely, if the biologist stands back, existing biological knowledge and data can serve as a testing ground for novel mathematics.

So what is the solution to this problem? I'm not sure. But being clear about your intentions in an interdisciplinary project is a good starting point. And maybe, with an established collaborator, taking turns when it comes to novelty.








Back from parental leave

As of the 15th of August I'm back to science and teaching. It's been a great nine months, but now it's time to get serious about work again. This autumn I'm looking forward to lecturing on mathematical modelling and learning more about cell migration and the extracellular matrix.

Scientific Models

In 2009, when I was a postdoc at the Center for Models of Life at the Niels Bohr Institute, my former MSc supervisor Torbjörn Lundh came to visit me. As usual we had a great time together, but what I remember most from that visit is that we started talking about scientific models, and in particular about how little is actually written (outside philosophy of science) about modelling. Then and there we wrote down an outline of a book that, now 7 years on, has been published. A Swedish edition was in fact published in 2012, but now there is an English edition out with Springer.

Read more here and get your copy!


Thursday 10 March 2016

Travelling wave analysis of a mathematical model of glioblastoma growth

This paper has been on arXiv for a while (and the work dates back to 2011), but it was at last accepted for publication in Mathematical Biosciences after 1.5 years of review. The paper contains an analysis of a PDE model of brain tumour growth that takes into account phenotypic switching between migratory and proliferative cell types. We derive an approximate analytic expression for the rate of spread of the tumour, and also show (and this is in my view the most intriguing result) that the inverse relationship between wave-front steepness and speed observed for the Fisher equation no longer holds when phenotypic switching is considered. By tuning the switching rates we can obtain steep fronts that move fast, and vice versa.
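
To give a flavour of the kind of model involved, here is a crude finite-difference sketch of a generic 'go-or-grow' system, in which proliferative cells p divide and migratory cells m diffuse, with switching between the two phenotypes at rates alpha and beta. The equations, boundary conditions and parameters are illustrative guesses rather than the model analysed in the paper, but the sketch also shows how the front speed, for which we derive an approximate analytic expression, can be estimated numerically.

```python
# Crude sketch of a generic go-or-grow PDE system (illustrative equations
# and parameters only, not the model analysed in the paper).
import numpy as np

D, rho, alpha, beta = 1.0, 1.0, 0.5, 0.5     # motility, growth and switching rates
L, N, dt, steps = 200.0, 1000, 0.01, 5000
dx = L / N
x = np.linspace(0, L, N)

p = np.where(x < 5, 1.0, 0.0)                # proliferative cells seeded on the left
m = np.zeros(N)                              # migratory cells

for _ in range(steps):
    m_pad = np.concatenate(([m[0]], m, [m[-1]]))              # no-flux boundaries
    lap = (m_pad[:-2] - 2 * m + m_pad[2:]) / dx**2
    total = p + m
    p, m = (p + dt * (rho * p * (1 - total) - alpha * p + beta * m),
            m + dt * (D * lap + alpha * p - beta * m))

front = x[np.argmax(p + m < 0.5)]            # first point where the density drops below 0.5
print(front / (steps * dt))                  # rough estimate of the invasion speed
```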

Accepted version: http://arxiv.org/abs/1305.5036