https://www.youtube.com/watch?v=14SaROmQKCI&feature=emb_logo
How these systems can transition between different ordered states.
Actually, this is the first time I managed to set everything up in time — though I started half an hour earlier to set up the webcam and everything. We have professional equipment here, a real video conferencing system, but it has something like a hundred different cables and a couple of devices that all have to be connected in just the right way to make things work. But today I managed to set it up in time.
Okay, so then let's start our lecture today. Before I go on, I'd like to give you a reminder of what we actually did last time, because that's connected to what we're doing today. Let me just share the screen… so, there we go — ah,
slide 1
it works flawlessly. Perfect, okay, great.
So, just a little reminder of the last lecture. Last time we started thinking about how order emerges in non-equilibrium systems. The key insight from the last lecture is that many situations boil down to a balance between noise, which creates disorder, and the propagation of information about interactions through the system.

In thermal equilibrium this is formalized by the entropy and the energy, which together give rise to the free energy that we need to minimize; once we minimize the free energy, we essentially know what is happening. And when we looked at this equilibrium system of pointers — also called the XY model — we saw that in all dimensions smaller than or equal to two, fluctuations destroy order: if we prepare the system in a homogeneous state where all arrows point in the same direction, and then twist one arrow or otherwise perturb the system, this destabilizes the order on large scales. We calculated how such perturbations propagate through the system, and we saw that they build up the further you go through the system — hence this divergence with the distance r that we had last time.

This idea, that you cannot build up long-range order in equilibrium, is formalized by the Mermin–Wagner theorem, which basically says that even if you had long-range order, it doesn't cost any energy to have a very slow perturbation of your spins. This very slow perturbation is called a Goldstone mode, and you can always get these Goldstone modes in equilibrium conditions if your symmetry — that is, your spins, your microscopic degrees of freedom — is continuous.

Then we went on to non-equilibrium systems, and we asked what happens if these arrows are moving: now they are not only pointing in a direction, they are also moving in that direction. Then information about the alignment, about the direction, does not only spread diffusively — very inefficiently, as in an equilibrium system — but can spread very quickly through convective flows through the entire system, and thereby give rise to long-range order even for dimensions smaller than or equal to two. So in non-equilibrium systems we have possibilities to spread information about alignment, or to suppress fluctuations, to suppress noise, that we don't have in equilibrium conditions — and therefore we can get ordered states in non-equilibrium systems even where we cannot have them in very similar equilibrium systems.
slide 2
So today I want to go one step further — and actually one step back. Let's take a step back before I begin with today's lecture, just to see where we are in the course. We just got the idea of how order arises in non-equilibrium systems; today we'll talk about how these systems can transition between different ordered states, so we'll be talking about bifurcations and phase transitions.

In this lecture today we will focus on very large systems, where the noise term is not important. In the next two or three lectures we will take into account what noise does to ordered states and to the transitions between phases, and for this we will need methods from renormalization group theory, which we will introduce there. After that we're done with most of the theoretical-physics aspects of this course, and we'll start taking a different approach and ask how we can actually see order in big data sets: somebody gives you a terabyte of data — how can you actually identify these degrees of freedom? If you have not just a spin system, with one degree of freedom for each spin, but 20,000 degrees of freedom, how can you actually see whether you have some kind of collective state in your system? So we'll have two or three lectures on data science. And then at the end of the course, at the end of January, we'll finish up by putting it all together and seeing how we can switch between theory and data science and back, and how to actually generate hypotheses from the data sets that are currently out there.
Okay, so for today's lecture I don't want to ask whether we can have order or not — I want to ask what kind of order we have. For this I want to come back to the formalism that allows us to characterize, quite generally, larger systems that have spatial degrees of freedom, that are spatially resolved. We have a field, an order parameter — it's called an order parameter field — φ(x, t), which for each coordinate in space gives us the value of, for example, some concentration, or the number of infected people in Dresden, and so on.

Then we came up with something that is actually standard in the literature, but is really just a rather inconvenient way of writing down partial differential equations: we said we can get a large class of dynamical systems if we take some functional F that depends on φ and take the functional derivative with respect to φ. With this we can create, on the right-hand side, all kinds of terms that are functions of φ and of derivatives of φ, and then we have our noise term. If our order parameter — say, the concentrations in something like a chemical reaction — is not conserved, so that we can actually increase the total concentration of something, then we have equations of the first kind, called model A in this classification scheme. And if we are just moving concentrations around, without actually taking any particles out of the system, then we have these so-called conserved systems, which are described by equations of the second kind, called model B.

An example of such a functional F is to write it as an integral of some potential plus a squared-gradient term that gives the diffusion term once we take the functional derivative with respect to φ. This potential is then also related to a local force — that's a formal analogy. Of course, this way of writing such systems comes from equilibrium statistical physics, where F is actually some free energy, or generalized free energy — for example the φ⁴ theory, the Ginzburg–Landau theory. There you have an equilibrium system and you want to know how it evolves; you take the functional derivative with respect to your fields, and then you know how your free energy gives rise to the dynamics of the field. So this comes from equilibrium, but it's actually a very inconvenient way for us to write down these kinds of equations. I just wanted to show you this classification scheme; in the following slides I'll write out the equations directly, without going via these functional derivatives.
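As a side note, the non-conserved "model A" dynamics can be sketched numerically in a few lines. This is only an illustrative sketch, not code from the lecture: I assume F[φ] = ∫ [V(φ) + (D/2)(∂ₓφ)²] dx with a double-well potential V(φ) = −(a/2)φ² + (b/4)φ⁴, so that, with the noise dropped, ∂ₜφ = −δF/δφ = aφ − bφ³ + D ∂ₓ²φ; all parameter values are made up for the demonstration.

```python
import numpy as np

# Model A (non-conserved order parameter), noise term dropped:
#   dphi/dt = -dF/dphi = a*phi - b*phi**3 + D * d2phi/dx2
# All parameter values are illustrative, not from the lecture.
a, b, D = 1.0, 1.0, 0.5
N, dx, dt, steps = 100, 0.2, 0.01, 5000

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal(N)   # small random initial condition

for _ in range(steps):
    # periodic finite-difference Laplacian
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    phi = phi + dt * (a * phi - b * phi**3 + D * lap)

# The field relaxes toward the minima of the double-well potential,
# phi = ±sqrt(a/b) = ±1, forming ordered domains.
print(float(np.abs(phi).mean()))
```

The time step is chosen to respect the explicit-diffusion stability bound dt ≤ dx²/(2D); for the conserved model B one would instead evolve ∂ₜφ = ∂ₓ²(δF/δφ).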
And as I said, in this lecture we'll first look at transitions between non-equilibrium states in systems where this noise term is not important. Noise, for example, is not important if the temperature of an equilibrium system is zero or very small; in a typical biological system, noise is typically not important if you have a very large number of particles contributing to a given process. So in the first step we ask: how can we go between different non-equilibrium states — how can we switch, so to say, between different kinds of order? In the next lectures we'll then ask what this noise does — and of course I wouldn't spend a couple of lectures on noise if it didn't do very interesting things. But for now we ignore the noise. What we'll do is also called mean-field theory: we just look at the partial differential equations that are driving these concentration fields φ(x, t), in the limit of low temperature or very high particle numbers.
slide 3
So, for the very first part of this lecture I'd like to go to an even simpler framework: we don't even consider space. We say the system is well mixed, and if the system is well mixed, we can neglect spatial derivatives, because the system is in a homogeneous state. Think of a chemical reaction that we are constantly stirring, so that every molecule very rapidly travels through the entire system; then we basically have a homogeneous system where concentrations do not depend on space, and if the concentrations do not depend on space, the spatial derivatives become zero. These are the kinds of systems we are looking at. Formally, again in this notation of functional derivatives, we can neglect the spatial derivatives in the functional, and what we then get is an ordinary differential equation for φ(t) that no longer depends on space: the time evolution of this quantity is just described by some function f. Of course, I could have started directly with this equation, without the functional derivatives and so on. What we'll be looking at in this first part of the lecture are nonlinear differential equations — nonlinear dynamics — and I'll give you a brief overview, because it gives you insight not only into the renormalization part that we will do before Christmas, but also into the second part of this course, where we will look at how different spatial structures can emerge.
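The well-mixed reduction above — one ordinary differential equation dφ/dt = f(φ) — can be integrated with a minimal forward-Euler loop. This is a generic sketch; the particular right-hand side f(φ) = φ − φ³ is just an illustrative choice, not a system from the lecture:

```python
def integrate(f, phi0, dt=1e-3, t_end=20.0):
    """Forward-Euler integration of dphi/dt = f(phi) for a well-mixed system."""
    phi = phi0
    t = 0.0
    while t < t_end:
        phi += dt * f(phi)
        t += dt
    return phi

# Illustrative nonlinear right-hand side: f(phi) = phi - phi**3,
# which relaxes to the stable fixed point phi = 1 from any positive start.
print(integrate(lambda p: p - p**3, phi0=0.1))
```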
So we have these nonlinear differential equations: on the left-hand side the time evolution of a scalar, and on the right-hand side some nonlinear function that describes this time evolution. Of course, a typical source of nonlinear systems is biology — in biology everything is nonlinear. A very simple system you can look at is, for example, how a gene interacts with itself: the self-activation of a gene.

I mentioned this already: you have a gene, which is part of the DNA. The DNA is very long, and we're looking at a very short part of the total DNA — that's the gene. At the beginning of this gene there's a promoter, and the promoter turns the gene on or off. When the promoter turns the gene on, the gene produces molecules — actually via multiple steps, but in the end you have something called a protein. (Of course I've drawn this very schematically; in reality it's more complicated than that.) And what does this protein do? It can degrade, but it can also do fancy things. The protein is produced and swims around in the cell, and we have multiple copies of it here, because the gene keeps producing proteins. Now suppose this protein also decides whether the gene is on or off. A typical situation is that the protein binds to the promoter, the start site of the gene — and only if we have two of them together can they start the gene. That means pairs of these proteins have to find each other; once a pair has formed, it can bind, and then the gene starts producing proteins again, which in turn couple back to the gene — so it's a feedback loop.

Typically, the kind of equation you get from this, for the concentration — the number of these proteins in the cell — has one term that describes the activation of the gene. This activation is nonlinear: you get a contribution of the form y² / (1 + y²). This function is the activation part. The y² describes that you have to form pairs: the more pairs you can build, the more likely you are to express the gene, to turn it on. But we also have the situation that if there are too many of these proteins, there's a crowding effect on the DNA: they can't all bind at the same time, so they have to compete for binding. Even if you have a million, or infinitely many, copies, they cannot all bind simultaneously to this region, because there is only a finite amount of space. That's why we have the other part, which saturates: for very high values of the protein concentration y, you cannot do any better. This is called a Hill function, and it typically looks like the sigmoidal curve shown here — clearly nonlinear.

Then we have the second term, which describes degradation — how many copies of the protein we lose in a given amount of time. This is very simple, because the more you have, the more you lose: that's just −y. We'll look at this equation in detail later. This is just an example of how, in biological systems, these nonlinear differential equations emerge automatically almost all the time, already in the basic building block of many biological systems: the expression of a gene.
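The feedback loop just described can be put into code directly. A minimal sketch, assuming the form dy/dt = r·y²/(1 + y²) − y; the activation strength r = 3 and the two starting concentrations are illustrative choices, not values from the lecture:

```python
# Self-activating gene: dy/dt = r * y**2 / (1 + y**2) - y
# The y**2 encodes the need for protein *pairs* at the promoter, the
# saturation encodes crowding (Hill function), and -y is degradation.
def simulate(y0, r=3.0, dt=1e-3, t_end=50.0):
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * (r * y**2 / (1.0 + y**2) - y)
    return y

# For r = 3 the fixed points are y = 0 and y = (3 ± sqrt(5))/2,
# i.e. roughly 0.38 (unstable) and 2.62 (stable).
low = simulate(0.1)   # start below the unstable point -> gene stays off
high = simulate(1.0)  # start above it -> gene switches on
print(low, high)
```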
So — what can we now do with these nonlinear equations? Ah, I had a fancy slide transition here that I wouldn't have needed.
slide 4
Okay, so let's move on: what can we now do with such equations? The solutions of these nonlinear equations live in some space, some configuration space, and in this space we move around as the system evolves. The space of all possible trajectories, all possible solutions of the system, is called the phase portrait: we start at some initial condition and evolve along a trajectory, if we manage to solve the equation.

Now, there are some points that are particularly important in this picture, namely fixed points. These fixed points are points where the time derivative is zero. Once you are at a fixed point, you cannot get out again, because the time derivative is zero — you stay there. And once we know the fixed points, we already know a lot about the dynamics of a nonlinear system.

At the bottom you can see how you can understand the dynamics of a nonlinear system just by graphical analysis. In this diagram, the x-axis is just the concentration y — we're not looking at any particular system here; I'll use y for simplicity — and on the y-axis we have whatever is on the right-hand side of our differential equation, which is the time derivative of y. We can plot this function: we ask how the right-hand side of our differential equation depends on the concentration y, and we get some function, for example the one I plotted here. This function will cross the zero line — it will be zero at certain points — and wherever this function is zero, we have a fixed point: that's where the time derivative is zero.

Then we can ask: are these fixed points stable or unstable? Once we're there, do we stay there forever, or is a little kick enough to get out again? How stable are these fixed points? That's also something you can see very easily. Look at this fixed point here: there, the time derivative ẏ is zero. If you go a little bit to the right, the time derivative becomes negative, so we go back into this point; if we go a little bit to the left, the time derivative becomes positive, and we also get pushed back into this point. So this point is stable: if we go to the right we get pushed back in, and if we go to the left we also get pushed back, by the time derivative. And this is just because the slope of the function here is negative. The slope of this function — the right-hand side, which is of course the same as the time derivative — at the fixed point tells us about its stability. This one is a stable fixed point: we always come back. This one is an unstable fixed point: we go a bit to the right, the time derivative is positive, so we get pushed even further to the right; we go a bit to the left, the time derivative is negative, and we get pushed even further to the left.

So these fixed points are very important, and the stability of these fixed points tells us where our system will evolve. Just graphically, you can see that if I start my system here, the dynamics will go into the stable fixed point and stay there.

Now — I said these fixed points are very important, and their stability characterizes where the system will go, characterizes the dynamics of the system — what happens if we change parameters? If we change parameters, then the number of fixed points, or the kind of fixed point — its stability — can change. And if this happens, if the number or the stability of the fixed points changes, then we are talking about a bifurcation. And what is this parameter? We'll call it r from now on: some parameter that makes the number or the stability of the fixed points change is what we call a control parameter. In many physical systems it is typically related to how far you actually are out of thermal equilibrium — this r could, for example, be the difference in temperature between two boundaries of the system.

So these are bifurcations, and these bifurcations can be classified. I will now have a look at a few examples of these bifurcations.
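The graphical recipe just described — find where f(y) crosses zero, then read off the sign of the slope there — can be automated. A small sketch (the sample right-hand side is an illustrative cubic, not a system from the lecture):

```python
import numpy as np

def fixed_points(f, y_grid):
    """Locate zeros of f on a grid via sign changes, then classify each:
    slope f'(y*) < 0 -> stable, slope f'(y*) > 0 -> unstable."""
    pts = []
    for a, b in zip(y_grid[:-1], y_grid[1:]):
        if f(a) * f(b) < 0:               # sign change: a root lies in (a, b)
            lo, hi = a, b
            for _ in range(60):           # refine the root by bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            y = 0.5 * (lo + hi)
            eps = 1e-6
            slope = (f(y + eps) - f(y - eps)) / (2 * eps)
            pts.append((y, "stable" if slope < 0 else "unstable"))
    return pts

# Illustrative cubic: dy/dt = y - y**3 has fixed points at -1, 0, +1.
print(fixed_points(lambda y: y - y**3, np.linspace(-2.1, 2.1, 400)))
```

The grid deliberately avoids landing exactly on the roots, since the sign-change test would miss an exact zero at a grid point.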
slide 5
The simplest bifurcation — or one of the simplest bifurcations — you can get is if you consider nonlinear equations of this kind. For each bifurcation I'll show you the simplest differential equation that gives rise to it; this simplest equation is often called a normal form.

So let's have a look at this equation, and suppose first that r is smaller than zero. We plot the same thing as on the previous slide: on the y-axis we have the time derivative, which is just equal to whatever is on the right-hand side, and on the x-axis we have our concentration. If r is negative, we just have a simple parabola that is shifted down. Then we can make the same argument as before: we are at this fixed point, we go to the right, the time derivative becomes negative, so we get pushed back into this fixed point — this fixed point is stable. And on the right-hand side we have another fixed point, which is unstable.

If r is exactly equal to zero, then the parabola just touches the axis, and we have a strange fixed point here at the bottom: we're not really sure whether it's stable or unstable — it's right at the boundary between stable and unstable. It is attracting from the left-hand side and repelling to the right. The typical notation is that filled circles denote stable fixed points and open (white) circles denote unstable fixed points; the idea here is that this half-stable fixed point is stable from the left and unstable to the right.

And if we set r to positive values, then we don't have any fixed points at all, and our system will just go to infinity: the time derivative is always positive, it just runs off to infinity, and there are no fixed points.

What we can now do is plot the location of these fixed points — which I've given you for three specific values of the parameter r — continuously as a function of r. That is depicted in the so-called bifurcation diagram. Here I'm showing you the bifurcation diagram of the nonlinear differential equation you see at the top. In this case you can see the locations of the fixed points: here we have a stable fixed point at negative values and an unstable fixed point at positive values, and as we increase r, these two fixed points merge, and we end up, on the right-hand side, in a state where we have no fixed points at all and just go off to very high values of the concentration y. So this is how to read these bifurcation diagrams.
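The picture above can be checked in a few lines, assuming the saddle-node normal form dy/dt = r + y² (the r values below are arbitrary test points): for r < 0 the fixed points are y* = ±√(−r), the negative one stable and the positive one unstable, and for r > 0 there are none.

```python
import math

def saddle_node_fixed_points(r):
    """Fixed points of the saddle-node normal form dy/dt = r + y**2."""
    if r > 0:
        return []                      # y**2 = -r has no real solution
    y = math.sqrt(-r)
    # slope of f(y) = r + y**2 is f'(y) = 2*y:
    # negative at -y (stable), positive at +y (unstable).
    # At r = 0 the two merge into one half-stable point.
    return [(-y, "stable"), (y, "unstable")]

print(saddle_node_fixed_points(-1.0))  # two fixed points, at -1 and +1
print(saddle_node_fixed_points(0.5))   # past the bifurcation: none
```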
slide 6
And just to give you an example, let's come back to the self-activation of the gene. Here we have the nonlinear differential equation again: the protein concentration, the activation term — this nonlinear term, which is something you can actually derive in a longer calculation, but it's already clear from the pictures that something nonlinear is coming up — and the degradation term. We plot both terms separately, and we get something like this: here is the activation term, here is the degradation term, and if we sum them up, we get what you see in the middle — basically the same plot I showed you on the last slide, the right-hand side as a function of y.

We see that for small values of r we have just one fixed point, and as we increase r, the nonlinear term becomes more and more important, and we start having intersections with the x-axis, with zero. And if we have a very large, strong activation — very strong feedback of the gene on itself — we have multiple fixed points.

On the right-hand side is the bifurcation diagram. Here are the fixed points, and the bifurcation diagram looks as follows — it's just a translation of the red points from the previous picture. For very low values of r you have a stable fixed point at zero. Then, suddenly, this nonlinearity — this sigmoidal curve — kicks in, becomes important, and you start intersecting with zero. That happens at what's called a bifurcation point. If you go past this bifurcation point, you reach a state with a stable state up here, an unstable state here, and the stable state at the bottom, which just remains there all the time at zero.

So this is an example of a saddle-node bifurcation — it has the typical signature of the saddle-node bifurcation, just a little bit more complicated. What you see here is what happens if you turn on the self-activation, if you turn on the nonlinear term: you go through this bifurcation into a regime where your system has two stable fixed points — here and here. And if you have two stable fixed points separated by an unstable one, then you have a switch. So this is just an example of how biological systems, with this type of gene, can make use of these nonlinear effects — which they get here, for example, by requiring pairs of proteins to bind to the starting region of the gene — in this case to build a switch. And with a switch — it's like a bit — you can actually store memory in a stable way. This is one of the simplest ways that biological systems, that a cell, can store information. Okay, so let's go on; I'll just show you some other bifurcations.
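The switch regime can also be found numerically by sweeping the control parameter. A sketch, again assuming the form dy/dt = r·y²/(1 + y²) − y (for this particular form the nonzero fixed points solve y² − r·y + 1 = 0, so they exist only for r ≥ 2): we count the stable fixed points at each r and watch the system go from monostable to bistable.

```python
import numpy as np

def n_stable(r, y_grid=np.linspace(1e-4, 10, 20001)):
    """Count stable fixed points of dy/dt = r*y**2/(1+y**2) - y,
    including the 'off' state y = 0 (stable: near 0 degradation -y
    dominates the quadratic activation)."""
    fy = r * y_grid**2 / (1 + y_grid**2) - y_grid
    # a downward zero crossing of the right-hand side is a stable fixed point
    down = np.sum((fy[:-1] > 0) & (fy[1:] < 0))
    return 1 + int(down)

# Below r = 2 the gene is monostable (off); above, it is a bistable switch.
print(n_stable(1.5), n_stable(3.0))
```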
slide 7
33:40
just show you some other
33:42
bifurcations now so we have here
33:45
certainly different differential
33:46
equation now with
33:48
r times y plus y to the
33:52
power of 3 and then if you just look at
33:55
the right hand side and you just do this
33:57
graphical analysis
33:58
you will see that for small values of r
34:00
you have a single fixed point
34:02
single intersection and then if you look
34:05
at the slope
34:07
now you can see that this is actually
34:09
stable as you go to the right
34:11
and the derivative becomes negative so
34:13
you get
34:14
pushed back you go to the left and the
34:16
derivative becomes positive it always
34:18
pushes you in the opposite direction
34:20
so one stable fixed point so
34:23
if r is exactly equal to zero you're
34:26
still stable but you're in this real
34:28
state
34:29
where you have um a flat
34:33
as your function goes to tangential
34:37
to the zero axis to the y-axis
34:40
and um that oh sorry
34:45
i activate it and that reminds us a
34:48
little bit
34:49
to second order phase transitions you
34:52
know so that you get
34:53
tangentials because you're in a state
34:54
where you whether you don't really know
34:56
where they should go left or right
34:59
now so that this function is flat you
35:01
can go
35:02
you can go left and right but uh it's
35:05
not really punished
35:06
so you can you can you can go left here
35:09
but because this function is rather flat
35:11
now you can
35:12
stay there for a long time you go right
35:14
and the function is very flat
35:16
now it's tangential you can also stay
35:18
there
35:19
and that's probably what you know from
35:21
the potential
35:22
in the isaac mode for example second
35:25
order phase transitions
35:26
where also at the critical point the
35:29
potential becomes flat
35:30
and then fluctuations to the left and to
35:33
the right and spins
35:35
are not punished anymore and you get
35:36
these long range
35:38
correlations in in the fluctuations and
35:41
all these weird effects of criticality
35:44
now if you increase r further
35:48
then you get something like this here
35:50
you've got two fixed points
35:55
you get two fixed points two stable ones
35:57
and an unstable one
35:58
in the middle and the bifurcation
36:00
diagram looks like this here
36:02
uh so for low values of r you have
36:05
one stable fixed point at zero
36:09
and then as r increases beyond
36:12
uh critical value beyond r equals zero
36:16
you get this branching into two stable
36:19
states
36:20
that are separated by an unstable state
36:23
and if you now compare again to the
36:25
ising model
36:27
uh this is exactly how the magnetization
36:29
looks like
36:30
as a function of the temperature of the
36:33
inverse temperature
36:34
this looks like an ising model where you
36:36
lower the temperature
36:42
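The graphical analysis above can be checked numerically. A minimal sketch (an editor's illustration, not from the lecture), assuming the supercritical pitchfork form dy/dt = r·y − y³ discussed here: the fixed points are y* = 0 and, for r > 0, y* = ±√r, and stability follows from the sign of the slope f′(y*) = r − 3 y*².

```python
import math

def fixed_points(r):
    # fixed points of dy/dt = r*y - y**3:
    # y* = 0 always exists; y* = +/- sqrt(r) only for r > 0
    return [0.0] + ([math.sqrt(r), -math.sqrt(r)] if r > 0 else [])

def stable(y_star, r):
    # a fixed point is stable when the slope f'(y*) = r - 3*y*^2 is negative,
    # so small perturbations get pushed back
    return r - 3 * y_star**2 < 0

print(fixed_points(-1.0), [stable(y, -1.0) for y in fixed_points(-1.0)])
# -> [0.0] [True]            one stable fixed point below threshold
print(fixed_points(1.0), [stable(y, 1.0) for y in fixed_points(1.0)])
# -> [0.0, 1.0, -1.0] [False, True, True]   the pitchfork above threshold
```

This reproduces the bifurcation diagram: one stable branch at zero for r < 0, and two stable branches separated by an unstable one for r > 0.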
so this was a so-called supercritical
36:44
pitchfork
36:45
bifurcation and besides the supercritical
36:48
pitchfork bifurcation there's
36:50
also a subcritical
36:52
pitchfork bifurcation now that looks um
slide 8
36:55
like this here and there is an error in
36:58
this
36:59
formula let me just check
37:06
um
37:08
here sorry
37:11
there's a minus sign
37:15
it should be minus here
37:19
now for this to make sense for these
37:21
curves to make sense it needs to be a minus
37:23
and now we have the same thing but with
37:25
a plus
37:27
yeah and if we have this plus then we
37:29
just turn around
37:30
the diagrams that we have the previous
37:33
slide
37:34
so for low values for negative values of
37:38
control parameter r uh we get three
37:40
fixed points
37:42
one stable fixed point in the middle and
37:44
two unstable fixed point
37:46
points at boundaries as we increase
37:49
r we have one unstable fixed point
37:53
at uh y equals zero and
37:56
uh this fixed point stays unstable
38:00
as we increase the value of r
38:04
so here's the bifurcation diagram
38:07
so we start with low values of r where
38:09
you have a stable
38:11
fixed point at the concentration zero
38:15
and two unstable fixed points
38:19
around that so because they're unstable
38:22
you have to go this way
38:24
as well and then as you increase
38:28
this value of r you go to a state
38:31
where your fixed points the stable fixed
38:34
point you've been in
38:35
suddenly becomes unstable yeah and
38:40
what you have here is now that if you go
38:43
here so here you stay at zero
38:45
you stay at zero all the time and then
38:48
you go to this state and then you don't
38:50
go to something small
38:51
but you immediately to go go to
38:53
something very large to infinity if
38:56
there's no other thing that
38:57
stops you from doing that and that's a
39:00
sub critical bifurcation
39:01
where it's because you have this uh this
39:04
uh
39:06
this discontinuity in the state of your
39:08
system so here it was zero
39:11
and suddenly it becomes very something
39:12
very large
39:14
if you compare that to a supercritical
39:16
bifurcation
39:18
our state was zero for small values of r
39:21
and then continuously increased so if
39:24
this was a
39:25
was resembling a second order phase
39:27
transition
39:28
now this is resembling a first order
39:30
phase transition
39:31
right in the ising model for example if
39:34
you change the
39:35
magnetic fields at low temperatures
39:39
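The discontinuous jump can be seen in a small numerical sketch (an editor's illustration, not from the lecture): here I integrate dy/dt = r·y + y³ − y⁵, where the −y⁵ saturation term is an added assumption standing in for the "other thing that stops you" mentioned above, so the subcritical runaway settles at a finite value instead of going to infinity.

```python
def integrate(r, y0=1e-3, dt=1e-3, steps=100_000):
    # forward-Euler integration of dy/dt = r*y + y**3 - y**5
    # (the -y**5 saturation is this sketch's assumption; the lecture's
    #  subcritical equation stops at the y**3 term)
    y = y0
    for _ in range(steps):
        y += dt * (r * y + y**3 - y**5)
    return y

print(integrate(-0.5))  # below threshold: stays at essentially zero
print(integrate(+0.1))  # just above threshold: jumps to a finite value near 1
```

In contrast to the supercritical case, the state does not grow continuously from zero: as r crosses zero the system jumps discontinuously to the upper branch, like a first-order transition.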
so so what's so i'm making this
39:42
correspondence to
39:44
ising models and uh equilibrium systems
39:47
and phase transitions here
39:49
so what's the difference between a
39:50
bifurcation and a phase
39:52
transition so
39:56
bifurcations actually resemble phase
39:58
transitions
39:59
in specific cases namely when
40:05
our when this here is actually a free
40:09
energy yeah and that's
40:10
that's that's the beautiful thing about
40:12
this writing uh these nonlinear
40:14
equations in this
40:15
specific form yeah so if this here is
40:18
actually a free energy
40:20
like the ginzburg-landau free energy
40:22
function for example that describes
40:23
things like the ising model
40:25
then the bifurcations correspond our
40:29
generalization of phase transitions
40:33
now there are many
40:36
possible bifurcations including
40:40
bifurcations that have
40:42
imaginary components so that give rise
40:45
to
40:45
imaginary components then that means
40:48
that you have
40:49
oscillations in time so that's that's
40:52
also something that
40:53
that you can have in these bifurcations
40:55
and but we'll not be dealing with that
40:57
i'll show you one more kind of
40:59
bifurcation and that's
41:01
a transcritical bifurcation i'm
41:03
showing you that because it's relevant
41:04
for epidemics
41:06
and for the things that we'll be doing
41:07
before christmas
41:09
yeah and because we're now probably
41:11
going into lockdown
41:13
sooner rather than later and we'll
41:15
also
41:16
spend the rest of the time before
41:18
christmas working on
41:20
epidemic models i'll explain to you
41:22
renormalization on
41:23
epidemic models yeah and this is an
41:26
example of an epidemic model i'll show
41:28
you in the next slide
41:29
why this is the case this is just again
41:31
now the simplest equation that gives you
41:34
this kind of behavior so r times y minus
41:37
y squared but now you do the usual
41:40
graphical analysis
41:42
and what you see is that you have this
41:44
inverted
41:45
parabola and if r is smaller than zero
41:49
you're gonna have something like this
41:50
here
41:51
and if you increase r yeah
41:54
then you move uh to a single fixed point
41:58
and then you go to a stable fixed point
42:01
at positive values of r
42:04
so and this is the bifurcation diagram
42:06
here at the bottom
42:08
i'll show you how to in the next slide
42:10
i'll give you an example
42:11
we have a stable branch for low values
42:14
of r
42:15
and then you go and an unstable branch
42:18
here for negative values of
42:19
y because stable at zero at negative
42:22
and unstable negative values and then
42:25
you flip
42:25
things around and the unstable branch
42:29
the zero point becomes unstable and
42:33
this diagonal line here this linear
42:36
state
42:36
becomes stable that's called a
42:38
trans-critical bifurcation and you just
42:41
flip things around basically
42:44
now let's have a little look at such an
42:47
example of a trans-critical
42:49
bifurcation now so suppose
slide 10
42:53
you have a disease now let's not give it
42:56
a name
42:57
so last year last year i gave a lecture
43:00
and i introduced disease models a few
43:03
series of disease models
43:05
and it was february last year and
43:08
these disease models at this point i
43:11
called it the wuhan
43:13
virus, the wuhan model,
43:16
because at this point
43:17
the pandemic was restricted to this one
43:19
city in china
43:21
but now it's a little bit more general
43:24
and that's now we call it i don't know
43:26
the world virus or whatever now so this
43:30
model looks very simple
43:32
now so you have two kinds of people and
43:34
also the the infected ones
43:36
and the susceptible ones not infected
43:39
ones
43:40
they carry the disease they carry the
43:41
virus and the susceptible ones
43:44
they are healthy but they can catch the
43:48
virus
43:50
it's the simplest disease model you can
43:52
think about it's called also called the
43:53
because you have these two uh two
43:56
letters called the s
43:57
i model or contact process now
44:01
we can write down some simple chemical
44:04
reactions
44:05
some pseudo chemical so if an
44:08
infected person meets a susceptible
44:10
person or a healthy person
44:12
then with a rate lambda the the
44:15
susceptible person
44:16
turns into another infected person
44:20
and we have two infected persons at the
44:23
end of this
44:24
reaction yeah and then
44:28
the second thing that can happen is that
44:29
an infected person at some point
44:31
recovers
44:32
you know and if you recover you turn an
44:35
infected person
44:37
back to a susceptible one now that's the
44:40
simplest thing you can imagine in terms
44:42
of disease spreading
44:43
and now we can have a simple look at uh
44:46
how we
44:47
understand the non-linear dynamics of
44:50
the system
44:54
so first we just write down differential
44:57
equations
44:58
what is the time derivative
45:02
of the concentration of these i people
45:06
now we can write down this time
45:07
derivative so this the number of
45:09
infected people
45:11
increases with the rate lambda
45:14
and this rate of increase is a
45:17
proportional to the probability that is
45:19
a susceptible person meets
45:22
gets in touch with an infected person
45:25
and the more infected and the more
45:27
susceptible people we have
45:29
the higher is the spreading rate so this
45:32
is
45:32
proportional to s times i
45:36
and then an infected person can
45:39
turn back into a susceptible one
45:43
so that means we have lambda s i
45:47
minus mu i
45:50
we can write down a similar equation for
45:52
the susceptible people
45:54
d over dts and that's just the reverse
45:58
now so the the negative of this so
46:01
we lose susceptible people to
46:05
infections at rate lambda times
46:08
s times i and plus
46:12
whenever an infected people uh
46:15
recovers
46:19
we get another susceptible one
46:22
and uh what we also say is that's uh
46:25
such a simplification this is an
46:28
important simplification
46:29
is that the total number of people
46:33
stays constant so we say the
46:37
total concentration of both is constant
46:39
so an infected turns into a susceptible a
46:42
susceptible turns into an infected
46:43
but actually people don't die from the
46:46
disease
46:47
so the number of people that we have
46:50
remains constant
46:52
now we can plug this condition in
46:56
then we get d over dti
46:59
is equal to lambda i times 1 minus i
47:03
now we just plug this in minus
47:08
mu i and then we can get the fixed points
47:12
by just setting this to zero
47:15
the fixed points are given by i
47:18
times lambda 1 minus i
47:23
minus mu is equal
47:27
to zero so this is not the imaginary i
47:29
of course that is just the infected
47:32
and uh so this is the condition if we
47:34
set the left-hand side of these
47:35
equations to zero
47:37
and then we get a condition for the
47:38
fixed point and then we can solve this
47:41
and say okay i one is zero
47:45
the first fixed point is at zero
47:47
so we can solve this equation by setting
47:49
i to zero
47:50
we can solve this equation also by
47:53
setting
47:54
i to lambda minus
47:57
mu over lambda now that's another
48:00
solution
48:02
which is equal to one minus mu over
48:06
lambda this tells us already that this
48:09
mu over lambda is something important
48:11
the ratio between the time scales the
48:13
rates of these processes
48:15
is something important because it pops
48:17
up here
48:18
in the fixed points as a ratio
48:22
and now now we say if you evaluate now
48:25
the right hand side
48:26
of this equation at the fixed point to
48:28
get the stability
48:30
yeah so the derivative of
48:33
this right-hand side that's called f
48:36
and that is just given by lambda
48:40
1 minus 2i minus
48:43
mu and
48:46
now we evaluate this time derivative
48:49
this derivative
48:50
at the fixed point so the first fixed
48:53
point
48:54
is zero
48:57
and at this fixed point we have lambda
49:01
minus mu where we plug that in and the
49:03
second fixed point is
49:05
f prime of 1 minus mu over lambda
49:09
and then we have that this is
49:13
mu minus lambda
49:16
so this looks a little bit symmetrical
49:18
right and this reminds of
49:20
us of the this transcritical bifurcation
49:23
that we had
49:24
and uh if we plot things then we see
49:28
that that's actually what's happening
49:30
now we plot the fixed points
49:31
now this is i sorry
49:35
i star as a function
49:39
of mu over lambda
49:42
all right so then we have the stable
49:44
fixed point
49:46
so we see here that there's that the
49:48
signs of this fixed point
49:51
whether they're stable or not that
49:53
depends
49:54
on whether this what is the whether mu
49:58
is larger than lambda or not
50:01
so something is happening here at one
50:04
and now we plot this fixed point so one
50:06
is at zero
50:07
and for low values if uh
50:11
if lambda is larger than mu yeah
50:14
then this fixed point here is unstable
50:19
that's the dashed line and the other
50:22
fixed point
50:23
just has the opposite stability it's
50:26
stable
50:27
goes like this and then
50:31
if at that lambda here at mu over lambda
50:35
equals to one we have this change where
50:38
now
50:40
this zero fixed point this one here
50:44
becomes stable and the other fixed point
50:49
becomes unstable so this simple disease
50:52
model shows a transcritical
50:54
bifurcation and if we now take into
50:58
account fluctuations that we will
51:00
do that before christmas we'll see that
51:02
this is actually
51:05
that this model is actually one of the
51:08
fundamental models
51:09
to understand criticality and
51:12
non-equilibrium systems so this
51:14
simple model describes the large class
51:16
of
51:18
critical behaviors in non-equilibrium
51:20
system and we'll see that in the
51:22
following lectures
51:24
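The whole SI calculation above can be checked with a few lines of numerics. A sketch assuming the mean-field equation derived in this section, dI/dt = λ·I·(1−I) − μ·I:

```python
def simulate_si(lam, mu, i0=0.01, dt=0.01, steps=100_000):
    # forward-Euler integration of the SI (contact-process) mean-field
    # equation from the lecture: dI/dt = lam*I*(1-I) - mu*I
    i = i0
    for _ in range(steps):
        i += dt * (lam * i * (1 - i) - mu * i)
    return i

# above threshold (lam > mu): converges to the endemic state I* = 1 - mu/lam
print(simulate_si(lam=2.0, mu=1.0))   # -> close to 0.5
# below threshold (lam < mu): the disease dies out, I* = 0
print(simulate_si(lam=0.5, mu=1.0))   # -> close to 0
```

Above threshold the infection settles at the endemic fixed point I* = 1 − μ/λ; below threshold it dies out — exactly the transcritical exchange of stability at μ/λ = 1 shown in the bifurcation diagram.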
so this was just uh briefly a discussion
51:27
of what can happen
51:29
if homogeneous states change if you
51:32
don't have spatially structured states
51:34
and uh so that was something that they
51:37
basically the foundations of nonlinear
51:39
dynamics
slide 11
51:40
many of you will already have heard of
51:41
that and
51:43
of course non-equilibrium systems
51:46
have this capacity that they're able to
51:49
produce
51:50
very complex uh structures so if you
51:52
think you're just in space i first think
51:54
for example
51:54
about biological system think about a
51:57
cell and all of this stuff that is
51:58
highly organized in the cell
52:01
yeah so in this second part of this
52:04
lecture we will now want to understand
52:06
if we not only have transitions between
52:08
homogeneous states
52:09
but can we also have transitions between
52:11
homogeneous states
52:13
yeah so between states that have no spatial
52:15
structure where all for example
52:17
all arrows or all spins point in the
52:20
same direction
52:21
and states where we actually have um
52:24
a spatial pattern or a spatial structure
52:28
and a nice example so one of my favorite
52:31
examples
52:32
is actually you can see here on the
52:33
surface of jupiter
52:35
and you can see
52:38
now a satellite image of jupiter here
52:41
and what you see is that you have here
52:45
these stripes
52:47
on the surface of jupiter now you have
52:49
stripes
52:50
of different color of different kinds
52:53
and what's actually happening here is
52:55
that you have
52:56
a balance between uh convective
53:00
processes so
53:02
so gas that is that comes from a jupiter
53:05
from the
53:06
core of jupiter and that rises to the
53:07
surface and then goes back
53:10
and you have shear flow also where these
53:13
actually if you look at jupiter as a
53:14
movie
53:15
after that you will see that some of
53:18
these stripes travel in the left
53:19
direction
53:20
and others travel in the right direction
53:22
it's very it's very very cool actually
53:24
and the reason for this is that it is a
53:27
non-equilibrium system
53:29
and once that one that fits very well
53:32
into our definition that we had in the
53:34
first lecture
53:35
namely the system is coupled to
53:37
different baths
53:38
and so this jupiter is hot inside
53:43
and cold outside so on the outside and
53:46
we have space
53:47
space and that's very cold and inside
53:49
jupiter is very hot
53:50
yeah and if you do that you have
53:52
something hot and something cold
53:55
now you know that maybe from your
53:56
room then you get convective flow so the
53:59
air goes up
54:00
cools down goes up cools down
54:04
gets heated up cools down and so on you
54:07
get these convective flows
54:08
and that generates these patterns on
54:12
jupiter
54:13
and the origin of these patterns of this
54:15
convective flow
54:16
is that you have these incompatible baths
54:19
the cold bath
54:20
or the cold boundary or the hot boundary
54:23
at the bottom
54:24
and the code boundary at the top and
54:26
that gives rise to
54:28
spatial and dynamical structures that
54:30
look very interesting
slide 12
54:35
so now we go back to our little
54:38
uh a little general functional
54:41
definition of spatial
54:43
uh langevin systems again we look uh
54:46
we ignore the noise again
54:48
and again also if you if you're not
54:50
familiar if you're not very happy with
54:52
these functional
54:54
derivatives uh i always write down the
54:56
specific
54:57
equations that we're actually studying
54:59
at the following but this was the
55:00
general framework that we studied and
55:03
that we introduced
55:04
that incorporates both the conservative
55:06
and the non-conservative models the
55:08
model a
55:08
and b and we suppose that there's some
55:11
parameter
55:13
r here
55:16
that describes how our system goes out
55:20
of equilibrium
55:21
also that typically describes
55:24
a transition a control parameter that
55:27
was previously bifurcation
55:29
but that now describes a state where we
55:31
go from
55:32
a spatial homogeneous spatially
55:34
homogeneous
55:35
solution spatially homogeneous system to
55:38
a system
55:39
that is spatially structured now and
55:42
that's also what's here on the right
55:43
hand side
55:44
yeah and you have this parameter r and
55:47
if you increase
55:49
this parameter r and you ask
55:52
whether or not you have a spatial
55:55
pattern
55:56
then you want to understand this
55:58
transition between
56:00
the state where you don't have any
56:02
pattern the homogeneous state the
56:04
boring state
56:05
and the state where your system is
56:07
structured and it has a characteristic
56:10
wavelength
56:11
and so on and this parameter we call
56:14
r again and
56:18
an example of such a system now as you
56:21
can see here so if we
56:22
plug in some values for this function
56:25
here
56:25
we get a partial differential
56:27
equations where we have a time
56:29
derivative
56:30
here again on the on the left hand side
56:33
and we have some non-linear terms
56:35
here on the right hand side but we also
56:39
have and this
56:40
actually looks like something that we've
56:41
seen i probably was the supercritical
56:45
bifurcation but we also have
56:48
spatial derivatives of any order so here
56:51
we have
56:52
the second spatial derivative like
56:54
diffusion
56:55
term and we have a fourth order
56:58
spatial derivatives of the fourth
57:00
spatial derivative
57:02
with respect to space now so this is an
57:05
example of the kind of systems
57:08
that describe spatially extended
57:11
systems if we neglect noise
57:18
so how do we now study this kind of
57:20
systems
57:21
yeah so how do we study that the idea
57:24
is that like in many
57:28
cases in physics that we look very
57:31
closely
57:32
at these points here we look very
57:35
closely at the point
57:37
when we see a pattern emerge for the
57:40
very first time
57:42
now we go to the threshold value to this
57:44
bifurcation point
57:46
and the idea is that we
57:49
linearize around that suppose now you
57:52
have a system yeah so
57:54
think back about uh our original our
57:57
lecture from last time there we had
57:59
rotational
58:00
invariance now so we have rotation and
58:03
variance so we're pointing in different
58:05
directions
58:06
and then we ask how can we break
58:08
rotational
58:09
variance how can we make the system
58:13
globally point into one direction
58:16
and now we ask a similar question so we
58:18
start with a system that is
58:21
translationally invariant so that it's
58:24
homogeneous in space so we move it
58:26
around
slide 13
58:27
now from here to there and it doesn't
58:29
change and that means it's a homogeneous
58:32
in space now there's no structure in it
58:36
now how can we now break translational
58:39
invariance that's a similar question to
58:42
what we had about the
58:44
rotational invariance so how
58:47
and under which condition is a
58:49
translational invariant is broken
58:50
and the idea is that we start
58:54
with a homogeneous solution yeah
58:58
let's go back here we start with a
59:00
solution
59:01
where we have no pattern i like this
59:04
branch here
59:05
and the y-axis is something that
59:07
quantifies a pattern
59:09
now so we have here we have this
59:10
homogeneous state
59:12
and then we look at small perturbations
59:14
around that and if we say
59:16
we hope that if we understand small
59:19
perturbations around this homogeneous
59:21
state
59:22
then we can actually learn something
59:24
about the real
59:25
macroscopic states that evolve
59:29
and that works
59:33
this idea works if we have something
59:36
like here
59:37
you know if we have something like here
59:39
this picture
59:40
where a pattern continuously emerge
59:43
emerges so we exchange some control
59:46
parameter
59:48
and then if we change this for control
59:50
parameter we first get a very weak
59:52
pattern
59:53
we get a stronger pattern and even
59:55
stronger pattern and so on
59:57
so this this bifurcation of how we get a
60:00
pattern is continuous
60:01
and one example here is the
60:03
supercritical
60:05
pitchfork bifurcation that is depicted
60:07
here
60:08
or that is resembled in a homogeneous
60:11
system by this kind of bifurcation
60:13
so how does this work also what we do is
60:16
we say
60:17
that our state that has a spatial
60:20
dependence and a time dependence
60:23
here is given by some homogeneous
60:25
state now we say the system is stable
60:28
in some boring homogeneous state
60:32
and then we have a little perturbation
60:34
around it
60:36
and now we ask whether this perturbation
60:38
will grow
60:39
or not and we're not asking just about
60:43
any perturbation we make a specific
60:47
ansatz for these perturbations you know
60:51
we make an ansatz
60:55
oh sorry wrong color
61:00
an ansatz
61:03
for the growth
61:08
of periodic perturbations
61:15
let me see if i have that oh i don't
61:18
have the ansatz already here
61:19
okay so okay great so here we see
61:22
i don't have to write that down so we
61:25
make an ansatz for
61:26
periodic perturbations yeah and this
61:29
ansatz
61:31
looks as follows that we say that our
61:33
little perturbation here
61:35
that we write to linear order because our
61:38
little perturbation has two components
61:41
one component describes
61:45
the time evolution of our perturbation
61:49
now and depending that has some rate
61:51
here some pre-factor sigma q
61:53
and whether the sigma q is positive or
61:56
negative
61:57
tells us whether this perturbation will
61:59
grow or shrink
62:02
and then we ask here then we have here
62:04
this
62:05
imaginary part now this can also be an
62:07
imaginary
62:08
this is the complex part
62:11
where we have essentially a periodic
62:14
pattern
62:15
now that's the complex representation of
62:17
a periodic
62:18
pattern and here we have a pattern that
62:21
has
62:22
a wave vector q
62:26
and now we ask if we make this ansatz
62:28
for some
62:29
values of q
62:32
now for some values of q we have this
62:34
periodic perturbation
62:36
around this homogeneous state does it
62:38
grow
62:39
or does it not grow and we ask this
62:41
question
62:42
for every value of this wavelength
62:46
with which we perturb the homogeneous
62:48
states
slide 14
62:51
yeah and then several things can happen
62:54
that's another
62:55
transition uh now several uh things can
62:58
happen
62:58
so if the real part of the sigma that is
63:02
a function of q
63:03
in the end is negative
63:06
yeah then this homogeneous state is
63:10
stable
63:10
we say it's linearly stable and
63:13
and because this homogeneous state is
63:15
stable we don't expect to see any
63:17
spatial structure
63:19
to emerge now
63:24
if this real part of
63:27
sigma q is positive yeah
63:31
then this term here grows and grows and
63:33
grows
63:35
now then if for some value of q this is
63:38
positive
63:39
then we get a pattern because our
63:42
periodic perturbation
63:43
grows and constantly becomes bigger and
63:45
bigger
63:47
now if you look at this here we can have
63:49
this suppose we get this
63:51
sigma of q we get the rate of growth for
63:54
each vector for each wave vector q
63:56
here then we can plot this as a function
64:00
of our control parameter
64:03
and what you sometimes see is that this
64:06
function
64:07
has some function and it's always
64:09
negative
64:10
here and then at some value of r
64:14
of this control parameter we start
64:17
intersecting
64:18
with this zero point
64:22
and one wave vector
64:25
begins growing while the others are
64:27
still suppressed
64:28
and then if you increase r further
64:32
then uh you have a broader
64:35
number of a broader range of wave
64:38
vectors
64:39
that that start growing
64:42
and this wave vector qc of this
64:45
wavelength the corresponding wavelength
64:49
of our perturbation that for the first
64:51
time
64:53
becomes positive yeah in this
64:55
bifurcation
64:56
when we start seeing a pattern this we
64:59
say
65:00
gives us the wavelength of the final
65:02
pattern that gives us the length
65:04
of the final pattern and of course there
65:08
for this to work so we need to be very
65:10
optimistic when we
65:12
linearize a lot we linearize you know we
65:15
say okay so this is something like this
65:17
and then we make this ansatz
65:22
we make an ansatz and then
65:25
we say okay so this ansatz although
65:27
it's very small it describes whatever is
65:29
happening
65:30
on very large scales on on
65:33
even if we waited for a very long time
65:36
no
65:36
and this works very often but a
65:38
situation where it doesn't work
65:40
as you can see from here is where this
65:43
bifurcation is actually not continuous
65:45
but discrete
65:46
now for example like in super critical
65:49
in subcritical
65:50
bifurcations where you suddenly jump to
65:52
a pattern forming state
65:53
then this linear stability or
65:55
instability analysis
65:56
does not work
66:00
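As a concrete instance of this linear stability analysis, here is a sketch assuming the common 1-D Swift–Hohenberg form ∂t φ = rφ − (1+∂x²)²φ − φ³, which comes up later in the lecture (the exact coefficients on the slide may differ). Plugging the ansatz φ ∝ exp(σt + iqx) into the linearized equation gives the dispersion relation σ(q) = r − (1−q²)²:

```python
import numpy as np

def sigma(q, r):
    # linear growth rate of the ansatz phi ~ exp(sigma*t + i*q*x)
    # for d_t phi = r*phi - (1 + d_x^2)^2 phi - phi^3, linearized about phi = 0
    return r - (1 - q**2)**2

q = np.linspace(0, 2, 2001)
for r in (-0.5, 0.0, 0.5):
    growth = sigma(q, r)
    qc = q[np.argmax(growth)]          # fastest-growing wave vector
    print(r, round(qc, 3), round(growth.max(), 3))
# the maximum always sits at q_c = 1; it first touches sigma = 0 at the
# threshold r = 0, so the pattern emerges with this finite wavelength:
# a type-I instability
```

For r < 0 every mode decays (the homogeneous state is linearly stable); at r = 0 the band of growing wave vectors opens up around q_c, which sets the wavelength of the emerging pattern.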
so yeah yes
66:03
is it in the chat at all okay
66:09
let's see how we can see the chat here
66:18
ah no you know for some reason
66:22
for some reason i can't see the chat can
66:25
you tell me the questions
66:34
awesome so what kind of perturbation you
66:36
also said you hope
66:37
i i saw i hope so i have noise cancelling
66:40
headphones so i hope i have uh
66:44
the question correctly so there's a
66:46
question of what kind of perturbation do
66:47
you put into this
66:48
state so that so you hope that it
66:52
doesn't matter
66:53
now but the simplest ansatz you can
66:55
make
66:57
is just what i've shown here yeah could
67:00
have shown you
67:01
of course you can make different
67:02
perturbations now that are more
67:04
complicated but then the mathematics
67:06
gets too complicated and of course
67:08
what's happened here is i'll show you
67:10
now a
67:10
full calculation of this i'll show you
67:12
an example of course what happens here
67:14
is
67:15
what you could do is you just go to what
67:18
you're doing here is to go to fourier
67:19
space
67:20
yeah this perturbation that we wrote down
67:22
and down here is something like the
67:23
fourier transform
67:25
of your perturbation yeah and then you
67:28
say that
67:28
one wave vector is the important one so
67:31
that these wave vectors don't really mix
67:34
so that's that's the idea behind that
67:36
now but the
67:37
idea is in linear stability and
67:38
stability analysis and that's why
67:40
they're
67:41
you always have to check it with other
67:43
methods uh
67:44
is that uh of course the kind of
67:48
perturbation if the kind of
67:49
perturbations that you make
67:51
here would be important for the end
67:53
result then this whole thing wouldn't
67:55
work
67:56
and it only works of course because you
67:58
are allowed to linearize
68:00
and uh because you assume that these
68:02
different
68:03
q values don't interact with each other
68:07
in some some some complex way
68:10
i don't know if this was the question is
68:12
basically you put in some
68:13
some very weak uh periodic perturbation
68:18
you know so you can have this ansatz
68:21
which is essentially like a sine or
68:22
cosine
68:24
and see if this ansatz grows
68:27
or shrinks and that then tells you
68:31
how your what in the linear regime the
68:34
linear approximation
68:36
uh how your system reacts to
68:38
perturbations
68:40
and then you assume that if you wait
68:43
long enough you look macroscopically at
68:45
your pattern
68:46
like a jupiter that the wavelength that
68:49
grows strongest
68:52
once you go through this bifurcation
68:53
here this wavelength that grows
68:55
strongest is the one that will actually
68:58
then dominate
68:59
also in the long term
69:02
yeah so that's what you think it works
69:04
it works very well yeah so
69:06
but only under constraints under certain
69:08
conditions
69:11
um before i show you a specific example
69:14
there of course is there's a whole
69:16
classification
69:17
of these instabilities of how you can
69:20
generate a pattern
69:21
and that also depends it all depends on
69:25
how our sigma of q this function sigma
69:28
of q
69:31
looks like as you increase this r
69:34
parameter
69:35
that drives us from the homogeneous
69:37
state to a pattern state
69:39
now for example if you have there's a
69:41
type one instability that i just showed
69:43
you
69:44
and uh this is so in this type 1
69:47
instability
69:48
you have this parabola like shaped where
69:51
you have a maximum
69:53
at a finite wavelength or wave factor to
69:56
finite wave vector
69:57
and there's one specific wave vector
70:01
that will start growing uh
70:06
in a very well defined way now so so
70:08
here you have one specific
70:10
finite wave vector that will
70:14
dominate this process and that's called
70:16
type 1 and stability
70:18
there's a type 2 instability as well and
70:20
that's a little bit complicated so let's
70:22
let's
70:23
let's maybe first start with the type
70:24
well this is a type 3 instability
70:27
and there also you have a wave vector
70:30
that has the dominant
70:32
growth well that has the
70:35
the maximum of this function sigma of q
70:39
uh in this case is at q equals zero
70:44
also wave vector zero and wave vector zero
70:48
means that you have a very long
70:49
wavelength
70:50
and that means your whole system is
70:52
essentially homogeneous
70:53
so instabilities of types type three
70:59
gives you situations where actually we
71:01
go from homogeneous state
71:03
to another homogeneous state the reason
71:06
why these
71:06
instabilities are important is that you
71:09
can also have situations
71:11
where the sigma of q has uh
71:14
an imaginary part and if the sigma of q
71:18
has an
71:18
imaginary part then this first part here
71:22
also
71:22
describes an oscillation yeah then you
71:25
have an oscillation not only in space
71:27
but also in time so you can have also
71:31
instabilities where you actually go from
71:33
a homogeneous state
71:35
to an oscillating state that can have a
71:37
pattern now that can have
71:38
a wavelength or it can be homogeneous
71:40
but it can be oscillating
71:43
and that's one of the prime examples of
71:44
the type three instabilities
71:46
now the type II instability in the
71:48
middle is a little bit
71:49
subtle, because here
71:52
at wave vector q equals zero the
71:56
homogeneous state
71:57
is always
72:00
marginally unstable: it has a
72:03
sigma of q
72:05
of zero, so it doesn't really know
72:08
whether to grow or shrink
72:10
and then as you increase your control
72:12
parameter
72:13
another wavelength becomes important
72:16
and ultimately dominates the system
72:20
if your value of r is large enough. so
72:23
here you can have both, so it's not
72:24
really clear what you get:
72:25
you can have a uniform pattern or you
72:28
can have something that is just a very
72:30
large pattern with a very long
72:32
wavelength
72:34
so what you see here, these three
72:36
kinds of qualitative instabilities that
72:38
you get,
72:39
of how you can get from a homogeneous
72:41
state
72:42
to a patterned state that is described by
72:45
a wavelength or by a wave
72:47
vector,
72:48
resembles some kind of universality
72:52
and why did we get here? why
72:54
is there, why are there only these three
72:56
types? why
72:57
can we understand a large class of
73:00
dynamical systems
73:02
by just three classes? the reason is that
73:05
we restricted ourselves to situations
73:07
that look like this here, where we
73:10
linearize
73:12
where we can linearize around this
73:15
homogeneous state
73:16
and then suddenly, when we linearize, all
73:19
other complexities
73:21
become unimportant
73:25
so this type of instability that you get
73:27
tells you a lot about what kind of
73:29
pattern
73:30
you have
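As a side note, the three cases can be told apart numerically just from where the maximum of the real part of sigma(q) sits. Here is a rough sketch in Python; the function name, the return strings, and the test dispersion relations are illustrative, and the marginal type II case is left out for simplicity:

```python
import numpy as np

# Rough numerical classifier for the instability types discussed here, based
# only on where the maximum of Re sigma(q) sits. Names and thresholds are
# illustrative; the marginal type II case is left out for simplicity.
def classify(q, sigma):
    i = np.argmax(sigma.real)
    if sigma.real[i] <= 0:
        return "stable (no mode grows)"
    if np.isclose(q[i], 0.0):
        return "type III: fastest growth at q = 0, homogeneous instability"
    return "type I: fastest growth at a finite q_c, periodic pattern"

q = np.linspace(0, 2, 201)
print(classify(q, 0.1 - (q**2 - 1)**2))   # Swift-Hohenberg-like: type I
print(classify(q, 0.1 - q**2))            # long-wavelength: type III
```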
slide 15
73:33
now let's have a look at a simple
73:35
example
73:37
so i already mentioned briefly the
73:40
Swift-Hohenberg equation
73:42
that's this one here:
73:46
we have the second order derivatives and
73:48
the fourth order
73:50
derivatives, and then we have a linear
73:53
term
73:54
in phi and a nonlinear term,
73:59
sorry,
74:03
the nonlinear term in phi. and now
74:06
we make this ansatz
74:08
that phi is equal to the homogeneous
74:10
state
74:12
plus some perturbation around this
74:16
and what we now do
74:21
is we linearize: we say
74:24
that
74:25
we just look at very small perturbations
74:27
around this homogeneous state
74:29
74:37
and we make our ansatz that delta phi
74:41
is equal to some constant A, that we
74:44
don't know,
74:45
times e to the power of sigma q times t,
74:49
times e to the i q x
74:53
and now we substitute this ansatz
75:00
into the Swift-Hohenberg equation
75:09
and what we get is now a relation
75:12
between sigma q and q
75:16
so what we get is what is called a
75:19
dispersion relation: we get a relation
75:22
sigma of q equals
75:25
r minus (q squared minus 1)
75:28
squared. that's our dispersion
75:32
relation
75:33
and what you have in this dispersion
75:36
relation here
75:38
you can see on the left hand side:
75:40
that's what you get if you plot it
75:42
so for small values of r you have
75:45
this blue shape
75:46
and as you increase r,
75:50
this function moves up
75:53
the constant r just moves it up
75:56
and then at some point you pierce
75:57
through this point
76:00
at a certain value of qc
76:04
so
76:07
at this value of qc the
76:11
dominant wavelength
76:14
or wave vector
76:22
is just equal to 1.
76:26
now we ask: what is the growth rate
76:31
at the maximum? so the growth rate
76:39
at the maximum,
76:43
the real part of sigma at qc
76:47
well, this is just sigma,
76:50
sorry,
76:50
sigma of qc:
76:54
just plug that in and we get r
77:00
so this maximum moves linearly up
77:03
and of course you could have already
77:04
guessed that just from the
77:05
shape of this here
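The dispersion relation above can be checked numerically; a quick sketch in Python, where the value of r is an illustrative choice:

```python
import numpy as np

# Dispersion relation of the Swift-Hohenberg equation derived above:
#   sigma(q) = r - (q^2 - 1)^2
# A quick numerical check that the maximum sits at q_c = 1 with growth rate r.
def sigma(q, r):
    return r - (q**2 - 1)**2

r = 0.1                                  # illustrative value, r > 0
q = np.linspace(0, 2, 2001)
growth = sigma(q, r)

q_c = q[np.argmax(growth)]               # fastest-growing wave vector
print(round(q_c, 2))                     # -> 1.0
print(round(growth.max(), 2))            # -> 0.1, i.e. sigma(q_c) = r
```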
77:09
so what this means is that we get
77:14
pattern formation
77:19
for r
77:23
larger than zero. so for r larger than
77:27
zero
77:27
we have a dominant wavelength, we have a
77:29
perturbation
77:31
and this perturbation, in this linear
77:34
approximation,
77:35
grows if r
77:39
is larger than zero. then we have a
77:42
periodic perturbation, and one of
77:45
some wavelength:
77:46
we put in different kinds of
77:48
periodic perturbations with
77:50
different wavelengths, like short
77:51
wavelengths,
77:52
long wavelengths, and then we see
77:55
which
77:56
of these perturbations survives and
77:58
which has the fastest growth rate
78:00
and that's what we say gives us the
78:02
pattern on the long
78:03
time and on the large scale. so here
slide 16
78:07
is a computer simulation of this
78:09
equation
78:11
and this is the kind of pattern
78:14
that you get
78:15
to see in these equations and
78:18
of course i didn't tell you anything
78:21
about
78:22
how this pattern looks. the
78:25
only thing
78:26
that this linear stability analysis
78:27
gives you
78:29
is the wavelength. so
78:32
you get here
78:33
the typical length scale
78:36
of such a pattern, and you get the
78:39
conditions
78:40
under which such a pattern can emerge
78:43
and in our case of the Swift-Hohenberg
78:45
equation we get these kinds of patterns
78:48
once r is larger than zero
78:51
and of course real systems are more
78:52
complicated than this linear analysis;
78:54
for example you have
78:56
to make sure that you understand
78:58
what's going on at the boundaries,
79:01
so these boundaries can be very
79:03
very
79:03
important in selecting between
79:06
different kinds of patterns
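The simulation code behind the slide is not shown in the lecture, but a minimal 1-D sketch of such a simulation could look like this. The grid size, the value of r, the time step, and the semi-implicit spectral scheme are my illustrative choices, not the lecturer's setup:

```python
import numpy as np

# Minimal 1-D integration of the Swift-Hohenberg equation
#   d_t phi = r*phi - (1 + d_x^2)^2 phi - phi^3
# with a semi-implicit spectral scheme. Grid size, r, dt and the scheme
# are illustrative choices, not the setup used for the slide.
N, L, r, dt, steps = 256, 32 * np.pi, 0.3, 0.1, 2000
q = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # wave vectors of the grid
lin = r - (q**2 - 1)**2                           # linear growth rate sigma(q)

rng = np.random.default_rng(0)
phi = 1e-3 * rng.standard_normal(N)               # small noise around phi = 0

for _ in range(steps):
    # linear part treated implicitly, cubic nonlinearity explicitly
    phi_hat = (np.fft.fft(phi) + dt * np.fft.fft(-phi**3)) / (1 - dt * lin)
    phi = np.real(np.fft.ifft(phi_hat))

# the emerging stripes select a wave vector close to q_c = 1
spectrum = np.abs(np.fft.fft(phi))[1:N // 2]
dominant_q = q[1 + np.argmax(spectrum)]
print(round(dominant_q, 2))                       # close to 1
```

The semi-implicit split is a common choice here because the stiff fourth-derivative term is handled exactly in Fourier space while the cubic term stays cheap to evaluate.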
79:11
okay, so to conclude, let me see, okay
79:14
to conclude let me just have a look at
79:16
the time
slide 17
79:20
oh okay, so we've been talking quite a
79:22
while
79:23
okay so to conclude
79:26
79:26
let me give you another example here
79:28
without going into mathematical details
79:30
but it's a very important example, and
79:32
this example is called a reaction
79:34
diffusion equation
79:36
and it's called the reaction diffusion
79:37
equation because it
79:39
describes systems that consist
79:42
of reactions and diffusion. so for
79:46
example
79:46
here the time evolution of our field
79:49
phi of x, t is described
79:52
by some local function, some local
79:55
reactions,
79:56
for example susceptible to
79:59
infected,
79:59
so some local reactions,
80:03
plus a diffusion term, so
80:05
random motion
80:07
and so if i told you that the
80:09
susceptible
80:10
the susceptible and infected people
80:13
were running around randomly in space
80:16
then
80:17
this dynamics would be described by a
80:20
reaction
80:21
diffusion equation. so these are the two
80:23
components of a reaction diffusion
80:24
equation
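To make this concrete, here is a minimal sketch of such a reaction diffusion system in Python, using the susceptible/infected example just mentioned. The rates beta and gamma, the diffusion constant D, and the grid are illustrative choices, not values from the lecture:

```python
import numpy as np

# Minimal sketch of a reaction diffusion system using the susceptible/infected
# example from the lecture: local reactions (infection at rate beta, recovery
# at rate gamma) plus diffusion. All parameter values here are illustrative.
N, dx, dt, steps = 100, 1.0, 0.05, 600
beta, gamma, D = 0.8, 0.2, 1.0

S = np.ones(N)            # susceptible density
I = np.zeros(N)
I[N // 2] = 0.1           # seed a small local infection

def laplacian(f):
    # periodic 1-D Laplacian by finite differences
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    reaction = beta * S * I                      # local infections
    S = S + dt * (-reaction + D * laplacian(S))
    I = I + dt * (reaction - gamma * I + D * laplacian(I))

# the infection grows and spreads out from the seed as a travelling front
print(I.max() > 0.01, np.count_nonzero(I > 1e-3) > 10)   # -> True True
```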
80:26
and these equations have a famous
80:29
result that is named after Alan Turing
80:33
and what he showed is that in a
80:34
reaction diffusion system
80:36
you can get patterns. at that
80:38
point people
80:40
did not believe that when you have
80:42
diffusion
80:43
diffusion, something that
80:45
smooths everything out,
80:47
you can get a pattern. and Alan
80:51
Turing
80:52
studied the conditions under which you
80:54
can get patterns
80:55
in such reaction diffusion systems, and
80:58
what he basically said is that you need
80:59
at least two components, that would be
81:01
two chemicals
81:03
and then he wrote down equations of this
81:07
form
81:09
and then he did exactly what we did now
81:12
for this general equation, what we did in
81:13
the previous
81:14
minutes, namely he conducted a linear
81:18
instability analysis. so you linearize
81:21
these equations, and if you linearize a
81:23
general equation
81:24
you get here derivatives, some
81:27
jacobians
81:28
of these functions, and you get
81:31
conditions that relate
81:33
the jacobians, so the linear behavior
81:36
of these functions,
81:37
with the diffusion constants. and
81:39
what he then said
81:41
is, okay, if you want to have a pattern in
81:44
a reaction diffusion
81:45
system with two components, then one
81:48
species needs to be an activator,
81:51
so it needs to be positively regulating
81:53
itself,
81:54
and the other species needs to be an
81:57
inhibitor, so it's negatively regulating
81:59
itself
82:00
and also the activator
82:03
and the second condition is that this
82:06
activator diffuses
82:08
very slowly and the inhibitor
82:11
diffuses fast
82:15
so how can you get a pattern with that? i
82:17
won't go through the calculation here
82:19
but the way you get a pattern here is
82:21
that you have a homogeneous
82:24
system and you have a little
82:26
perturbation
82:27
at some wavelength, like we did in the
82:28
mathematical analysis
82:30
then this activator activates itself
82:34
so it will grow, but
82:37
this activation will not smear
82:39
out, because the diffusion of this
82:41
activator
82:42
is low, while the inhibitor
82:46
also gets activated but it diffuses away
82:50
so locally the activator can build up
82:53
a concentration peak while the
82:55
inhibitor
82:57
spreads out, and that's how you get a
82:59
pattern
83:00
in such a Turing system. and the
83:04
applicability of such Turing systems is
83:06
of course
83:07
limited by these conditions here
83:10
if you have two components
83:12
you need to have
83:13
a
83:14
difference in the diffusion constants of
83:17
a factor of 10 or so, or 40,
83:20
to see these effects, and this is very
83:22
difficult
83:23
to achieve in biological systems
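Turing's two conditions can be illustrated with a small numerical check. In the sketch below, the Jacobian entries of the reactions and the diffusion constants are illustrative numbers chosen to satisfy his conditions (a self-activating activator, a self-damping inhibitor, and an inhibitor that diffuses much faster), not values from the lecture:

```python
import numpy as np

# Sketch of Turing's linear analysis for a two-species reaction diffusion
# system. J is the Jacobian of the reactions at the homogeneous steady state;
# all numbers below are illustrative.
def max_growth(q, J, Du, Dv):
    # growth rate of a perturbation ~ e^(iqx): largest real part among the
    # eigenvalues of J - diag(Du, Dv) * q^2
    M = J - np.diag([Du, Dv]) * q**2
    return np.max(np.linalg.eigvals(M).real)

J = np.array([[1.0, -1.0],    # activator: activates itself, damped by inhibitor
              [2.0, -1.5]])   # inhibitor: activated by activator, damps itself
Du, Dv = 0.05, 1.0            # inhibitor diffuses 20x faster

qs = np.linspace(0, 6, 601)
growth = [max_growth(qv, J, Du, Dv) for qv in qs]

print(max_growth(0.0, J, Du, Dv) < 0)    # True: homogeneous state is stable
print(max(growth) > 0)                   # True: a finite-q mode grows (Turing)
```

The point of the check: without diffusion (q = 0) the homogeneous state is stable, but the large ratio Dv/Du opens a band of finite wave vectors with positive growth rate, which is exactly the diffusion-driven instability described above.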
83:26
one example where this seems to be
83:29
implemented
83:30
is limb development, i think that's a
83:31
chicken here
83:33
where you see, i think that's at the wing
83:34
of a chicken,
83:36
how this evolves in
83:39
early development, or in the
83:41
development
83:42
of a chicken embryo, and you can
83:45
see
83:46
here that these red regions
83:49
are regions where certain genes are
83:51
expressed that are important for the
83:53
development of bones or something like
83:55
this
83:56
and you can see how here this pattern
84:00
this Turing pattern is established, but
84:03
again, like in the previous cases, you
84:05
have a specific wavelength
84:08
and that gives you
84:12
a specific size of your body parts
84:15
of these fingers and
84:19
so how does it work here, despite
84:23
this strong assumption on the
84:26
difference in diffusion constants?
84:28
well, this difference in diffusion
84:29
constants you only need if you have
84:31
really two components. if you have
84:33
four or ten components
84:35
then of course you can get an
84:37
instability
84:38
that gives rise to a pattern in a
84:41
Turing system
84:42
even for much weaker differences
84:46
in these diffusion constants, and in
84:47
biologically relevant contexts
84:51
okay, so with this i'd like to
84:53
finish. so next week we'll start
84:56
digging more into epidemics, or using
84:58
epidemics as an excuse
85:01
to do some non-equilibrium physics, and
85:04
i'll hang around a little bit in
85:06
case
85:07
there are any questions, otherwise see
85:09
you next week, bye
85:28
85:36
oh, there's a lot of things going on in
85:37
the chat
85:40