introduction
00:00
to our little uh lecture
00:03
so meanwhile uh most of my group
00:07
are in quarantine and i'm one of the
00:09
few people left
00:11
and um and ethan detang who used to join
00:15
me here in the lecture hall
00:17
uh has escaped to a safer place
00:20
and so now uh buddy tang is watching the
00:24
chat and
00:25
the participants list and will let you
00:26
in and
00:29
moderate the chat a little bit and um
00:34
so it's a good time actually to be
00:35
working on some epidemic models that we
00:37
start
00:38
that we started working on last time and
00:42
uh in particular last time
slide 1
00:46
we were looking at the simplest possible
00:49
epidemic model you might think about
00:52
so let me share the screen just to give
00:54
you a little bit of an
00:56
um
01:01
of reminder
01:10
okay
01:32
okay
01:39
okay here we go so we introduced this
01:43
little epidemic model where you had
01:46
two kinds of people: infected ones
01:50
and susceptible ones and the model
01:53
was very simple so these infected ones
01:55
could uh meet up
01:58
with a susceptible one
02:00
maybe they went to a
02:01
party or so or they just
02:03
used the same train yeah and then the
02:05
susceptible one uh
02:07
got infected and then you have the
02:11
second process
02:12
where if you are infected with a certain
02:15
rate, that is a probability
02:17
per unit of time you get rid of your
02:20
disease
02:21
and you recover and go back to
02:23
susceptible
02:25
you know so you're a recovered person
02:28
so and then we went one step further and
02:31
we
02:31
put this very simple model on a lattice
02:34
in a spatial context
02:36
the simplest spatial context you can
02:38
think about is just having
02:40
lattice in one dimension yeah
02:43
so suppose that this lattice looks a
02:45
little like a train
02:47
and uh so on this little lattice you
02:50
interact
02:51
basically with your nearest neighbors if
02:53
you if you infect somebody
02:55
you infect your nearest neighbor but
02:57
uh not
02:58
anybody else yeah and then
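As a concrete aside (not code from the lecture), this nearest-neighbor SIS dynamics on a 1D lattice, often called the contact process, can be sketched in a few lines of Python; the time step, rates, and names here are illustrative assumptions.

```python
import random

def sis_step(state, lam, dt=0.01):
    """One small time step of the 1D SIS / contact process.
    state[i] is 1 (infected) or 0 (susceptible); infected sites recover
    at rate 1 and infect each susceptible nearest neighbor at rate lam."""
    n = len(state)
    new = state[:]
    for i, s in enumerate(state):
        if s == 1:
            if random.random() < dt:              # recovery at rate 1
                new[i] = 0
            for j in ((i - 1) % n, (i + 1) % n):  # periodic nearest neighbors
                if state[j] == 0 and random.random() < lam * dt:
                    new[j] = 1
    return new

random.seed(0)
state = [1] * 100          # start with everybody infected
for _ in range(2000):      # simulate up to time t = 20
    state = sis_step(state, lam=4.0)
print(sum(state))          # for large lam the infection typically survives
```

Sweeping `lam` in such a simulation is one way to see the active phase, the absorbing phase, and the transition between them discussed next.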
slide 2
03:02
we saw that such a simple model gave
03:04
rise
03:05
uh to rich phenomenology depending on
03:08
the relation between the rate of
03:10
infection
03:11
and the rate of recovery which was set
03:14
to one
03:15
now if the infection rate was very large
03:18
then
03:18
we got a state where we had a large
03:22
fraction of individuals constantly
03:24
carrying the disease
03:26
and when the rate of infection was very
03:29
low
03:30
then we ended up in the
03:31
so-called absorbing state
03:33
where the disease went extinct
03:37
now in between that we found that there
03:39
is a value of this infection rate
03:41
where infection and recovery just
03:43
balance
03:44
and this state was characterized
03:47
in a very similar way to the ising
03:50
model
03:51
in equilibrium it was characterized by
03:54
scale invariance
03:55
that means that these correlation
03:57
lengths that we looked at
03:59
along the space axis
04:03
and along the time axis we had two
04:05
correlation lengths
04:06
they both diverged that means we had
04:09
self-similarity
04:10
in this critical point
slide 3
04:15
yeah and then we went on and we derived
04:17
the langevin equation
04:19
and from this langevin equation we
slide 4
04:21
then just wrote down
04:22
the martin-siggia-rose functional
04:26
integral and i'll show you that later
04:28
when we actually used it
04:30
when we'll actually use it for the
04:31
renormalization procedure
04:33
and once we had written down
04:36
this action, the martin-siggia-rose
04:38
functional integral
04:40
of course we can go to fourier space and
04:42
write down
04:43
this action also in fourier space
slide 5
04:47
now that's the formalities yeah
04:50
that looks pretty complicated now
04:55
we have a problem right so this looks
04:57
pretty complicated
04:59
and uh we have a situation
05:02
where we have divergences where our
05:06
correlation length is infinite
05:10
and we don't really know how to deal
05:12
with this infinite correlation length
slide 6
05:16
and this is exactly the situation where
05:19
we have developed in the last
05:21
70 years or so mathematical techniques
05:25
originally in
05:26
quantum field theory that allow us to
05:29
deal with these
05:30
divergences with these infinities
05:33
now we've made two observations here so
05:36
one is
05:37
scale invariance that i just
05:38
mentioned yeah scale invariance
05:40
at this critical point there's no
05:43
distinguished length scale we zoom
05:45
in and the picture we get is
05:48
statistically
05:49
still the same as the original picture
05:54
the second observation is
05:57
that so and as a result of the scale
05:59
invariance we saw that
06:00
in equilibrium all of these
06:03
thermodynamic quantities
06:05
like the free energy or
06:08
the magnetization in the ising model obey
06:10
power laws
06:12
as you approach the critical point so
06:14
they diverge with certain power laws
06:17
and the second observation is
06:20
that at this critical point
06:23
our empirical observation is for example
06:26
in the equilibrium but also in
06:27
non-equilibrium
06:29
that there are a lot of different
06:32
real world systems that are described by
06:36
the same
06:38
theory the same simple theory so for
06:42
example
06:43
you have the ising model you know the
06:44
ising model is super simple it's much
06:46
simpler than actually a real magnet
06:49
nevertheless the ising model can predict
06:52
exponents
06:54
near the critical point of real magnets
06:58
also of different materials very nicely
07:03
and the ising model even can
07:05
predict
07:07
critical phenomena in completely
07:09
unrelated systems
07:12
like phase transitions in water if you
07:14
put water under high pressure then you
07:15
have a phase where you have
07:17
like a coexistence of water and steam
07:20
and these exponents
07:22
you can predict with the ising model
07:24
and this power of such simple models to
07:28
predict a large
07:29
range of critical phenomena is called
07:33
universality and both of these
07:36
observations
07:37
are the scale invariance, the self-
07:39
similarity
07:40
and universality, and they can be systematically
07:43
studied with the renormalization group
07:46
the first thing you might actually think
07:47
about okay so why do we need actually
07:49
something complicated
07:51
now we just do a naive approach
07:54
uh similar to the one that we used
07:57
implicitly in these lectures about
07:59
pattern formation
08:00
and non-linear dynamics now what we
08:03
could do is okay we say okay so we just
08:06
instantaneously zoom out of the system
08:09
yeah go to
08:10
wave vector zero so infinite wavelength
08:13
you look only at the very large
08:15
properties
08:16
and just pretend that we can average
08:18
over everything we just look at average
08:20
quantities
08:21
that's what's called mean field
08:23
theory where you pretend
08:25
that your system the state of the system
08:28
at a certain
08:28
site is governed by
08:32
a field that arises as
08:35
basically an average over the entire
08:37
system
08:38
yeah so this mean field theory where you
08:41
instantaneously
08:42
go to the macroscopic scale
08:46
this doesn't work, in other words it fails to
08:49
predict these exponents that we get at a
08:51
critical point
08:52
and it also doesn't give any uh reason
08:55
why we should have universality
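To make this mean-field step concrete: averaging over all sites turns the spatial SIS model into a single ODE for the infected density, d(rho)/dt = lam*rho*(1-rho) - rho (with recovery rate 1). This is a sketch under that standard mean-field assumption, not the lecture's own notation. The stationary density is 0 for lam <= 1 and 1 - 1/lam above, so it grows linearly in (lam - 1): a mean-field exponent of 1, which does not match the exponents of the 1D lattice model.

```python
def mean_field_density(lam, t_max=200.0, dt=1e-3):
    """Euler-integrate the mean-field SIS equation
       d(rho)/dt = lam * rho * (1 - rho) - rho
    and return the long-time infected density."""
    rho = 0.5
    t = 0.0
    while t < t_max:
        rho += dt * (lam * rho * (1 - rho) - rho)
        t += dt
    return rho

for lam in (0.5, 1.5, 2.0):
    # below lam = 1 the density flows to 0 (absorbing state),
    # above it to the active value 1 - 1/lam
    print(lam, round(mean_field_density(lam), 3))
```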
08:59
now renormalization does something very
09:02
smart now it provides a systematic way
09:06
of connecting these microscopic degrees
09:09
of freedom that are described for
09:10
example by hamiltonian
09:13
to uh macroscopic descriptions
09:17
now it does so by taking into account
09:21
all the intermediary scales between
09:24
micro
09:24
the microscopic level and the
09:26
macroscopic level
09:28
and as you uh might guess from these
09:30
critical phenomena if you have scale
09:32
invariance and all
09:33
scales are equal and equally important
09:36
this approach where you
09:38
take into account all of these scales
09:41
one at a time
09:43
now going from microscopic to
09:44
macroscopic which
09:46
makes much more sense than arbitrarily
09:48
focusing only on the largest scale as
09:50
you do in mean field theory
09:54
now so renormalization group, the
09:56
renormalization
09:57
group allows us to go from the
09:59
microscopic
10:00
level now that is described by some
10:03
hamiltonian
10:04
by some symmetries all the way to the
10:07
macroscopic
10:08
level without ever forgetting what is
10:11
going on
10:12
in between now that's the power of the
10:14
renormalization group and in this lecture i
10:16
will show you
10:18
how this is actually implemented
10:21
now how we can actually do that
10:30
okay so there's a lot of
10:33
information i should have split this
10:34
slide
slide 7
10:35
um so
10:39
so so we are now going to follow this
10:41
program now going from microscopic
10:43
to macroscopic one scale
10:46
one length scale at a time now we're
10:49
starting with something microscopic that
10:51
is super complicated don't think about
10:53
the icing model
10:54
think about something that has 10 000
10:57
different couplings or so
11:00
now we take that model with 10 000
11:03
different parameters
11:05
on the microscopic scale so the real
11:07
physics
11:08
that's going on on the microscopic scale
11:10
that's complicated
11:12
yeah it's much more complicated than the
11:13
ising model and just as a real disease
11:16
is much more complicated than the simple
11:18
model that i showed you
11:21
and then we go to larger and larger
11:24
scales
11:26
and hope to end up with something that
11:28
is simpler
11:30
and less correlated than on the
11:32
microscopic scale
11:34
yeah so how do we do that so renormalization
11:37
consists of three
11:38
steps uh the most important step
11:42
now the actual uh what's actually
11:46
underlying renormalization
11:48
is a coarse-graining step now in this
11:51
coarse-graining step
11:53
you can think about uh that you
11:57
unsharpen an image suppose you take a
11:59
photo
12:00
and then you can focus you can make the
12:02
picture
12:04
uh sharp or less sharp yeah by turning
12:07
like the lens
12:08
and the ring on the lens if you still
12:10
use a real camera
12:11
yeah so you can make it sharp or blurry
12:13
or think about a microscope
12:15
where you have to turn some knobs to make
12:17
the image sharp or not sharp
12:19
and in the first step we
12:22
coarse-grain so we coarse-grain
12:26
and that means literally that we make
12:28
the image that we're looking at in the
12:30
system
12:31
unsharp so that's what we do
12:35
yeah so we
12:36
mathematically that means if you
12:39
coarse-grain if you make something
12:41
unsharp then that means that
12:43
you integrate out
12:46
fast fluctuations or short range
12:48
fluctuations
12:49
i'll show you on the next slide how this
12:51
looks like intuitively so first you get
12:53
rid of
12:53
all these uh fluctuations that happen on
12:56
very small length scales
12:59
and you have to perform an integral to
13:01
do that
13:02
and this integral of course is very
13:05
complicated
13:06
to calculate now the second step
13:10
we have a new field yeah
13:13
now let's say phi average
13:17
but because we have coarse-grained this
13:19
new field
13:21
our new image is blurry but
13:25
it is also not on the same length scale
13:27
as before
13:29
yeah so it's just blurry but what we now
13:32
have to do
13:33
is to rescale length
13:36
to make the structures that we have in
13:38
this new image
13:40
similar to the structures that we had in
13:42
the original image
13:43
now so that means we need to rescale
13:45
length scales
13:48
by a factor of b
13:52
and the other thing that happens if you
13:53
make things blurry
13:55
is that you lose contrast in the image
13:58
so the image looks a bit dull so we
14:02
also need to now to increase the
14:03
contrast again
14:05
and we do that by rescaling our
14:09
fields as well so these are the three
14:12
steps of renormalization as if we first
14:14
integrate out very short
14:18
fluctuations happening on the smallest
14:20
length scales
14:22
and the second step and the third step
14:25
make sure that once we have integrated
14:28
out
14:29
these uh these short length scales
14:32
that our new theory that we get is
14:34
actually comparable
14:36
to the previous one so we have to reset
14:39
the length scales and we have to reset
14:41
the fields the contrast of the field by
14:44
multiplying them
14:45
with appropriate numbers
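The three steps just described can be sketched on a toy 1D signal; this is a minimal illustration, with a made-up scaling exponent `zeta`, not the procedure actually used later in the lecture.

```python
import math

def coarse_grain(field, b=2):
    """Step 1: block-average b neighboring sites (the 'unsharpening')."""
    return [sum(field[i:i + b]) / b for i in range(0, len(field) - b + 1, b)]

def rg_step(field, b=2, zeta=0.5):
    """One RG step on a 1D field: coarse-grain, rescale lengths by b
    (the new list has len/b entries covering the same region), then
    renormalize amplitudes by b**zeta to restore the lost 'contrast'.
    zeta is an illustrative scaling dimension, not a derived value."""
    blurred = coarse_grain(field, b)
    return [b ** zeta * v for v in blurred]

# a field with fast wiggles on top of a slow mode
field = [math.sin(2 * math.pi * i / 64) + 0.3 * math.sin(2 * math.pi * i / 4)
         for i in range(64)]
once = rg_step(field)
print(len(field), len(once))   # 64 -> 32 sites: lengths rescaled by b = 2
```

Iterating `rg_step` progressively removes the fast wiggles while the slow mode survives, which is the intuition behind integrating out short-range fluctuations.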
slide 8
14:50
so how does that look like so we start
14:52
with a field phi
14:53
now that has short range fluctuations
14:56
you know fast fluctuations in space
14:58
now for example um
15:02
yeah here these wobbly things yeah these
15:04
are fast fluctuations
15:07
but it also has slow fluctuations that
15:09
would be
15:11
that would be here a slow fluctuation
15:15
now you have fluctuations on all
15:17
different length scales and now
15:19
we coarse-grain that means we integrate
15:21
out
15:22
these fast fluctuations and what we get
15:25
is a new field, this phi
15:29
average here, call it phi bar
15:33
that is smooth that doesn't have these
15:35
small
15:36
wiggles anymore but it is smooth here
15:41
now so we have coarse-grained the field
15:44
and now that we have coarse-grained the field
15:46
we reset the length
15:48
rescale the length scale
15:51
x prime to make fluctuations on the new
15:57
field comparable to typical fluctuations
16:00
on the
16:00
original field that's the second step
16:05
we then renormalize um
16:08
there's actually no three here because i
16:11
started
16:12
i know there is a three okay so uh
16:16
as a final step and we have to rescale
16:18
the fields now we have to
16:20
change the magnitude of the signal here
16:23
to make it comparable
16:24
to the original signal over here
16:28
now this sounds very intuitive
16:31
and very simple but of course in reality
16:34
it's quite difficult
answering a question
16:39
so let's see how this works uh
16:42
what's the result so can i have a
16:45
question here
16:50
um in the previous slide
16:54
when we renormalize uh phi
16:57
prime we're essentially doing it
17:01
because we want uh whatever the
17:04
magnitude or value of phi prime is to
17:07
equal
17:08
uh the value of phi the original field
17:11
yes
17:11
exactly so what we want to do is so so
17:14
we do this procedure not only once
17:16
but many times now the next step will do
17:18
that many times
17:20
and uh each time we do these three steps
17:24
we get a new hamiltonian or a new action
17:28
and this new action will be
17:31
different to the original action
17:35
it will be different for trivial reasons
17:38
namely because once we coarse-grain
17:41
think about this coarse-graining as an
17:43
average instead
17:44
now you have i'll show you later a
17:47
specific example suppose you average
17:49
uh in some area here and that's why
17:52
that's how you smooth
17:54
yeah when you average you know think
17:56
about the spin system you average
17:59
then you don't have plus minus plus one
18:01
or minus one
18:02
but the new average field will be plus
18:04
0.1
18:06
and minus 0.1 now just because you
18:08
averaged
18:09
over in the field yeah but you don't
18:11
want the new spins to live in this world
18:13
of plus
18:14
0.1 and minus 0.1 but you'd want them
18:18
also
18:18
live on the scale of plus minus 1 just
18:21
to make the
18:21
hamiltonians comparable now so this is a
18:24
trivial effect that you get by
18:24
coarse-graining that you don't want
18:28
to dominate your results and we need to
18:31
get rid of that
18:32
these trivial rescalings of the fields
18:35
and of the
18:36
of the um of the length scales just by
18:39
explicitly taking the step and saying
18:41
okay
18:42
now i have coarse-grained my field now i
18:45
have to reset the length scales
18:46
and i have to reset the amplitude of my
18:49
fields
18:50
by multiplying these these quantities
18:53
with appropriate numbers
18:56
i just want to have comparable things at
18:58
each step
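The averaging effect described in this answer is easy to check numerically; the block size and rescaling factor below are illustrative choices, not values from the lecture. For uncorrelated ±1 spins, averaging blocks of b spins shrinks the typical magnitude by 1/sqrt(b), and multiplying by sqrt(b) restores it:

```python
import random

random.seed(1)
spins = [random.choice((-1, 1)) for _ in range(4096)]

# step 1: block-average groups of b = 4 spins; the typical magnitude shrinks
b = 4
blocks = [sum(spins[i:i + b]) / b for i in range(0, len(spins), b)]
rms = (sum(v * v for v in blocks) / len(blocks)) ** 0.5
print(round(rms, 2))   # ~0.5 = 1/sqrt(4) for uncorrelated spins

# renormalization step: multiply by b**0.5 to restore the unit magnitude
rescaled = [b ** 0.5 * v for v in blocks]
rms2 = (sum(v * v for v in rescaled) / len(rescaled)) ** 0.5
print(round(rms2, 2))  # back near 1
```

With correlated spins the right rescaling power differs from 1/2, which is exactly why the exponent has to be determined rather than guessed.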
19:00
yes thank you so much uh but regarding
19:03
this particular point
19:04
you showed in the first slide that phi
19:07
prime
19:07
phi is divided by b uh in the
19:10
in the introductory slide on
19:12
renormalization i couldn't understand that
19:15
yes here yeah phi in the next one
19:19
phi bar by b or um
19:23
so which which one third point
19:26
third point yeah that's just a number
19:28
yeah we don't know this number yet
19:31
we don't know it yet we can get it by
19:33
dimensional analysis
19:34
uh very often um so we don't know this
19:37
number yet
19:38
is it the same b okay it's not
19:47
i think it's an s actually
19:47
so so it doesn't have to be b it depends
19:50
on the dimensions of your field
19:53
typically b to the power of something
19:55
also
19:56
that doesn't have to be b and it depends
19:58
on the on the dimensions of your fields
20:00
here
20:01
um okay just at this point we just say
20:03
okay so we have to do something with our
20:05
fields to make them comparable
20:08
now think about an average now you
20:10
take averages and if you're averaging
20:12
all the time
20:13
yeah then by the central limit
20:16
theorem the
20:17
average like in a disordered system will
20:20
get very very small
20:22
yes the variance of this average will
20:24
get very very small
20:26
and we don't want this effect to happen
20:28
because it destroys this basically
20:30
now
20:33
what we actually want to look at is
20:34
how does the theory in itself the
20:36
structure of the theory
20:38
change as we go through the scale and we
20:41
don't want to have these effects that
20:42
come
20:43
by uh by uh
20:46
just that we that we can't compare
20:49
um suppose you compare like uh uh
20:53
you compare um the velocity of a car
20:56
say you live in the uk or in the united
20:57
states and you compare the velocity of
20:59
the car in miles per hour
21:01
or in kilometers per hour now you have
21:04
to you cannot compare that
21:05
as pure numbers you have to do
21:07
something you have to rescale by 1.6
21:09
to make them comparable and here also we
21:12
go through these scales which are
21:13
like meters uh kilometers miles and
21:17
so on
21:18
and to make these things comparable all
21:20
the different lengths that we have to
21:21
rescale
21:22
them all the time yeah just so
21:24
that we're always talking about
21:26
um the same thing um
21:30
so in germany we say we uh you cannot
21:32
compare apples and oranges
21:34
different things you have to uh
21:37
if you compare apples and oranges then um
21:40
then then you're doing something wrong
21:43
in other words we want to compare apples
21:45
to different kinds of apples so to say
21:48
yes but we always want to talk about
21:50
apples and not about miles and
21:52
kilometers now so that's that's the idea
21:55
about this rescaling step
21:59
i'll show you later an example where
22:00
this rescaling step is already implicit
22:02
of course you can choose this
22:06
coarse-graining step in a way
22:07
that it doesn't change the magnitude of
22:10
the field
22:12
yeah so you can choose the
22:14
coarse-graining step in a way here this one
22:16
in a way that it doesn't change the
22:18
magnitude of this field and then you
22:19
don't have to do this renormalization
22:21
step
22:24
now but in principle you will have to
22:26
do that
22:28
okay so now we have these three points
22:31
now uh we coarse-grain
slide 9
22:35
rescale and renormalize and once we do
22:39
that
22:39
our action or, in equilibrium, our
22:41
hamiltonian
22:43
will become a new action
22:48
now s prime and say we
22:52
did this rescaling on a very small
22:54
length scale
22:55
dl now this s prime
23:00
is then given by some operator r
23:03
of s and if we do that repeatedly
23:08
you know so what we then get is a
23:11
renormalization group flow, an rg
23:13
flow and that's basically
23:17
the differential equation for the action
23:21
yeah dS over dl
23:26
is then something like R
23:29
of S, so we renormalize one step further,
23:33
minus S, the previous step
23:38
you know so we change we do these three
23:41
steps
23:41
just a little bit and we
23:44
coarse-grain
23:45
we integrate out a very small additional
23:48
scale
23:50
yeah and our action is then different on
23:53
the next scale
23:54
of course we assume that this is somehow
23:55
continuous and well behaved
23:58
and then we'll have a flow equation
24:01
of our action and of course in reality
24:04
this will not be a flow equation of our
24:06
action
24:06
but of the parameters that define our
24:09
action
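A minimal numerical sketch of such a flow equation for a single parameter, with a made-up beta function dg/dl = eps*g - g**2 (g and eps are purely illustrative, not couplings of the epidemic model):

```python
def rg_flow(g0, eps=0.5, l_max=50.0, dl=1e-3):
    """Euler-integrate a toy one-parameter RG flow dg/dl = eps*g - g**2.
    It has fixed points at g = 0 (unstable for eps > 0) and g = eps
    (stable); returns the coupling after flowing to scale l_max."""
    g = g0
    l = 0.0
    while l < l_max:
        g += dl * (eps * g - g * g)
        l += dl
    return g

# any positive starting coupling flows to the nontrivial fixed point g* = eps
print(round(rg_flow(0.01), 4), round(rg_flow(2.0), 4))
```

This is exactly the phase-portrait picture from the nonlinear dynamics lecture: the fixed points of the beta function organize where the flow carries any initial model.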
24:10
now suppose so how does it look like
24:18
now so here is in some space of all
24:21
possible actions
24:23
of some parameters p1
24:27
p2 p3
24:32
now and this action now think about the
24:34
uh
24:35
think about our non-linear dynamics or
24:37
dynamical systems lecture
24:39
what we can this is this is a
24:41
differential equation here
24:44
it looks complicated but this is a
24:46
differential equation it tells you
24:47
how the parameters of our action
24:51
change as we coarse-grain as we do this
24:54
renormalization procedure
24:57
now and this gives us a differential
24:59
equation
25:01
and if you have a differential equation
25:02
that will be highly non-linear of course
25:05
now we can do the same thing as we did
25:07
in this lecture or nonlinear system we
25:09
can
25:10
derive a phase portrait now in the space
25:12
of all possible actions
25:14
of all possible models where does this
25:17
renormalization
25:18
group flow carry us
25:22
so let's start with a very simple
25:27
line
25:31
that's the line in the space of all
25:33
possible models
25:34
now this is the line of models
25:37
that actually describe our physical
25:39
systems
25:40
now think about different combinations
25:42
of temperature
25:43
and magnetic fields in the icing model
25:48
now so this is this is where this model
25:50
lives in this space
25:52
now if we take the right parameter
25:55
combinations
25:58
we will be at some critical point
26:06
yeah so there's no flow yet now so this
26:09
is just
26:10
uh the range of different
26:13
models that we can have for example in
26:15
the ising model that correspond to some real
26:17
physical system
26:18
but of course there are many other
26:20
models in this space
26:22
that don't describe our physical system
26:24
now they don't describe a magnet but
26:26
something else or they are not
26:28
given by a simple hamiltonian with
26:29
nearest neighbors
26:30
interactions but by something that has
26:33
long-range interactions or something
26:35
very weird
26:36
now the space of all possible
26:37
models is very large
26:40
now we have this critical point and
26:43
other models in this space also have
26:46
critical points
26:47
now and these critical points live on a
26:51
manifold
26:58
now they live on a manifold you know
27:01
that is the critical manifold
27:09
you know so all the points on
27:11
this critical manifold are critical
27:13
points
27:14
of some actions
27:17
and our action our critical point of our
27:20
action
27:21
is also on this manifold but all
27:24
other points of this manifold are also
27:26
some critical points of some other
27:29
actions so
27:33
what happens now now
27:37
we are close to the critical point let's
27:39
say
27:40
we're here now very close to the
27:44
critical point
27:46
and now we really normalize
27:50
we go through we make this procedure of
27:53
renormalizing
27:54
the core straining going to larger and
27:56
larger scales and rescaling our fields
27:59
and lengths
28:00
and then our hamiltonian or our action will
28:03
change
28:04
so it will flow in this space
28:07
in some direction
28:11
so where does it flow to in the
28:13
non-dynamics lecture
28:16
we've seen that what determines such
28:18
dynamical systems
28:20
are fixed points and
28:24
what you typically have to assume in the
28:25
renormalization group theory is
28:28
that there is some fixed point
28:31
on this critical manifold yeah
28:36
a fixed point of the renormalization group
28:39
flow
28:40
on this critical manifold
28:44
and now what happens with our flow
28:50
of course we will go to this fixed point
28:55
and then we might go away again
28:59
now the fixed points
29:02
like in the dynamical systems lectures
29:04
determine the flow of our dynamical
29:07
system
29:09
now what is this fixed point here this
29:11
fixed point is not the critical point
29:14
it's the critical point of some other
29:16
model
29:18
but this critical point here
29:23
has a stability yeah just like in
29:25
nonlinear dynamics so this is non-linear dynamics
29:27
here
29:28
so it's very often exactly what we did
29:30
in this lectures before
29:31
so we asked what is the flow and now we
29:33
ask about the stability
29:35
of this fixed point now so this fixed
29:39
point
29:40
has stable directions, and unstable
29:44
directions where you perturb
29:46
and get pushed away
29:48
yeah typically or by definition
29:52
the directions on the critical manifolds
29:55
are stable now that's how this manifold
29:59
is actually defined
30:00
and there's a theory in nonlinear
30:02
dynamics
30:04
that tells you that and there are also
30:07
other directions
30:09
that are not stable now let's have them
30:12
in green
30:14
for example the way i drew this these
30:17
are the ones
30:19
in this direction
30:26
so this determines the stability of the
30:28
flow
30:29
of our system and if we have only
30:32
one fixed point this one fixed point
30:35
will tell us what happens to the flow of
30:37
our system just like in nonlinear
30:39
dynamics
30:40
lecture okay so
30:43
now we re-normalize
30:46
and because we didn't start exactly at
30:48
the critical point we stay in the
30:50
vicinity of this manifold here
30:52
and this fixed point of
30:55
the flow will do
30:56
something to us now it will push us away
30:59
or will attract us and now you can
31:04
do the same thing that you do in
31:05
nonlinear dynamics namely you
31:07
linearize
31:08
around this fixed point so we say that
31:11
our action
31:16
we do so-called linear stability analysis
31:21
you linearize around this fixed point so
31:24
you say
31:25
that our action is equal to the action
31:29
at this fixed point
31:32
plus a sum over different
31:35
directions i, that is
31:39
h i
31:45
times b to the power of lambda i
31:52
times
31:56
q i yeah so this here are
32:01
these q i are the eigendirections
32:08
so these are operators you can think
32:09
about this as operators so like eigen
32:12
direction of these operators and
32:16
the b is our coarse-graining scale
32:30
the h tells us
32:34
how far we are away from the fixed point
32:42
and the s star uh is just the action
32:48
that we have at this fixed point
32:51
you know so we it's the same thing for
32:53
all of this once we
32:55
have this renormalization group flow
32:58
we're in the subject of non-linear
32:59
dynamics and we use the tools
33:02
of nonlinear dynamics in renormalization group
33:05
theory the language is slightly
33:09
different
33:10
you know so that's that's why you have
33:12
this b to the lambdas and so on
33:14
here and you separate the h i from the b
33:17
to the lambda
33:18
that's just the framework, how you
33:20
write it in renormalization
33:23
theory for convenience reasons but what
33:26
we do here is
33:27
a simple linear stability analysis
33:30
of a non-linear system we
33:34
linearize around the fixed point
33:36
we see whether perturbations grow or
33:39
shrink
33:40
in different directions yeah and that
33:43
characterizes
33:44
then our nonlinear system and now we can
33:47
ask if we perturb around this fixed
33:50
point
33:51
in one direction i does it grow
33:54
this perturbation or does it shrink
33:57
lambda i
33:58
is larger than zero then this b to the lambda i
34:03
is larger than one, for b larger than
34:07
one
34:08
now so lambda i is larger than zero then
34:11
our perturbation will grow
34:13
yeah perturbation
34:20
growth and then we say this direction or
34:23
this operator q
34:24
i you can also think about q i as one of
34:27
the
34:28
terms in the action now think about one
34:31
of the terms in the action
34:32
or one of the terms in the hamiltonian
34:36
this direction qi
34:40
is then called relevant
34:45
why is this called relevant, when
34:46
do we call this relevant
34:48
you know if this perturbation grows we
34:51
are in this green direction here that
34:53
pushes us away from the critical point
34:56
so that means
34:56
that if we are an experimentalist
35:00
and if we want to tune our system
35:03
to get into the critical point
35:07
then
35:10
then we know that we have to tune
35:13
these relevant parameters qi
35:18
now these relevant parameters
35:19
uh are the
35:21
unstable directions of this fixed point
35:25
now and if we have relevant directions
35:28
we also have irrelevant directions
35:30
lambda is smaller than zero
35:32
and these directions are called
35:34
irrelevant
35:35
now perturbation
35:43
shrinks that means that
35:47
qi is irrelevant
35:53
that means the qi in this case is not a
35:56
parameter that drives us into this
35:58
critical point
36:01
and then we have the case that lambda i
36:03
is exactly equal to zero
36:05
then we don't really know what to do
36:08
then this q
36:09
i is
36:12
marginal and we cannot tell from this
36:15
linear stability analysis alone
36:17
from the linearization around this
36:19
fixed point we cannot tell alone
36:21
whether this perturbation will grow or
36:24
shrink and we have to use
36:25
other methods okay
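The classification just described depends only on the sign of the eigenvalue lambda_i; a trivial helper (my own naming, not from the lecture) makes that explicit:

```python
def classify(lmbda, tol=1e-12):
    """Classify an RG eigendirection by its eigenvalue lambda:
    a perturbation along it scales as b**lambda per RG step (b > 1)."""
    if lmbda > tol:
        return "relevant"      # grows: must be tuned to reach criticality
    if lmbda < -tol:
        return "irrelevant"    # shrinks: forgotten on large scales
    return "marginal"          # linear analysis alone cannot decide

# e.g. eigenvalues obtained by linearizing a flow around its fixed point
for lam in (1.6, -0.8, 0.0):
    print(lam, classify(lam))
```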
36:28
so what happened now so we
36:31
started our theory close to the critical
36:34
point
36:35
we did this renormalization group
36:37
procedure
36:39
coarse-graining rescaling renormalization
36:43
and then in this procedure
36:46
our action or our model will flow
36:49
through the space of all possible models
36:52
yeah
36:55
and then we ask where does it flow to
37:00
and we look at the non-linear dynamics
37:01
lecture and ask
37:03
so where does such a nonlinear system
37:05
drive us to
37:07
and our nonlinear dynamics lecture will
37:09
tell us look at the fixed points
37:12
right and in this renormalization flow you
37:14
also have fixed points you assume that
37:15
you have a fixed point
37:17
and this fixed point tells us about
37:21
what is going on on the macroscopic
37:24
scale now this fixed
37:26
point
37:26
determines the end result of
37:28
our renormalization group
37:30
procedure
37:33
so this fixed point is characterized
37:36
by stability
37:38
it has a finite number of relevant
37:40
directions
37:43
now it has a finite number of
37:44
parameters that are actually important
37:47
to change if you want to go to the
37:48
critical point
37:51
and because you only have a finite
37:53
number of directions that are relevant
37:56
you usually get away with models that
37:58
also have this finite number
38:01
of parameters instead of models like a
38:04
real magnet or so that has a very
38:06
complicated geometry and everything
38:08
instead of a model that has 15
38:10
000 parameters
38:12
now you get away with a finite number of
38:13
parameters that are given
38:15
by the relevant eigen directions
38:19
of this fixed point now so now what
38:22
happens if we have a different model so
38:24
now this is the
38:25
magnet one this is a
38:29
magnet of material one we can also look
38:31
at a magnet of another
38:33
material
38:34
now so this is magnet one
38:40
and now we have here
38:47
a magnet two
38:51
now each of them at the physical real
38:53
microscopic level is described by 15 000
38:56
parameters or whatever something very
38:57
complicated
38:59
and for this magnet two we can do the
39:02
same procedure
39:04
we renormalize
39:07
and we while we renormalize we will end
39:10
up at the vicinity of this fixed point
39:14
and in the vicinity of the fixed point
39:17
the behavior of the flow is determined
39:20
by a finite number
39:22
of parameters again
39:26
and both of these magnets here are
39:29
under renormalization now on the large
39:31
scale if we repeat these
39:33
procedures determined by the same fixed
39:37
point
39:37
by this same fixed point here
39:41
and because they're determined by the
39:42
same fixed point with the same stability
39:46
and with the same properties of how they
39:48
go
39:49
uh of how the flow behaves around this
39:51
fixed point
39:53
that's why these two magnets here are
39:55
described macroscopically by the same
39:57
theory
39:58
and that's then the reason why we have
40:01
universality
40:03
so in this way in this very general way
40:05
so we'll look of course at this in
40:07
more detail the renormalization group
40:10
theory gives us a justification for why
40:14
only a finite number of parameters
40:17
matter
40:17
or why a limited level of
40:21
description is sufficient to describe
40:23
large scale properties of a large
40:26
number of very different systems and the
40:29
reason is
40:30
at this critical point they're
40:33
described macroscopically by the same
40:36
fixed point of the renormalization
40:39
group flow
slide 10
40:42
now how does this look in detail
40:47
so the very first or the
40:50
most uh simple way of doing
40:53
renormalization
40:55
is to take what i said initially about
40:58
this
40:59
zooming out about this coarse graining
41:01
literally
41:02
and do the whole procedure in real space
41:06
now suppose you have here a lattice
41:09
system
41:11
and also suppose you have this lattice
41:12
system here and you have some
41:15
spins here and what you can do then to
41:19
coarse grain
41:20
is to create
41:27
boxes or blocks now of a certain size
41:31
and then to calculate this cross here
41:34
that is a representation of all of these
41:37
microscopic spins for each block
41:40
yeah so we have this uh spins here
41:44
like what are 16 in each block and you
41:48
now transform them to a single number
41:50
you can do that by averaging over them
41:52
or you can take
41:54
you can say that i take the spin
41:58
that is the majority of these other
42:01
spins here
42:02
if the majority goes up then my new spin
42:06
that describes the entire block
42:07
will also go up and this would be a way
42:11
to get rid of the renormalization
42:14
step if you say i take the majority
42:18
i take a majority rule so this new spin
42:20
here x
42:21
will take the value of the majority of
42:24
the original spins
42:27
then the new spin will also be plus or
42:29
minus one
42:31
that's my new spin yeah and i don't have to
42:33
renormalize them because it has the same
42:35
values as the original spins
42:38
if i take the new spin as the average
42:43
over all of these spins here then
42:47
i typically get a very small number the
42:49
average won't be one or minus one but it
42:51
will be
42:52
0.3 or 0.1 or 0.5 or so but it will not
42:57
be
42:57
one or minus one in most cases yeah so
43:00
in this case if i perform this procedure
43:02
i would have to renormalize
43:04
you know and i would have to
43:05
rescale my fields
43:08
to make them comparable to the original
43:10
step
43:12
yeah so now we have these blocks
43:17
and we define some new spin that
43:20
describes each of these blocks
43:23
and now we write down a new model
43:26
a new hamiltonian for these new spins
43:29
here
43:31
and what we hope is that the spins that
43:35
we have here
43:38
in this new system this coarse-grained
43:40
system are described by a theory
43:43
that is structurally very similar to the
43:45
original theory
43:48
and this hope is actually justified by
43:52
the observation of
43:55
scale invariance now so if your system
43:58
is scale invariant we can hope that
43:59
if we zoom out
44:01
and our system is statistically the same
44:04
then then our partition function
44:06
or our action will also be the same
44:09
just with some different parameters now
44:11
that is the hope that is underlying
44:13
renormalization group procedures with
44:16
these
44:16
block spins here what you typically get
44:19
is that you get
44:20
higher order terms all the time you know
44:23
so this hope is not
44:24
mathematically super precise uh but
44:28
that's what you have to assume in order
44:30
to achieve anything
44:33
okay yeah
44:36
okay so we coarse grain and then
44:40
we rescale the second step so that the
44:42
distance between these
44:44
spins here the new spins is the same
44:47
distance as we had
44:49
between the original spins now that's
44:52
what we have to do anyway
44:53
and that's how we divide length
44:55
scales
44:57
by the same factor that corresponds to
45:00
the size of our boxes
45:02
yeah and now the lengths are the same as
45:04
before
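(As a small aside from the editor — this is my own sketch, not something from the slides — the block-spin step just described can be written out directly for a square lattice of ±1 spins, using the majority rule with random tie-breaking so the new spins stay ±1 and no extra field rescaling is needed:)

```python
import numpy as np

def block_spin(spins, b=2, rng=None):
    """Coarse grain a 2D lattice of +/-1 spins with a majority rule.

    Each b x b block is replaced by one new spin carrying the sign of
    the block sum; ties are broken at random so the new spin is still
    +/-1 and no extra rescaling of the field values is needed.
    """
    rng = rng or np.random.default_rng()
    n = spins.shape[0] // b
    # sum the spins inside each b x b block
    blocks = spins[:n * b, :n * b].reshape(n, b, n, b).sum(axis=(1, 3))
    signs = np.sign(blocks)
    ties = signs == 0
    signs[ties] = rng.choice([-1, 1], size=int(ties.sum()))
    return signs.astype(int)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(16, 16))
coarse = block_spin(lattice, b=2, rng=rng)
print(coarse.shape)  # (8, 8): all lengths shrink by the block size b
```

The second renormalization-group step, rescaling, corresponds here to simply treating the new (8, 8) lattice with the same lattice spacing as the original one.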
slide 11
45:09
so let's do this procedure in a very
45:13
simple case which is the 1d
45:16
ising model
45:21
now so for the 1d ising model we have
45:24
written the partition function
45:26
it can be written in like a long form
45:30
in this way here that i sum up
45:33
all combinations of nearest neighbor
45:36
interactions
45:38
now that's just the hamiltonian here of
45:40
the ising model without an external
45:42
field
45:43
now and then i have a sum over
45:45
all possible values of the sigma i's
45:48
over all
45:51
the possible values
45:52
that the sigma can take that gives me my
45:54
partition function
45:57
now what i do now is the first
46:01
coarse graining step now this first
46:04
coarse graining step
46:06
means that of these spins here
46:09
we have all these black
46:11
spins and the white spins
46:13
now i'll integrate out the white spins
46:17
here in this picture i'll integrate
46:20
these ones out
46:23
now every second spin all even spins
46:26
and if i do that i will get a new theory
46:30
that is described by interactions
46:32
between these
46:33
odd spins and these
46:36
interactions are
46:37
depicted in these red with these red
46:41
lines
46:44
okay so and this is actually a very
46:47
simple calculation
46:49
so if we just do it
46:56
for sigma 2 now we just sum
46:59
out we take the terms that correspond to
47:02
sigma 2
47:04
and we get that that is equal to
47:08
now many terms and then we have the
47:11
contribution from sigma 2
47:14
e to the k sigma 1
47:17
plus sigma 3 now that's what's left
47:20
plus e to the minus k sigma 1
47:25
plus sigma 3
47:28
and then all the rest e to the k
47:32
sigma 3 sigma
47:36
4 plus k sigma 4 sigma
47:40
five and all the other spins
47:44
so what i just said is that sigma
47:46
two can have two values
47:48
minus one plus one yeah and i just
47:51
substituted it
47:53
explicitly now set sigma 2 to plus 1 and
47:56
minus 1
47:57
and perform the sum and that's what i
47:59
get then here for this first
48:01
term and now i can do that for all
48:12
even sigmas and then what i get is
48:16
exactly the same thing sum over
48:20
many terms e to the k
48:23
sigma 1 plus sigma 3 as before
48:27
plus e to the minus k sigma 1
48:31
plus sigma 3 now that was the term
48:34
where we set sigma
48:36
2 to -1 and then we get the same thing
48:42
e to the k for the next
48:45
term now for the next interaction sigma
48:48
3
48:49
plus sigma 5
48:54
e to the minus k sigma 3
48:58
plus sigma 5
49:01
and so on yeah
49:07
for all the other terms yeah
slide 12
49:13
so now the idea is
49:18
that because when we are at a critical
49:21
state
49:23
that we expect our partition function to
49:25
be self-similar when we coarse grain
49:28
the statistics of the system remains the
49:31
same as we zoom out
49:33
and that's why we also expect
49:36
the quantity that gives us the
49:40
statistics the partition function to be
49:42
self-similar as well
49:45
and what we now do is that we find
49:49
a new value of k prime
49:53
and some function f
49:56
of k that tells us that we
49:59
that these terms that we got
50:02
here sigma 1 sigma 3 we always got the
50:06
sum for each coupling
50:08
they should take the same form as the
50:11
original hamiltonian but with some
50:14
pre-factor here
50:17
and some new coupling here
50:20
but the form should be the same as
50:22
before
50:24
now as i said this is not usually
50:27
well justified but we have to do that in
50:29
order to do anything
50:31
and if you require that if you do some
50:33
algebra you will find that if you set
50:36
k prime to this one half times the
50:39
logarithm of the hyperbolic cosine of 2k
50:44
and the function f of k to this here
50:48
then it fulfills this condition yeah
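(Editor's side note — a quick numerical check of this decimation step. The recursion k' = ½ ln cosh 2k is from the lecture; the explicit prefactor f(k) = 2·sqrt(cosh 2k) is my own reconstruction, since the slide's formula isn't spelled out in the audio:)

```python
import itertools
import math

def decimated(k, s1, s3):
    """Sum over sigma_2 = +/-1 of exp(k * sigma_2 * (sigma_1 + sigma_3))."""
    return sum(math.exp(k * s2 * (s1 + s3)) for s2 in (-1, 1))

def k_prime(k):
    # the new coupling: k' = (1/2) ln cosh(2k)
    return 0.5 * math.log(math.cosh(2.0 * k))

def f(k):
    # prefactor so that decimated(...) = f(k) * exp(k' * s1 * s3);
    # this explicit form is my reconstruction, not read off the slide
    return 2.0 * math.sqrt(math.cosh(2.0 * k))

k = 0.7
for s1, s3 in itertools.product((-1, 1), repeat=2):
    assert abs(decimated(k, s1, s3) - f(k) * math.exp(k_prime(k) * s1 * s3)) < 1e-12
print("decimation identity holds for all sigma_1, sigma_3")
```

Since the spins only take the values ±1, checking the four combinations of sigma_1 and sigma_3 is all the algebra there is.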
50:54
so now we can plug this in now so if we
50:59
if we use that
51:01
then our partition
51:05
function
51:06
will read again we have many
51:10
terms f of k
51:14
e to the k prime
51:17
sigma 1 sigma 3
51:21
f of k e to the k
51:24
prime sigma 3 sigma 5
51:29
and so on
51:32
now and this is just the same we can
51:35
write this now
51:36
as a new partition function that has a
51:40
new prefactor
51:42
f of k now we pull this out this
51:45
prefactor
51:46
f of k and we have that to the power n over two
51:51
times a new partition function that
51:55
depends on the new system size
52:00
and a new coupling k prime
52:04
yeah so we have done this one
52:06
renormalization step
52:08
we get a new partition function that
52:10
looks exactly the same as the old one
52:12
in structure but we have a new coupling
52:15
k prime
52:17
and a pre-factor here that is this
52:20
function
52:21
f and that also depends on the coupling
52:25
also what we did here is that we now
52:27
have
52:31
a relationship between the partition
52:34
functions
52:35
at different stages of the
52:38
renormalization procedure
52:45
yeah and now
52:50
what does it mean look at
52:53
this one here k prime
52:57
this is already a description
53:02
of how
53:04
our coupling
53:05
one parameter k prime
53:09
depends on the old value of k now
53:12
how this parameter k
53:13
evolves in this coarse graining procedure
53:17
in this renormalization procedure
53:20
you know so this k that gets updated
53:23
now it's not in differential form yeah
53:26
like in
53:26
like uh like we did in the non-linear
53:28
dynamics
53:29
lecture but it's in this uh other way
53:32
that you can describe non-linear
53:34
systems by iterative updating now so
53:38
the new value of k
53:39
prime is given
53:43
is by this function here applied to the
53:45
old value
53:47
and now we have this updating
53:50
scheme here
53:52
and we can expand this term here the
53:54
logarithm
53:55
of the hyperbolic cosine and so for low
53:58
values of this coupling
54:00
this goes with k squared now so
54:03
to give you an idea
54:05
about how this looks i have already
54:08
prepared this
54:09
very nice and we can now solve this
54:11
equation here
54:13
graphically so we want to get the flow
54:15
and we can solve this graphically
54:17
and see where this renormalization
54:19
procedure
54:20
carries our k our coupling k
54:23
now and because our partition function
54:27
remains invariant
slide 13
54:30
well our k the update of our k
54:35
describes actually the behavior of our
54:37
hamiltonian
54:38
under renormalization
54:41
okay so this is the plot here so
54:44
what you do
54:44
is that you plot the left hand side
54:48
of this equation k prime it's just
54:51
linear with slope one
54:54
and uh the right hand side
54:58
now this is uh this here
55:02
and where the left hand side is equal to
55:04
the right hand side there you have a
55:06
fixed point
55:07
now so this is here the way you need to
55:10
read this this is the next value of the
55:11
renormalization
55:12
this is the previous one if you start
55:14
here we'll go
55:15
here then here and here and here
55:19
and at some point these two lines meet
55:22
and that's
55:22
that's where your fixed point is so this
55:25
renormalization procedure
55:27
will bring us to some fixed point which
55:29
happens
55:30
to be uh down here at zero
55:36
yeah and then we can look at how this
55:39
uh we can also then plot
55:42
a flow diagram as i did in a more
55:45
complicated way before
55:47
and how we also did it in the actual
55:49
nonlinear dynamics lecture
55:51
now we can plot it on this in this
55:53
one-dimensional line
55:55
then we have a stable fixed point at
55:57
zero
55:58
now that's here stable
56:02
and any value where we start with our
56:04
coarse graining procedure
56:06
will be driven to a value of k equals
56:09
zero
56:10
now we start with a very strong coupling
56:12
we renormalize
56:14
and we will be driven to zero
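(Again just a sketch from the editor's side, not from the slides: you can see this flow without any graphical construction by simply iterating the recursion k' = ½ ln cosh 2k:)

```python
import math

def k_prime(k):
    # the 1d Ising decimation map from the lecture: k' = (1/2) ln cosh(2k)
    return 0.5 * math.log(math.cosh(2.0 * k))

k = 2.0               # start from a strong coupling (low temperature)
trajectory = [k]
for _ in range(10):
    k = k_prime(k)
    trajectory.append(k)
print([round(x, 4) for x in trajectory])
# the couplings decrease monotonically toward the k = 0 fixed point;
# for small k the map behaves like k' ~ k^2, so the decay speeds up
```

This is exactly the cobweb construction in numbers: every starting value with finite k ends up at the stable fixed point k = 0.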
56:17
that means that this renormalization
56:21
procedure this coarse graining procedure
56:24
in this one-dimensional ising model
56:28
will always on the macroscopic scale on
56:31
large
56:31
length scales always lead to a model
56:35
that is effectively described by
56:40
a system that has zero coupling
56:43
yeah this coupling here vanishes and if
56:46
we have zero coupling that means that
56:48
our system
56:49
is above the critical point that's
56:52
non-critical
56:54
so we renormalize we go on and on
56:57
and we always end up on the system that
56:59
has very high temperature
57:01
or very low coupling yeah it is
57:04
a disordered system
57:06
and that's just a reflection of the fact
57:07
that the 1d ising model
57:10
doesn't have any order for a finite
57:12
temperature
57:15
now so you have to start with coupling
57:18
exactly equal to infinity
57:21
to get order or a temperature
57:24
exactly
57:24
equal to zero only then you can have
57:26
order everything else
57:28
will drive you to this fixed point here
57:32
that corresponds to a system where you
57:35
have no
57:36
coupling at all now so we
57:39
knew that already that the 1d
57:40
system doesn't show order the ising
57:42
model doesn't have order
57:44
it doesn't have a really critical point
57:47
and that's why our flow
57:48
tells us that on macroscopic scales this
57:51
system
57:51
goes to a system that doesn't have
57:55
any interaction so it's completely
57:56
disordered
slide 14
57:59
of course you can do the same procedure
58:01
for um
58:04
for the 2d system yeah and it is of
58:06
course
58:07
again much more complicated for this 2d
58:10
system
58:12
you get a flow diagram that looks like
58:15
this here
58:17
on the bottom so here you suddenly
58:21
have another fixed point an unstable
58:23
fixed point
58:25
in between these two extremes now this
58:28
unstable fixed point
58:30
here if you start to the left of this
58:33
unstable fixed point you are driven to
58:36
a state
58:37
without order as we had previously
58:40
where the coupling is very low
58:42
or that corresponds to the system at
58:43
very high temperature
58:45
if you start to the right of this you'll
58:47
be driven to a state
58:49
where you have order you know where your
58:51
coupling is basically infinity or your
58:53
temperature effective temperature
58:55
is zero and because you have now this
58:58
fixed point here this new fixed point
59:00
right you get
59:02
this singularity or this discontinuity
59:05
of the free energy because if you go a
59:06
little bit to the left
59:08
you go to a different
59:10
macroscopic state
59:11
than if you go a little bit to the right
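(The 2d recursion itself isn't written out in the lecture, so as a toy stand-in from the editor — purely hypothetical, not the real 2d Ising map — any iteration with an unstable fixed point separating two basins shows the same behavior, for example k' = k²:)

```python
def toy_map(k):
    # hypothetical stand-in with an unstable fixed point at k* = 1;
    # this is NOT the actual 2d Ising recursion, just an illustration
    return k * k

def fate(k0, steps=60):
    k = k0
    for _ in range(steps):
        k = toy_map(k)
        if k > 1e100:            # numerically "infinity": ordered side
            return "flows to infinite coupling (ordered)"
    if k < 1e-6:
        return "flows to zero coupling (disordered)"
    return "sits on the unstable fixed point"

for k0 in (0.9, 1.0, 1.1):
    print(k0, "->", fate(k0))
```

Starting just below or just above the unstable fixed point sends you to completely different macroscopic states, which is exactly the discontinuity in the free energy described here.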
59:14
and of course you can test that with
59:15
numerical simulations
59:17
now so that's here from the book of
59:19
cardy which is a very very nice book um
59:22
scaling and renormalization in
59:23
statistical physics
59:26
and i have to say that
59:29
the plots don't look very good on this
59:32
ipad
59:34
so what you see here are just
59:36
simulations of the 2d ising model
59:39
and what they did is they performed one
59:42
block spin renormalization procedure
59:45
that's that's what we did right now
59:47
that's the coarse grain so one step of
59:50
this coarse graining
59:52
and uh if you are right at the critical
59:55
point on the left hand side
59:57
you do the coarse graining step yeah
59:59
then
60:00
the system remains invariant if you
60:02
start in a fixed point you'll stay there
60:05
if you start a little bit away you do this coarse
60:07
graining procedure
60:08
and here there's nothing
60:10
fancy they just took the simulations
60:13
and they did one coarse graining step
60:15
they averaged here maybe over a
60:17
block of spins or so
60:19
and if you are above this
60:23
critical temperature that means you
60:24
start here on the left of this fixed
60:26
point
60:27
and if you do this coarse graining step
60:29
then your system looks more disordered
60:31
than before
60:33
yeah so this here it looks like a higher
60:35
temperature than this here because you
60:37
have a lot of these small domains
60:39
now this is just a reflection of this
60:41
like an intuitive
60:43
picture of how you go in this
60:45
renormalization procedure
60:47
to the left to a state that has no
60:49
coupling at all
60:50
k equals zero and if your coupling is
60:53
zero that's the same as when your
60:54
temperature
60:55
is very large and if you want to read
60:58
more about this
60:59
have a look at the book of john cardy
61:01
and also about what i just showed
61:03
you
61:04
there is
61:11
a nice
61:13
article by kadanoff
61:15
um on teaching the renormalization group
61:17
and he does these calculations also for
61:19
the 2d
61:20
model okay so now
61:23
we stayed in real space yeah and in
61:26
real space
61:28
uh it is very intuitive now
61:31
and it works for the 1d ising model
61:33
for the 2d ising model it gets already
61:34
complicated
61:36
and it's basically impractical to do
61:38
that
61:39
for general um
61:43
for general physical models now it
61:46
gets very complicated to do that
61:47
procedure
61:48
in real space and the reason is that
61:50
there's no small parameter involved
61:53
that you can use for an expansion
slide 15
61:57
then there was another guy called wilson
61:59
who came up with another idea
62:01
now that was actually and that's called
62:03
the wilson
62:05
momentum shell idea
62:09
that's the wilson momentum shell
62:11
idea so what does it mean so what was
62:14
wilson's idea was that we coarse grain
62:18
by integrating out fast degrees of
62:21
freedom
62:22
or degrees of freedom that have a very
62:24
short wavelength
62:26
in fourier space that's the way you do
62:29
that
62:30
is uh you look at fourier space so this
62:33
is our
62:34
fourier space let's say we have two
62:35
directions in fourier space
62:38
then we have here
62:44
a maximum wave vector that's the maximum
62:52
wave vector and this wave vector let's
62:55
call it
62:56
capital omega is
63:00
just given by the smallest
63:02
structure we can have in the system
63:04
that's a microscopic length scale
63:06
yeah and that's in these lattice systems
63:08
this typical
63:09
uh one over the
63:12
microscopic length the lattice spacing
63:15
yeah so we cannot go any smaller than
63:17
that
63:19
now starting from the smallest length
63:21
scale now so a description of our system
63:24
on the smallest length scale
63:27
we now integrate out the blue stuff here
63:32
that's this one here that's the momentum
63:35
shell
63:42
and we integrate out this momentum shell
63:45
until we reach
63:48
a new wavelength a new wave vector a new
63:51
length scale
63:54
omega prime is equal to the original
63:57
omega
63:58
divided by some number lambda
64:01
yeah so
64:05
we integrate out one bit in momentum
64:08
space
64:10
at a time and that means that we perform
64:13
an integration
64:15
uh on a momentum shell on a tiny shell
64:18
in momentum space and also our new field
64:26
in the momentum shell
64:32
is then called typically something like
64:35
phi
64:36
greater of q
64:40
and this is just defined by
64:43
phi of q
64:46
with q in this interval
64:51
omega over lambda to omega
64:56
yeah so we integrate
65:00
out one step and now wilson's scheme is
65:03
actually very similar
65:08
to what we've done already yes the first
65:10
step will be
65:12
to coarse grain
65:18
now
65:29
by rescaling
65:34
and that will give rise to some
65:38
change in the coefficients
65:46
in the action
65:49
and the second step is that we perform
65:56
this integration in the momentum shell
66:00
integrate out
66:04
short wavelength
66:08
fluctuations
66:12
or large momenta
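(A small sketch of this split in code — the editor's own illustration, with a real 1d field and a fat shell λ = 2 for visibility, instead of the thin shell one uses in the actual calculation:)

```python
import numpy as np

def split_momentum_shell(field, lam=2.0):
    """Split a real 1D field into slow modes and the fast momentum shell.

    Modes with |q| <= Omega/lam form the slow part phi_<; modes in the
    shell Omega/lam < |q| <= Omega form the fast part phi_> that
    Wilson's scheme integrates out.
    """
    phi_q = np.fft.fft(field)
    q = np.fft.fftfreq(field.size)   # |q| runs up to Omega = 0.5 here
    omega = 0.5
    slow_q = np.where(np.abs(q) <= omega / lam, phi_q, 0.0)
    fast_q = phi_q - slow_q          # everything inside the shell
    return np.fft.ifft(slow_q).real, np.fft.ifft(fast_q).real

rng = np.random.default_rng(1)
field = rng.standard_normal(256)
phi_slow, phi_fast = split_momentum_shell(field, lam=2.0)
print(np.allclose(phi_slow + phi_fast, field))  # True: the split is exact
```

In the actual scheme one doesn't just discard phi_> but integrates over it in the partition function, which is what produces the corrections to the couplings of the slow modes.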
66:19
as a second step and in the next
66:22
lecture
66:22
we'll do exactly this yeah we'll perform
66:24
this wilson renormalization group
66:26
procedure that is much more practical
66:29
than the
66:29
block spin renormalization group that we
66:32
had in the
66:32
beginning of this lecture and the good
66:36
thing about this wilson's
66:38
of wilson's idea is that it actually has
66:40
a small parameter
66:41
this momentum shell is very small
66:44
and uh yes so this
66:48
has a small parameter that means that we
66:50
can actually then
66:51
hope to get some approximative
66:55
uh scheme out of this
67:00
so that we can approximate our integrals
67:02
that we get from the coarse graining
67:04
yeah so we'll do that uh next week
67:07
for our little epidemic model and we'll
67:10
derive
67:11
the renormalization group flow from our
67:13
epidemic model
67:15
and from this flow we'll then get the
67:17
exponents that derive that describe
67:20
the behavior
67:23
of this epidemic model near the critical
67:26
point
67:27
yeah and um
67:30
exactly yeah so so that's what we'll do
67:33
i'll
67:33
just leave it for here today because and
67:35
then next week we do the calculation and
67:37
if you're
67:38
not interested in calculating that
67:41
because it's so
67:42
uh
67:46
shortly before christmas you're
67:48
free to skip the next lecture
67:50
yeah and uh officially i think it's not
67:53
a lecture date
67:56
but i wanted to get that done before
67:57
christmas so that after christmas on
68:00
january what is that fifth or so i can't
68:03
remember
68:03
um we'll actually start with data
68:06
science and
68:07
to look at some real data yeah okay
68:10
great
68:11
so that was today only the intuitive
68:13
part about renormalization so next week
68:15
we'll do
68:15
we'll see that in action and see how it
68:18
actually works in the non-equilibrium
68:19
system
68:22
bye
answering a question
68:28
excuse me yes
68:31
um could you please explain again why
68:33
were we trying to reach the fixed point
68:35
from the critical point
68:37
so so we're not trying to
68:41
it's just that you assume for this to
68:44
work that there is such a fixed point
68:47
that determines the flow of the
68:50
renormalization group yeah in this
68:53
generality you have to assume that that
68:55
there is
68:56
such a fixed point and from nonlinear
68:58
dynamics dynamical system
69:00
lecture that we have we know that once
69:02
we have such a fixed point
69:04
we basically know already how the system
69:07
behaves also in other parts
69:09
of uh of the phase space yeah so that's why
69:12
the system these fixed points are so
69:14
important now we have to assume that
69:16
there exists one
69:17
uh but we have to basically for every
69:19
individual model that we look at we have
69:21
to show that they actually
69:23
that we actually have
69:25
meaningful fixed points
69:27
or maybe more than one of course
69:31
yeah so but once you have the flow it's
69:33
a problem in nonlinear dynamics
69:35
so once you have the flow you
69:36
do what you do in nonlinear
69:38
dynamics so typically in these
69:40
books on renormalization they use a
69:42
different language that's a little bit
69:43
disconnected from this dynamical systems
69:47
field
69:48
no but what you do is you have
69:50
non-linear
69:51
differential equations and then you just
69:53
want to see what happens to these
69:55
nonlinear differential equations
69:57
and then if you ask this question then
70:00
in a nonlinear system you need to ask
70:01
about the fixed points
70:03
and about their stability yeah and
70:05
that's why this fixed point
70:07
in the renormalization group flow is so
70:10
important
70:11
now if there wasn't any fixed point at
70:13
all yeah then
70:14
this procedure wouldn't work so we
70:16
have to have such a fixed
70:19
point on the critical manifold for this
70:22
uh for this procedure to work
70:25
now we have to assume at this stage here
70:27
we have to assume
70:29
that it exists and if it exists then it
70:31
will determine
70:32
our flow yeah
70:36
but we don't make the system go there
70:38
so once we have such a fixed point
70:40
the renormalization group will
70:41
automatically
70:42
carry us to the fixed point now we don't
70:44
force the system to
70:47
go there if the fixed
70:49
point exists
70:50
we know that the flow
70:54
will be determined by
70:56
this
70:58
yeah so that's the that's the that's the
71:01
idea
71:02
but that's nonlinear systems theory that in
71:04
principle has not much to do with
71:06
renormalization
71:07
yeah it is a general property of the
71:10
dynamics
71:14
okay thank you okay great
71:24
any other questions
71:28
um i have a question um so
71:32
does this does this um
71:35
require you to have a very
71:38
good understanding of the microscopic
71:42
um dynamics i guess
71:45
you have to have a model to start with i
71:48
mean
71:48
um but that's kind of what i mean is
71:51
if you have some model that's leaving
71:55
out certain things that are
71:57
that maybe
72:01
you know you thought wasn't so important
72:03
or something like that
72:04
but then as you coarse grain more and
72:08
more
72:08
like does it like um
72:11
will it give rise to like a different rg
72:14
flow than
72:15
if you had included you know other other
72:19
uh okay i think i think that that
72:21
depends on
72:22
what you leave away and what you leave
72:24
out and what you keep so in principle
72:28
for the usual model including this si
72:30
model that we have here
72:32
that we're looking at and also but also
72:34
for the ising model and
72:36
they they fall a little bit out of the
72:37
blue when somebody presents them to you
72:40
and for example in the lecture now they
72:43
completely make sense that they
72:45
describe these systems
72:47
but usually people have used
72:50
renormalization studies
72:52
to show that additional terms don't
72:55
don't matter for these yeah also for
72:57
this active
72:58
uh for these active systems that we
73:00
talked about
73:02
with these aligning uh with these
73:04
aligning directions
73:05
and so on now for these systems i showed
73:08
you briefly a langevin equation and
73:09
people did a
73:10
lot of work to show that this is
73:12
actually the simplest
73:14
uh description that you can have that
73:15
describes this system now because the
73:18
renormalization they found
73:20
that all other terms so this is the
73:22
minimal set of terms that i need
73:24
to still describe the same
73:26
renormalization flow
73:27
yeah
73:31
and uh whether you get new terms i
73:34
so i wouldn't expect that if
73:35
you
73:37
i wouldn't expect to get new relevant
73:39
terms
73:40
out of nothing yeah in general you know
73:43
otherwise you could start with just
73:45
nothing at all and see what happens and
73:47
then you get like a theory of everything
73:49
i wouldn't expect these things to pop up
73:52
yeah but of course you get all kind of
73:53
messy things
73:55
now you can have if you start with the
73:57
ising model then in reality you get
73:59
higher order
74:00
interactions and so on and
74:04
and you have to basically have to get
74:06
rid of them
74:08
okay i think let's uh okay great
74:18
okay so if there are no more questions
74:20
then uh
74:21
see you all next week so
74:25
as usual next week i'll uh
74:28
start with a repetition of this and
74:31
explain it again before we
74:33
actually do the
74:34
real calculation okay bye see you next
74:40
week