Zeta Function Regularization
Imperial College London, Department of Theoretical Physics
Nicolas M Robles
Supervisor: Dr Arttu Rajantie
Date
Submitted in partial fulfilment of the requirements for the degree of Master of Science in Quantum Fields and Fundamental Forces of Imperial College London
Table of Contents

Introduction
Chapter I: An Introduction to Special Functions
Chapter II: Zeta Regularization in Quantum Mechanics
Chapter III: Path Integrals in Field Theory
Chapter IV: Zeta Regularization in Field Theory
Appendix
References
Introduction

Special functions have arisen constantly in mathematical and theoretical physics throughout the XIX and XX centuries. Indeed, the study of theoretical physics is plagued with special functions: gamma, Dirac, Legendre and Bessel functions, to name a few, are abundant in the indices of most modern treatises on physics. The most common special functions have been the ever-present gamma function and those which are solutions to differential equations. The zeta function is defined for a complex variable s as

    ζ(s) = Σ_{n=1}^∞ 1/n^s

for Re(s) > 1, and analytically continued to the whole complex plane except at s = 1, where it has a simple pole with residue 1. It is not a solution to any differential equation [1], which sets it apart from other special functions that have a more transparent physical meaning. Traditionally, the Riemann zeta function has had its implications in analytic number theory, especially in the distribution of prime numbers. As such it was regarded as a function that fell completely within the realm of pure mathematics, and it was temporarily excluded from physics. This changed in the last half of the XX century, when papers by S. Hawking [2] and by E. Elizalde, S. Odintsov and A. Romeo [3] explained how zeta regularization assigns finite values to superficially divergent sums. At a more academic level, however, zeta regularization is hardly ever mentioned in undergraduate quantum mechanics books, nor is it mentioned in Peskin and Schroeder or in Weinberg, which are some of the standard books on quantum field theory. It is precisely in QFT that the zeta function becomes apparent. In this MSc dissertation we will explain how the zeta function comes into play in both quantum mechanics and quantum field theory. To achieve this, the dissertation is divided in four chapters as follows.
The first chapter will be devoted to the study of the gamma and zeta functions. The lack of a course on special functions at Imperial College gives us a welcome opportunity to discuss these functions thoroughly. The treatment will be in rigorous definition-theorem-proof style, and the only essential prerequisites are those of complex analysis: convergence, analytic continuation and residue calculus. The functional equation is of particular importance:
    ζ(s) = 2(2π)^{s−1} Γ(1−s) sin(πs/2) ζ(1−s),

as well as the formulas

    ζ(0) = −1/2,    ζ′(0) = −(1/2) log(2π),

which will be used time and again. As we have just stated, zeta regularization consists in assigning finite values to superficially divergent sums, and this is precisely what the two formulas above do, since when written formally we have the odd-looking sums

    ζ(0) = 1 + 1 + 1 + ⋯ = −1/2,    −ζ′(0) = log 1 + log 2 + log 3 + ⋯ = (1/2) log(2π).
Of course, this needs a note of clarification. The zeta function can be written as

    ζ(s) = Σ_{n=1}^∞ 1/n^s = 1/(1 − 2^{1−s}) Σ_{n=1}^∞ (−1)^{n+1}/n^s.

The two expressions agree for Re(s) > 1; however, the alternating series on the RHS converges for Re(s) > 0 and is thus a step towards analytic continuation, and it tends to −1/2 as s → 0, hence ζ(0) = −1/2 in this sense. Indeed, this shows that in a certain sense zeta regularization can be thought of as a technique of complex analysis, namely analytic continuation. Because of its equivalence with other regularizations, this principle can be extended to other renormalizations.
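The continuation can be checked numerically. The sketch below (Python, standard library only; the cutoff N is an arbitrary choice) approximates ζ(2) by a partial sum with a tail correction and then applies the functional equation quoted above at s = −1 to recover ζ(−1) = −1/12, the value behind the formal sum 1 + 2 + 3 + ⋯.

```python
import math

# Partial sum for ζ(2) = Σ 1/n²; the integral tail Σ_{n>N} 1/n² ≈ 1/N is added back
N = 100000
zeta2 = sum(1.0 / n**2 for n in range(1, N + 1)) + 1.0 / N   # ≈ π²/6

# Functional equation ζ(s) = 2 (2π)^{s-1} Γ(1-s) sin(πs/2) ζ(1-s), evaluated at s = -1
s = -1
zeta_m1 = 2 * (2 * math.pi)**(s - 1) * math.gamma(1 - s) * math.sin(math.pi * s / 2) * zeta2

print(zeta_m1)   # ≈ -1/12 ≈ -0.0833
```

The analytic continuation does all the work here: the divergent sum never appears, only the convergent series on the other side of the functional equation.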
We have limited the discussion, however, to the essential aspects of the zeta functions that we will need for the rest of the dissertation and thus avoided the beautiful connection between number theory and the zeta function. The determinant of an operator A can be written as the infinite product of its eigenvalues
    log Det A = log ∏_n a_n = Tr log A = Σ_n log a_n.

When we take the operator to be A := −d²/dτ² + ⋯, the zeta function ζ_A arises naturally by using

    ζ_A(s) := Tr A^{−s} = Σ_n 1/a_n^s,    ζ′_A(0) = −Σ_n log a_n,
and the functional determinant of the operator is

    det A = exp(−ζ′_A(0)).

Quantum mechanical partition functions will then be computed via zeta regularization: taking the operator to be

    A_QM = −d²/dτ² + ω²,

we shall see that the partition function of a quantum mechanical harmonic oscillator is

    Z(β) = Tr e^{−βĤ} = ∫ dx ⟨x|e^{−βĤ}|x⟩ = 1/(2 sinh(βω/2)).
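Both statements are easy to verify in a finite setting. The sketch below (Python; the sample eigenvalues and the values of β, ω are arbitrary choices) checks that exp(−ζ′_A(0)) reproduces the product of eigenvalues for a finite spectrum, and that the mode sum Σ_n e^{−βω(n+1/2)} reproduces the closed form 1/(2 sinh(βω/2)).

```python
import math

# 1) det A = exp(-ζ'_A(0)) for a finite spectrum, where ζ_A(s) = Σ a_n^{-s}
eigenvalues = [1.0, 2.0, 5.0, 10.0]    # arbitrary sample spectrum

def zeta_A(s):
    return sum(a**(-s) for a in eigenvalues)

h = 1e-6                               # central difference for ζ'_A(0)
zeta_prime_0 = (zeta_A(h) - zeta_A(-h)) / (2 * h)
det_via_zeta = math.exp(-zeta_prime_0)
det_direct = math.prod(eigenvalues)

# 2) harmonic oscillator: Z(β) = Σ_n e^{-βω(n+1/2)} = 1/(2 sinh(βω/2))
beta, omega = 0.7, 1.3                 # arbitrary sample values
Z_sum = sum(math.exp(-beta * omega * (n + 0.5)) for n in range(500))
Z_closed = 1.0 / (2.0 * math.sinh(beta * omega / 2))

print(det_via_zeta, det_direct)        # both ≈ 100.0
print(Z_sum, Z_closed)
```

For a finite spectrum nothing needs regularizing, of course; the point of the chapters that follow is that the zeta definition continues to make sense when the product of eigenvalues diverges.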
We will see how the Riemann zeta function is used in the bosonic case, whereas a slightly more exotic zeta function (Hurwitz) is needed for fermions. The partition function of the harmonic oscillator will be our main example. Furthermore, we will make use of formulas proved in chapter 1, showing the necessity of having discussed them at length. Since fermionic theories demand anti-commutation relations, we will need Grassmann numbers, and we will therefore explain these in detail, but only insofar as is needed to compute partition functions for fermions. Finally, equipped with all the tools developed in the preceding three chapters, we will see how special functions can be used in field theory. Because of the way special functions arise in field theory, we have devoted a whole chapter to explaining the path integral approach to quantum field theory. This approach is complementary to the QFT/AQFT courses from the MSc. The only field theory we shall consider is ϕ⁴; however, it is important to note that zeta regularization can be applied to more complex field theories and even string theory [3]. We shall keep this section brief and avoid topics such as Feynman diagrams whenever possible. We will consider quantum field theories of a scalar field φ in the presence of an external source J. It will be shown how the generating functional [4]
    Z_E[J] = e^{−iE[J]} = ⟨Ω|e^{−iHT}|Ω⟩ = ∫ Dφ exp(i ∫ d⁴x (L[φ] + Jφ))

can be written as

    Z_E[J] = exp{S_Euc[φ(x₀), J]} / det[(−∂_µ∂^µ + m² + V″[φ(x₀)]) δ(x₁ − x₂)]

and can be computed as done in chapter 2 via zeta function regularization, as the determinant of the operator is already visible in the denominator of the RHS. By taking more complex operators, which account for field-theoretical Lagrangians, such as

    A_FT = −∂² + m² + (λ/2) φ₀²(x),
we shall see how zeta regularization is also used in field theory. We will show that by taking into account first order quantum corrections, the potential in the bare Lagrangian of the ϕ4 field theory is renormalized as [5]
    V(φ_cl) = (λ/4!) φ_cl⁴(x) + (λ² φ_cl⁴)/(256π²) [log(φ_cl²/M²) − 25/6],

where φ_cl is the classical field defined in terms of the source J and vacuum state Ω as

    φ_cl(x) = ⟨Ω|φ(x)|Ω⟩_J.

We will show this firstly by constructing a special zeta function ζ_A associated with the operator A. The key steps, for both the quantum mechanical and field-theoretical cases, are the connection between the zeta function of the operator and its eigenvalues,
    ζ_{−d²/dτ²}(s) = Σ_{n=1}^∞ (nπ/β)^{−2s} = (β/π)^{2s} ζ(2s),

as well as the link between the heat function G(x, y, t), solution of A_x G(x, y, t) = −∂G(x, y, t)/∂t, and the zeta function:

    ζ_A(s) Γ(s) = ∫₀^∞ dt t^{s−1} ∫ d⁴x G(x, x, t).
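The Mellin-transform identity linking the heat trace and ζ_A can again be checked on a toy spectrum. The sketch below (Python; the eigenvalues and the value of s are arbitrary choices) compares Γ(s)ζ_A(s) = Γ(s) Σ_n a_n^{−s} against a crude quadrature of ∫₀^∞ dt t^{s−1} Σ_n e^{−a_n t}, the discrete analogue of the spatial integral of G(x, x, t).

```python
import math

# Γ(s) ζ_A(s) = ∫_0^∞ dt t^{s-1} Tr e^{-tA}, checked for a toy spectrum {a_n}
eigenvalues = [1.0, 3.0, 4.0]          # arbitrary sample spectrum
s = 2.5                                # arbitrary point with Re(s) large enough

lhs = math.gamma(s) * sum(a**(-s) for a in eigenvalues)

# midpoint-rule quadrature of the heat-trace integral on (0, T]
T, N = 60.0, 200000
dt = T / N
rhs = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    heat_trace = sum(math.exp(-a * t) for a in eigenvalues)
    rhs += t**(s - 1) * heat_trace * dt

print(lhs, rhs)                        # should agree to several digits
```

The identity is just the Mellin transform of the heat trace taken term by term, which is why it holds for any spectrum with suitable growth.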
We will then discuss how coupling constants evolve in terms of scale dependence and, by exploring the analogy between field theory and statistical mechanics, we will compute the partition function of a QFT system. This will be the culmination of the formulas we have proved in chapter 1, the techniques developed in chapter 2 and the theory explained in chapter 3. Furthermore, this encapsulates the spirit of the technique we will use repeatedly. Finally, as a check that the technique yields the same results as other regularization techniques, we will derive the same results by using more standard methods such as the one-loop expansion approach.
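As a small consistency check on the renormalized potential quoted above (a sketch; the values of λ and M below are arbitrary, and the condition d⁴V/dφ⁴|_{φ=M} = λ is the standard renormalization convention used in [5]), one can verify by finite differences that the constant −25/6 is exactly what makes the fourth derivative at φ = M equal to the bare coupling:

```python
import math

lam, M = 0.1, 1.0   # arbitrary sample coupling and renormalization scale

def V(phi):
    # one-loop effective potential of φ⁴ theory as quoted in the text
    return (lam * phi**4 / 24.0
            + lam**2 * phi**4 / (256 * math.pi**2) * (math.log(phi**2 / M**2) - 25.0 / 6))

# central finite-difference approximation of d⁴V/dφ⁴ at φ = M
h = 0.05
d4V = (V(M + 2*h) - 4*V(M + h) + 6*V(M) - 4*V(M - h) + V(M - 2*h)) / h**4

print(d4V, lam)     # the two numbers agree to high accuracy
```

The λ²-term's fourth derivative vanishes identically at φ = M, so the quartic coupling measured at the scale M is λ itself.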
In terms of the knowledge of physics, the only prerequisite comes from field theory up to the notes of the QFT/AQFT courses of the MSc in QFFF. The appendix contains some formulas that did not fit in the presentation and makes the whole dissertation almost self-contained. References have been provided in each individual chapter, including pages where the main ideas have been explored. Overall, the literature on zeta regularization is sparse. The main sources for this dissertation have been Grosche and Steiner, Kleinert, Ramond, and the superb treatise by Elizalde, Odintsov and Romeo. It has been an objective to try to put results from different sources in a new light, by clarifying proofs and creating a coherent set of examples and applications which are related to each other and which are of increasing complexity. Finally, a few remarks which I have not been able to find in the literature have been made concerning a potential relationship between zeta functions and families of elementary particles. Different zeta functions come into play in quantum physics when computing partition functions. As we have said above, the Riemann zeta function is used in bosonic partition functions, whereas a slightly different zeta function is needed for the fermionic partition function:

Riemann zeta function ⇔ bosons
Hurwitz zeta function ⇔ fermions
It would be an interesting subject of research to investigate how each zeta function behaves with respect to its corresponding particle, and what kind of knowledge we can extract about the particle from its associated zeta function. For instance, a photon has an associated zeta function, and its partition function can be computed by using known facts about this zeta function; conversely, known facts about the boson might bring clarifications to properties of this zeta function. The celebrated Riemann hypothesis states that all the nontrivial zeros of the Riemann zeta function lie on the line s = 1/2 + it, with t real. Hence, another example would be to consider an operator which brings the zeta function to the form ζ(1/2 + it) and to investigate the behaviour of the zeta function with respect to this operator. This would amount to a 'translation' of the Riemann hypothesis and a study of its physical interpretations.
References

[1] J. Brian Conrey, "The Riemann Hypothesis", http://www.ams.org/notices/200303/fea-conreyweb.pdf
[2] Stephen Hawking, "Zeta function regularization of path integrals in curved spacetime", http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.cmp/1103900982
[3] E. Elizalde, S. Odintsov and A. Romeo, "Zeta Regularization Techniques with Applications"
[4] M. E. Peskin and D. V. Schroeder, "An Introduction to Quantum Field Theory"
[5] P. Ramond, "Field Theory: A Modern Primer"
An Introduction to Special Functions

Often in mathematics it is more natural to define a function in terms of an integral depending on a parameter rather than through power series, and the gamma function is one such case. Traditionally, it can be approached as a Weierstrass product or as a parameter-dependent integral. The approach chosen here to introduce the gamma function follows a course in complex analysis such as the one delivered by Eberhard Freitag and Rolf Busam [2]. We adopt the notation s = σ + it, which was introduced by Riemann in 1859 and which has become standard in the literature on the zeta function. Let us define the Euler Gamma function. The integral

    Γ(s) := ∫₀^∞ dt t^{s−1} e^{−t}    (1.1)

is well defined and defines a holomorphic function in the right half complex plane, where Re(s) > 0. The first lemma generalises the factorial function as follows.

Lemma 1. For any positive integer n we have Γ(n) = (n−1)!.
Proof. Note that Γ(1) = ∫₀^∞ dt e^{−t} = 1, and by integration by parts we have

    Γ(s+1) = ∫₀^∞ dt t^s e^{−t} = [−t^s e^{−t}]₀^∞ + s ∫₀^∞ dt t^{s−1} e^{−t} = s Γ(s)    (1.2)

for any s in the right half-plane. Now, for any positive integer n, we have

    Γ(n) = (n−1) Γ(n−1) = (n−1)(n−2) ⋯ 1 · Γ(1) = (n−1)!,    (1.3)

so the result is proved. □
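Numerically this is immediate: Python's standard math.gamma implements the Gamma function on the reals, so both the factorial property and the recursion can be spot-checked directly (a quick sketch; the point s = 2.3 is an arbitrary choice):

```python
import math

# Γ(n) = (n-1)! for positive integers n
for n in range(1, 10):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# the recursion Γ(s+1) = sΓ(s) at a non-integer point
s = 2.3
assert math.isclose(math.gamma(s + 1), s * math.gamma(s))
print("factorial and recursion properties verified")
```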
In order to have a complete view of the gamma function we need to extend it to a meromorphic function in the whole complex plane.
Lemma 2. Let c_n, n ≥ 0, be a sequence of complex numbers such that Σ_{n=0}^∞ |c_n| converges. Furthermore, let S = {−n | n ≥ 0 and c_n ≠ 0}. Then

    f(s) = Σ_{n=0}^∞ c_n/(s+n)    (1.4)

converges absolutely for s ∈ ℂ − S and uniformly on bounded subsets of ℂ − S. The function f is a meromorphic function on ℂ with simple poles at the points of S, and the residues are given by

    res_{s=−n} f(s) = c_n for any −n ∈ S.
Proof. Let us start by finding upper bounds. If |s| < R, then |s+n| ≥ n − R for all n ≥ R; therefore |s+n|^{−1} ≤ (n−R)^{−1} for |s| < R and n ≥ R. From this we can deduce that for n₀ > R we have

    Σ_{n=n₀}^∞ |c_n/(s+n)| ≤ Σ_{n=n₀}^∞ |c_n|/|s+n| ≤ Σ_{n=n₀}^∞ |c_n|/(n−R) ≤ (1/(n₀−R)) Σ_{n=n₀}^∞ |c_n|.    (1.5)

As such, the series Σ_{n>R} c_n/(s+n) converges absolutely and uniformly on the disk |s| < R and defines there a holomorphic function. It follows that Σ_{n=0}^∞ c_n/(s+n) is a meromorphic function on that disk with simple poles at the points of S in |s| < R. Since R is arbitrary, Σ_{n=0}^∞ c_n/(s+n) is a meromorphic function with simple poles at the points of S, and for any −n ∈ S we can write

    f(s) = c_n/(s+n) + Σ_{−k ∈ S−{−n}} c_k/(s+k) = c_n/(s+n) + g(s),    (1.6)

where g is holomorphic at −n. From this we see that the residues are indeed res_{s=−n} f(s) = c_n. This concludes the proof. □

Equipped with this lemma we are in a position to extend the Γ function as we wanted.
Theorem 1. The Γ function extends to a meromorphic function on the complex plane. It has simple poles at 0, −1, −2, −3, ⋯. The residues of Γ at −k are given by

    res_{s=−k} Γ(s) = (−1)^k / k!    (1.7)
for any integer k ≥ 0.

Proof. Let us split the gamma function as

    Γ(s) = ∫₀¹ dt t^{s−1} e^{−t} + ∫₁^∞ dt t^{s−1} e^{−t};    (1.8)

the second integral converges for any complex s and is an entire function. Let us expand the exponential function in the first integral:

    ∫₀¹ dt t^{s−1} e^{−t} = ∫₀¹ dt t^{s−1} Σ_{k=0}^∞ (−1)^k t^k/k! = Σ_{k=0}^∞ ((−1)^k/k!) ∫₀¹ dt t^{k+s−1} = Σ_{k=0}^∞ ((−1)^k/k!) · 1/(s+k);

these operations are valid for s in the right half-plane, as the exponential function is entire and its series converges uniformly on compact sets of the complex plane. The gamma function can now be written in a form where Lemma 2 can be used, i.e.

    Γ(s) = ∫₁^∞ dt t^{s−1} e^{−t} + Σ_{k=0}^∞ ((−1)^k/k!) · 1/(s+k)    (1.9)

for any s in the right half-plane. By the Lemma, the RHS defines a meromorphic function on the complex plane with simple poles at 0, −1, −2, −3, ⋯. The residues are given by a direct application of the lemma. □
Theorem 2. For any s ∈ ℂ away from the poles we have Γ(s+1) = sΓ(s). Proof. This follows from (1.2), which holds in the right half-plane, together with Theorem 1 and analytic continuation.
□
[Figure: a three-dimensional plot of the gamma function (image source: Wikipedia).]
Another important function related to the Γ function, also discovered by Euler, is the Beta function, which we proceed to develop as follows. Let Re(p), Re(q) > 0 and, in the integral that defines the gamma function, make the change of variable t = u² to obtain

    Γ(p) = ∫₀^∞ dt t^{p−1} e^{−t} = 2 ∫₀^∞ du u^{2p−1} e^{−u²}.

In an analogous form we have

    Γ(q) = 2 ∫₀^∞ dv v^{2q−1} e^{−v²}.
Multiplying these two together we have

    Γ(p)Γ(q) = 4 ∫₀^∞ ∫₀^∞ du dv e^{−(u²+v²)} u^{2p−1} v^{2q−1},

and switching to polar coordinates u = r cos θ, v = r sin θ, du dv = r dr dθ:

    Γ(p)Γ(q) = 4 ∫₀^∞ ∫₀^{π/2} dr dθ e^{−r²} r^{2(p+q)−1} cos^{2p−1}θ sin^{2q−1}θ
             = [2 ∫₀^∞ dr e^{−r²} r^{2(p+q)−1}] [2 ∫₀^{π/2} dθ cos^{2p−1}θ sin^{2q−1}θ]
             = 2 Γ(p+q) ∫₀^{π/2} dθ cos^{2p−1}θ sin^{2q−1}θ.

The integral can be simplified by setting z = sin²θ:

    2 ∫₀^{π/2} dθ cos^{2p−1}θ sin^{2q−1}θ = ∫₀¹ dz z^{q−1} (1−z)^{p−1}.
Next we define

    B(p, q) := ∫₀¹ dz z^{p−1} (1−z)^{q−1}

for Re(p), Re(q) > 0, and this gives the identity

    B(p, q) = B(q, p) = Γ(p)Γ(q) / Γ(p+q).

We denote by B the Beta function. Moreover, if 0 < x < 1 we have

    Γ(x)Γ(1−x) = Γ(x)Γ(1−x)/Γ(1) = B(x, 1−x) = ∫₀¹ dz z^{x−1} (1−z)^{−x}.
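The identity just derived can be spot-checked numerically (a sketch; the values p = 2.5, q = 1.5 are arbitrary choices) by comparing a midpoint-rule quadrature of the Beta integral against the Gamma quotient:

```python
import math

# B(p, q) = ∫_0^1 dz z^{p-1}(1-z)^{q-1} versus Γ(p)Γ(q)/Γ(p+q)
p, q = 2.5, 1.5                  # arbitrary sample values with Re > 0
N = 200000
B_quad = 0.0
for i in range(N):
    z = (i + 0.5) / N            # midpoint rule on [0, 1]
    B_quad += z**(p - 1) * (1 - z)**(q - 1) / N

B_gamma = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
print(B_quad, B_gamma)           # should agree to several digits
```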
We will evaluate this integral with an appropriate contour, but first we need to make one last change of variable, z = u/(u+1), which yields

    ∫₀¹ dz z^{x−1} (1−z)^{−x} = ∫₀^∞ du/(u+1)² · (u/(u+1))^{x−1} (1 − u/(u+1))^{−x} = ∫₀^∞ du u^{x−1}/(1+u).

Lemma 3. For 0 < y < 1 we have

    ∫₀^∞ du u^{−y}/(1+u) = π/sin(πy).    (1.10)
Proof. Let us use a keyhole contour, which is accomplished by cutting the complex plane along the positive real axis. On this region we define the function

    f(s) = s^{−y}/(1+s)

with the argument of s^{−y} equal to 0 on the upper side of the cut. Furthermore, the function f has a first-order pole at s = −1 with residue e^{−πiy}. See Figure 1 below.

[Figure 1: keyhole contour around the positive real axis.]

We are now to integrate this function along the path described in Figure 1: the path goes along the upper side of the cut from ε > 0 to R, then along the circle C_R of radius R centred at the origin, then along the lower side of the cut from R to ε, and at the end around the origin via the circle C_ε of radius ε, also centred at the origin. An application of the Cauchy residue theorem gives

    2πi e^{−πiy} = ∫_ε^R du u^{−y}/(1+u) + ∫_{C_R} dz z^{−y}/(1+z) − e^{−2πiy} ∫_ε^R du u^{−y}/(1+u) − ∫_{C_ε} dz z^{−y}/(1+z).
We can get rid of the integrals around the arcs by appropriate estimates. Note that for z ≠ 0 we have

    |z^{−y}| = |e^{−y log z}| = e^{−y Re(log z)} = e^{−y log|z|} = |z|^{−y},
    |z^{−y}/(1+z)| ≤ |z|^{−y}/|1+z| ≤ |z|^{−y}/|1−|z||,

and the arc integrals can be estimated as

    |∫_{C_R} dz z^{−y}/(1+z)| ≤ 2πR · R^{−y}/(R−1) → 0 as R → ∞,
    |∫_{C_ε} dz z^{−y}/(1+z)| ≤ 2πε · ε^{−y}/(1−ε) → 0 as ε → 0;

so that we are left with

    (1 − e^{−2πiy}) ∫₀^∞ du u^{−y}/(1+u) = 2πi e^{−πiy}.

We may rewrite this, dividing by e^{−πiy}, to obtain the final result

    (e^{πiy} − e^{−πiy}) ∫₀^∞ du u^{−y}/(1+u) = 2πi  ⟹  ∫₀^∞ du u^{−y}/(1+u) = π/sin(πy).    (1.11)

This proves the claim of the lemma. □
Let us now make the concluding remarks.

Theorem 3 (Euler reflection formula). For all s ∈ ℂ, s ∉ ℤ, one has

    Γ(s)Γ(1−s) = π/sin(πs).

Proof. The lemma we have just proved, combined with the Beta integral above (with y = 1 − x), can be written as

    Γ(x)Γ(1−x) = π/sin(π(1−x)) = π/sin(πx)

for 0 < x < 1. However, both sides of the equation above are meromorphic, hence the identity extends by analytic continuation and we have proved the theorem. □
Lemma 4. One has

    Γ(1/2) = ∫₀^∞ dt t^{−1/2} e^{−t} = ∫_{−∞}^∞ dt e^{−t²} = √π.

Proof. This follows by substituting s = 1/2 in the Euler reflection formula. □
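Both the reflection formula and the special value Γ(1/2) = √π are easy to spot-check with the standard library (a quick sketch; the sample points are arbitrary non-integer values):

```python
import math

# Euler reflection formula Γ(s)Γ(1-s) = π / sin(πs) at a few non-integer points
for s in (0.1, 0.3, 0.75, 1.4):
    lhs = math.gamma(s) * math.gamma(1 - s)
    rhs = math.pi / math.sin(math.pi * s)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)

# and the special value Γ(1/2) = √π
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
print("reflection formula and Γ(1/2) = √π verified")
```

Note that for s = 1.4 the factor Γ(1 − s) = Γ(−0.4) is negative, matching the sign of sin(πs) on that interval.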
Theorem 5. The Γ function has no zeroes.

Proof. Since s ↦ sin(πs) is an entire function, the RHS of Theorem 3 has no zeroes; therefore Γ(s) = 0 could only happen where s ↦ Γ(1−s) has poles. However, as we have argued before, the poles of Γ are at 0, −1, −2, −3, ⋯, so it follows that Γ(1−s) has poles at s = 1, 2, 3, ⋯. By the factorial formula, Γ(n+1) = n! ≠ 0 at those points, and so Γ has no zeroes. □
Intrinsically connected to the Γ function is the Euler γ constant. Let us first define it and prove its existence.

Lemma 5. If s_n = 1 + 1/2 + ⋯ + 1/n − log n, then lim_{n→∞} s_n exists. This limit is called the Euler γ constant.

Proof. Consider t_n = 1 + 1/2 + ⋯ + 1/(n−1) − log n. Geometrically, t_n represents the area of the n−1 regions between the upper Riemann sum and the exact value of ∫₁ⁿ dx x^{−1}; therefore t_n increases with n. We can write

    t_n = Σ_{k=1}^{n−1} [1/k − log((k+1)/k)],    lim_{n→∞} t_n = Σ_{k=1}^∞ [1/k − log(1 + 1/k)].

The series on the right converges to a positive constant since

    0 < 1/k − log(1 + 1/k) = 1/(2k²) − 1/(3k³) + 1/(4k⁴) − ⋯ ≤ 1/(2k²),

and this proves the lemma because lim_{n→∞} s_n = lim_{n→∞} t_n. □
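The limit converges slowly, like 1/(2n), so a numerical sketch that also subtracts this leading correction (the correction term is a standard asymptotic fact, not proved here) recovers γ to many digits:

```python
import math

# Euler's constant via s_n = H_n - log n; s_n - γ ≈ 1/(2n), so subtract it
n = 1000000
H_n = math.fsum(1.0 / k for k in range(1, n + 1))   # fsum avoids accumulation error
s_n = H_n - math.log(n)
gamma_refined = s_n - 1.0 / (2 * n)

print(s_n, gamma_refined)   # both ≈ 0.5772156649...
```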
Using the fact that (1 − t/n)^n → e^{−t} as n → ∞, it can be shown that

    Γ(s) = lim_{n→∞} ∫₀ⁿ dt t^{s−1} (1 − t/n)^n = lim_{n→∞} (1/n^n) ∫₀ⁿ dt t^{s−1} (n−t)^n,

and integrating by parts repeatedly yields

    Γ(s) = lim_{n→∞} (1/n^n) · (n(n−1)⋯1)/(s(s+1)⋯(s+n−1)) · n^{s+n}/(s+n) = lim_{n→∞} n! n^s/(s(s+1)⋯(s+n)).

Inverting both sides,

    1/Γ(s) = lim_{n→∞} s n^{−s} (1 + s)(1 + s/2) ⋯ (1 + s/n) = lim_{n→∞} s n^{−s} ∏_{k=1}^n (1 + s/k).

In order for the product to converge in the limit we need to insert the convergence factors e^{−s/k}, obtaining

    1/Γ(s) = lim_{n→∞} s n^{−s} e^{s(1+1/2+⋯+1/n)} ∏_{k=1}^n (1 + s/k) e^{−s/k} = lim_{n→∞} s e^{s(1+1/2+⋯+1/n−log n)} ∏_{k=1}^n (1 + s/k) e^{−s/k}.

However, by the use of Lemma 5 we know that the exponent converges to γ, so that

    1/Γ(s) = s e^{γs} ∏_{k=1}^∞ (1 + s/k) e^{−s/k}.    (1.12)

This is known as the Weierstrass product of the Γ function.

The Hurwitz zeta function ζ(s, a) is initially defined for σ > 1 by the series
    ζ(s, a) = Σ_{n=0}^∞ 1/(n+a)^s.    (1.13)
This is provided that n + a ≠ 0. The reason why we work with a generalized zeta function, rather than with the Riemann zeta function itself, is that fermions require this special kind of zeta function for their regularization; bosons, however, require only the Riemann zeta function (a = 1). [Figure: a three-dimensional plot of the Riemann zeta function (image source: MathWorld).] The discussion presented here of the properties of the Riemann zeta function has its foundations in Titchmarsh [3]. Although the roots of the functional equation go back to Riemann, the development of the Hurwitz zeta function presented here can be traced back to Apostol [1], which in turn draws on Ingham.
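For rational a the series can be summed against closed forms; for instance ζ(2, 1/2) = Σ_{n≥0} (n+1/2)^{−2} = 4 Σ_{m odd} m^{−2} = π²/2. A quick numerical sketch (the cutoff N is an arbitrary choice):

```python
import math

# Hurwitz zeta ζ(s, a) by direct summation at s = 2, a = 1/2,
# with the integral tail Σ_{n≥N} (n+a)^{-2} ≈ 1/(N+a) added back
N, a = 200000, 0.5
hurwitz = sum((n + a)**(-2.0) for n in range(N)) + 1.0 / (N + a)

print(hurwitz, math.pi**2 / 2)   # both ≈ 4.9348
```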
Let us now examine the properties of the Hurwitz zeta function.

Theorem 4. The series for ζ(s, a) converges absolutely for σ > 1. The convergence is uniform in every half-plane σ ≥ 1 + δ, δ > 0, so ζ(s, a) is an analytic function of s in the half-plane σ > 1.

Proof. All the statements follow from the inequalities

    Σ_{n=0}^∞ |(n+a)^{−s}| = Σ_{n=0}^∞ (n+a)^{−σ} ≤ Σ_{n=0}^∞ (n+a)^{−(1+δ)},

and this proves the claim. □
The analytic continuation of the zeta function to a meromorphic function in the complex plane is more complicated than in the case of the gamma function.

Proposition 1. For σ > 1 we have the integral representation

    Γ(s) ζ(s, a) = ∫₀^∞ dx x^{s−1} e^{−ax}/(1 − e^{−x}).    (1.14)

In the case of the Riemann zeta function, that is when a = 1, we have

    Γ(s) ζ(s) = ∫₀^∞ dx x^{s−1} e^{−x}/(1 − e^{−x}).    (1.15)
Proof. First we consider the case when s is real and s > 1, then extend the result to complex s by analytic continuation. In the integral for the gamma function we make the change of variable x = (n+a)t, where n ≥ 0, and this yields

    Γ(s) = ∫₀^∞ dx e^{−x} x^{s−1} = (n+a)^s ∫₀^∞ dt e^{−(n+a)t} t^{s−1},

which can be rearranged to

    (n+a)^{−s} Γ(s) = ∫₀^∞ dt e^{−nt} e^{−at} t^{s−1}.

Next, we sum over all n ≥ 0, and this gives

    ζ(s, a) Γ(s) = Σ_{n=0}^∞ ∫₀^∞ dt e^{−nt} e^{−at} t^{s−1},

where the series on the right is convergent if Re(s) > 1. To finish the proof we need to interchange the sum and the integral signs. This interchange is valid by the theory of Lebesgue integration (the integrands are non-negative, so the monotone convergence theorem applies); we do not pursue this more rigorously, as it would take us too far from the subject at hand. Therefore we may write

    ζ(s, a) Γ(s) = Σ_{n=0}^∞ ∫₀^∞ dt e^{−nt} e^{−at} t^{s−1} = ∫₀^∞ dt Σ_{n=0}^∞ e^{−nt} e^{−at} t^{s−1}.
However, for t > 0 we have 0 < e^{−t} < 1, and therefore we may sum the geometric series

    Σ_{n=0}^∞ e^{−nt} = 1/(1 − e^{−t}).

Therefore the integral becomes

    ζ(s, a) Γ(s) = ∫₀^∞ dt Σ_{n=0}^∞ e^{−nt} e^{−at} t^{s−1} = ∫₀^∞ dt e^{−at} t^{s−1}/(1 − e^{−t}).
Now we have the first part of the argument, and we need to extend this to all complex s with Re(s) > 1. To this end, note that both members are analytic for Re(s) > 1. In order to show that the right member is analytic, we assume 1 + δ ≤ σ ≤ c, where c > 1 and δ > 0. We then have

    |∫₀^∞ dt e^{−at} t^{s−1}/(1 − e^{−t})| ≤ ∫₀^∞ dt e^{−at} t^{σ−1}/(1 − e^{−t}) = ∫₀¹ dt e^{−at} t^{σ−1}/(1 − e^{−t}) + ∫₁^∞ dt e^{−at} t^{σ−1}/(1 − e^{−t}).

If 0 ≤ t ≤ 1 we have t^{σ−1} ≤ t^δ, and if t ≥ 1 we have t^{σ−1} ≤ t^{c−1}. Also, since e^t − 1 ≥ t for t ≥ 0, we then have

    ∫₀¹ dt e^{−at} t^{σ−1}/(1 − e^{−t}) ≤ ∫₀¹ dt e^{(1−a)t} t^δ/t ≤ e^{1−a} ∫₀¹ dt t^{δ−1} = e^{1−a}/δ,

and

    ∫₁^∞ dt e^{−at} t^{σ−1}/(1 − e^{−t}) ≤ ∫₁^∞ dt e^{−at} t^{c−1}/(1 − e^{−t}) ≤ ∫₀^∞ dt e^{−at} t^{c−1}/(1 − e^{−t}) = ζ(c, a) Γ(c).

This proves that the integral in the statement of the theorem converges uniformly in every strip 1 + δ ≤ σ ≤ c, where δ > 0, and therefore represents an analytic function in every such strip, hence also in the half-plane σ = Re(s) > 1. Therefore, by analytic continuation, (1.14) holds for all s with Re(s) > 1. □
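Identity (1.15) is easy to test numerically; at s = 2 the integral should equal Γ(2)ζ(2) = π²/6. A sketch with a midpoint-rule quadrature (the cutoff T and step count are arbitrary choices):

```python
import math

# ∫_0^∞ dx x^{s-1} e^{-x} / (1 - e^{-x}) at s = 2, to be compared with Γ(2)ζ(2) = π²/6
s = 2
T, N = 60.0, 300000
dx = T / N
integral = 0.0
for i in range(N):
    x = (i + 0.5) * dx               # midpoint rule; integrand → 1 as x → 0 for s = 2
    integral += x**(s - 1) * math.exp(-x) / (1.0 - math.exp(-x)) * dx

print(integral, math.pi**2 / 6)
```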
Consider the keyhole contour C: a loop around the negative real axis, as shown in Figure 2. The loop is made of three parts C₁, C₂ and C₃. The C₂ part is a positively oriented circle of radius ε < 2π about the origin, and C₁ and C₃ are the lower and upper edges of a cut in the z-plane along the negative real axis.

[Figure 2: the contour C around the negative real axis.]

This can be translated into the following parametrizations: z = r e^{−πi} on C₁ and z = r e^{πi} on C₃, where r varies from ε to ∞.

Proposition 2. If 0 < a ≤ 1, the function defined by the contour integral
    I(s, a) = (1/2πi) ∫_C dz z^{s−1} e^{az}/(1 − e^z)    (1.16)

is an entire function of s. Moreover, we have

    ζ(s, a) = Γ(1−s) I(s, a)    (1.17)

if Re(s) = σ > 1.
Proof. Write z^s = r^s e^{−πis} on C₁ and z^s = r^s e^{πis} on C₃. Let us consider an arbitrary compact disk |s| ≤ M; we proceed to prove that the integrals along C₁ and C₃ converge uniformly on every such disk. Since the integrand is an entire function of s, this will prove that the integral I(s, a) is entire. Along C₁ we have, for r ≥ 1,

    |z^{s−1}| = r^{σ−1} |e^{−πi(σ−1+it)}| = r^{σ−1} e^{πt} ≤ r^{M−1} e^{πM}

since |s| ≤ M. The same on C₃ gives

    |z^{s−1}| = r^{σ−1} |e^{πi(σ−1+it)}| = r^{σ−1} e^{−πt} ≤ r^{M−1} e^{πM},

also for r ≥ 1. Therefore, independently of which side of the cut we place ourselves on, we have for r ≥ 1

    |z^{s−1} e^{az}/(1 − e^z)| ≤ r^{M−1} e^{πM} e^{−ar}/(1 − e^{−r}) = r^{M−1} e^{πM} e^{(1−a)r}/(e^r − 1).

However, e^r − 1 > e^r/2 when r > log 2, so the integrand is bounded by A r^{M−1} e^{−ar}, where A is a constant depending on M but not on r. The integral ∫_ε^∞ dr r^{M−1} e^{−ar} converges for ε > 0, so this proves that the integrals along C₁ and C₃ converge uniformly on every compact disk |s| ≤ M, and hence I(s, a) is indeed an entire function of s.
To prove the second equation of the proposition, we have to split up the integral as

    2πi I(s, a) = (∫_{C₁} + ∫_{C₂} + ∫_{C₃}) dz z^{s−1} g(z),

where g(z) := e^{az}/(1 − e^z). According to the parametrizations, on C₁ and C₃ we have g(z) = g(−r), while on the circle C₂ we write z = ε e^{iθ}, where −π ≤ θ ≤ π. This gives us

    2πi I(s, a) = ∫_∞^ε dr r^{s−1} e^{−iπs} g(−r) + i ε^s ∫_{−π}^π dθ e^{isθ} g(ε e^{iθ}) + ∫_ε^∞ dr r^{s−1} e^{iπs} g(−r)
                = 2i sin(πs) ∫_ε^∞ dr r^{s−1} g(−r) + i ε^s ∫_{−π}^π dθ e^{isθ} g(ε e^{iθ}).
Dividing by 2i and naming the integrals I₁ and I₂, we obtain

    π I(s, a) = sin(πs) I₁(s, ε) + I₂(s, ε).

If we let ε → 0 we see that

    lim_{ε→0} I₁(s, ε) = ∫₀^∞ dr r^{s−1} e^{−ar}/(1 − e^{−r}) = Γ(s) ζ(s, a),
as long as σ > 1. In |z| < 2π the function g is analytic except for a first-order pole at z = 0. Therefore z g(z) is analytic everywhere inside |z| < 2π and hence bounded there, say |g(z)| ≤ B/|z|, where |z| = ε < 2π and B is a constant. We can then write

    |I₂(s, ε)| ≤ (ε^σ/2) ∫_{−π}^π dθ |e^{isθ}| (B/ε) = (B ε^{σ−1}/2) ∫_{−π}^π dθ e^{−tθ} ≤ πB e^{π|t|} ε^{σ−1}.

When we let ε → 0, provided that σ > 1, we find that I₂(s, ε) → 0, hence we have

    π I(s, a) = sin(πs) Γ(s) ζ(s, a).

Finally, by the use of the formula that we proved earlier, Γ(s)Γ(1−s) = π/sin(πs), we have a proof of (1.17). □
Now we have to extend this result to complex numbers such that σ ≤ 1. In the statement that we have just proved, the functions I(s, a) and Γ(1−s) make sense for every complex s, and thus we can use this equation to define ζ(s, a) for σ ≤ 1.

Definition 2. If σ ≤ 1 we define ζ(s, a) by the equation

    ζ(s, a) = Γ(1−s) I(s, a).    (1.18)

This provides the analytic continuation of ζ(s, a) to the entire s-plane.

Theorem 5. The function ζ(s, a) defined above is analytic for all s except for a simple pole at s = 1 with residue 1.

Proof. The function I(s, a) is entire, so the only possible singularities of ζ(s, a) must be the poles of Γ(1−s), and we have shown those to be the points s = 1, 2, 3, ⋯. However, Theorem 4 shows that ζ(s, a) is analytic at s = 2, 3, ⋯, so s = 1 is the only possible pole of ζ(s, a).
If s is an integer, s = n, the integrand in the contour integral for I(s, a) takes the same value on both C₁ and C₃, and hence the integrals along C₁ and C₃ cancel, yielding

    I(n, a) = (1/2πi) ∫_{C₂} dz z^{n−1} e^{az}/(1 − e^z) = res_{z=0} z^{n−1} e^{az}/(1 − e^z).

In this case we have s = 1, and so

    I(1, a) = res_{z=0} e^{az}/(1 − e^z) = lim_{z→0} z e^{az}/(1 − e^z) = lim_{z→0} z/(1 − e^z) = lim_{z→0} (−1/e^z) = −1.

Finally, the residue of ζ(s, a) at s = 1 is computed as

    lim_{s→1} (s−1) ζ(s, a) = −lim_{s→1} (1−s) Γ(1−s) I(s, a) = −I(1, a) lim_{s→1} Γ(2−s) = Γ(1) = 1.

Now the claim is complete: ζ(s, a) has a simple pole at s = 1 with residue 1. □
Let us remark that since ζ(s, a) is analytic at s = 2, 3, ⋯ and Γ(1−s) has poles at these points, (1.18) implies that I(s, a) vanishes at these points. Also, we have proved that the Riemann zeta function ζ(s) is analytic everywhere except for a simple pole at s = 1 with residue 1.
Lemma 6. Let S(r) designate the region that remains when we remove from the z-plane all open circular disks of radius r, 0 < r < π, with centres at z = 2nπi, n = 0, ±1, ±2, ⋯. Then if 0 < a ≤ 1 the function

    g(s) := e^{as}/(1 − e^s)

is bounded in S(r), with the bound depending on r.

Proof. With our usual notation s = σ + it, consider the rectangle Q(r) containing the circle at n = 0; this rectangle has an indentation as follows:

    Q(r) = {s : |σ| ≤ 1, |t| ≤ π, |s| ≥ r},

as shown in Figure 3 below.

[Figure 3: the indented rectangle Q(r).]

The set Q(r) so defined is compact, so g is bounded on Q(r). Also, because of the periodicity |g(s + 2πi)| = |g(s)|, g is bounded in the perforated infinite strip

    {s : |σ| ≤ 1, |s − 2nπi| ≥ r, n = 0, ±1, ±2, ⋯}.

Let us now suppose that |σ| ≥ 1 and consider

    |g(s)| = e^{aσ}/|1 − e^s|.

We can examine the numerator and denominator for σ ≥ 1, giving |1 − e^s| ≥ e^σ − 1 and e^{aσ} ≤ e^σ, so

    |g(s)| ≤ e^σ/(e^σ − 1) = 1/(1 − e^{−σ}) ≤ 1/(1 − e^{−1}) = e/(e − 1).

A similar argument when σ ≤ −1, where |1 − e^s| ≥ 1 − e^σ, gives

    |g(s)| ≤ e^{aσ}/(1 − e^σ) ≤ 1/(1 − e^σ) ≤ 1/(1 − e^{−1}) = e/(e − 1).

Therefore, as we claimed, |g(s)| ≤ e/(e − 1) for |σ| ≥ 1. □
Definition 3. The periodic zeta function is defined as

    F(x, s) := Σ_{n=1}^∞ e^{2πinx}/n^s,

where x is real and σ > 1. Let us remark the following properties of the periodic zeta function: it is indeed a periodic function of x with period 1, and F(1, s) = ζ(s).

Proposition 3. The series converges absolutely if σ > 1. If x is not an integer, the series also converges (conditionally) for σ > 0.

Proof. This is because for each fixed non-integral x the coefficients e^{2πinx} have bounded partial sums, so Dirichlet's test applies. □
Theorem 6 (Hurwitz’s formula) If 0 < a ≤ 1 and σ > 1 we have
(1.19)
ζ (1 − s ) =
Γ ( s ) − πis / 2 (e F ( a, s ) + e πis / 2 F ( − a, s )). (2π) s
If a ≠ 1 this is also valid for σ > 0. Proof. Consider the function defined by the contour integral
(1.20)
I (s, a) :=
1 z s −1eaz dz ∫( ) 1 − ez , 2πi C
where C(N) is the contour shown in Figure 4 and N is an integer. It is the same keyhole contour as that of the gamma function, only rotated for convenience. The poles are located on the y-axis, symmetric about the origin, at multiples of 2πi.

Figure 4

Let us first prove that if σ < 0 then

lim_{N→∞} I_N(s, a) = I(s, a).

The method of proof is to show that the integral along the outer circle tends to 0 as N → ∞. On the outer circle we have z = Re^{iθ}, −π ≤ θ ≤ π, hence

|z^{s−1}| = |R^{s−1} e^{iθ(s−1)}| = R^{σ−1} e^{−tθ} ≤ R^{σ−1} e^{π|t|}.

The outer circle lies inside the domain S(r) described in the lemma, so the integrand is bounded by A e^{π|t|} R^{σ−1}, where A is the bound for |g(s)| implied by the lemma; hence the whole integral is bounded by 2π A e^{π|t|} R^σ, which tends to 0 as R → ∞ as long as σ < 0. Replacing s by 1 − s this yields

I(1 − s, a) = lim_{N→∞} (1/2πi) ∫_{C(N)} z^{−s} e^{az}/(1 − e^z) dz = lim_{N→∞} I_N(1 − s, a)
for σ > 1. We are left with the problem of computing I_N(1 − s, a), which we proceed to do by use of the Cauchy residue theorem. Formally,

I_N(1 − s, a) = −∑_{n=−N, n≠0}^{N} res_{z=2nπi} f(z) = −∑_{n=1}^{N} ( res_{z=2nπi} f(z) + res_{z=−2nπi} f(z) ),

and the residues are calculated as follows:

res_{z=2nπi} f(z) = lim_{z→2nπi} (z − 2nπi) z^{−s} e^{az}/(1 − e^z) = (2nπi)^{−s} e^{2nπia} lim_{z→2nπi} (z − 2nπi)/(1 − e^z) = −e^{2nπia}/(2nπi)^s,

which in turn gives

I_N(1 − s, a) = ∑_{n=1}^{N} e^{2nπia}/(2nπi)^s + ∑_{n=1}^{N} e^{−2nπia}/(−2nπi)^s.
Now we make the replacements i^{−s} = e^{−πis/2} and (−i)^{−s} = e^{πis/2}, which allow us to write

I_N(1 − s, a) = (e^{−πis/2}/(2π)^s) ∑_{n=1}^{N} e^{2nπia}/n^s + (e^{πis/2}/(2π)^s) ∑_{n=1}^{N} e^{−2nπia}/n^s,

and letting N → ∞,

I(1 − s, a) = (e^{−πis/2}/(2π)^s) F(a, s) + (e^{πis/2}/(2π)^s) F(−a, s).
We have thus arrived at the following result:

(1.21) ζ(1 − s, a) = Γ(s) I(1 − s, a) = Γ(s)/(2π)^s { e^{−πis/2} F(a, s) + e^{πis/2} F(−a, s) }.

This proves the claim.
□
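As a numerical spot-check of (1.19), take s = 2 and a = 1/2: the right-hand side can then be summed directly and should equal ζ(−1, 1/2) = 1/24 (a value that also follows from Lemma 8 below). A minimal Python sketch — the function name F and the truncation point N are our choices, not from the text:

```python
import cmath, math

def F(x, s, N=100000):
    # periodic zeta function F(x, s) = sum_{n>=1} exp(2*pi*i*n*x)/n**s (Definition 3)
    return sum(cmath.exp(2j * math.pi * n * x) / n ** s for n in range(1, N + 1))

s, a = 2, 0.5
# Hurwitz's formula (1.19): Gamma(s)/(2*pi)**s * (e^{-pi i s/2} F(a,s) + e^{pi i s/2} F(-a,s))
rhs = (math.factorial(s - 1) / (2 * math.pi) ** s) * (
    cmath.exp(-1j * math.pi * s / 2) * F(a, s)
    + cmath.exp(1j * math.pi * s / 2) * F(-a, s))
print(rhs.real)  # close to zeta(-1, 1/2) = 1/24 = 0.041666...
```

Here both F(1/2, 2) and F(−1/2, 2) reduce to the alternating series ∑(−1)ⁿ/n² = −π²/12, so the right-hand side collapses to 1/24, in agreement with the formula.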
The simplest (and most important) particular case is a = 1, which gives the functional equation of the Riemann zeta function:

(1.22) ζ(1 − s) = Γ(s)/(2π)^s { e^{−πis/2} ζ(s) + e^{πis/2} ζ(s) } = 2 Γ(s)/(2π)^s cos(πs/2) ζ(s).

This is valid for σ > 1, but it also holds for all s by analytic continuation. Another useful formulation can be obtained by switching s with 1 − s:

(1.23) ζ(s) = 2(2π)^{s−1} Γ(1 − s) sin(πs/2) ζ(1 − s).
Let us now see the consequences of this equation. Taking s = 2n + 1 in (1.22), where n is a positive integer, the cosine factor vanishes and we find the trivial zeroes of ζ(s):

(1.24) ζ(−2n) = 0, n = 1, 2, 3, ⋯.
Later we will need certain other values of the Riemann zeta function, which we can now compute. In particular, the value of ζ(−n, a) can be calculated when n is a non-negative integer. Taking s = −n in the formula ζ(s, a) = Γ(1 − s) I(s, a) we find that

ζ(−n, a) = Γ(1 + n) I(−n, a) = n! I(−n, a),

where

I(−n, a) = res_{z=0} z^{−n−1} e^{az}/(1 − e^z).

The evaluation of this residue requires special functions of its own (a special type of polynomial, rather), known as the Bernoulli polynomials.
Definition 4 For any complex s we define the functions B_n(s) by

z e^{sz}/(e^z − 1) = ∑_{n=0}^∞ B_n(s) z^n/n!,

provided that |z| < 2π. A particular case of the polynomials are the Bernoulli numbers B_n = B_n(0), i.e.

z/(e^z − 1) = ∑_{n=0}^∞ B_n z^n/n!.
Lemma 7 One has B_n(s) = ∑_{k=0}^n (n k) B_k s^{n−k}, where (n k) denotes the binomial coefficient. In particular, when s = 1 this yields B_n = ∑_{k=0}^n (n k) B_k.
Proof. Using a Taylor expansion and comparing coefficients on both sides we have

∑_{n=0}^∞ B_n(s)/n! z^n = z e^{sz}/(e^z − 1) = ( ∑_{n=0}^∞ B_n/n! z^n )( ∑_{n=0}^∞ s^n/n! z^n ),

so that

B_n(s)/n! = ∑_{k=0}^n (B_k/k!) (s^{n−k}/(n − k)!),

and by passing n! to the RHS we obtain the lemma.
□
Now we can write the values of the zeta function in terms of Bernoulli numbers.
Lemma 8 For every integer n ≥ 0 we have

(1.25) ζ(−n, a) = −B_{n+1}(a)/(n + 1).

Proof. This follows from the previous observation that ζ(−n, a) = n! I(−n, a), so we just have to evaluate the integral I by the Cauchy residue theorem:

I(−n, a) = res_{z=0} z^{−n−1} e^{az}/(1 − e^z) = −res_{z=0} z^{−n−2} · z e^{az}/(e^z − 1) = −res_{z=0} z^{−n−2} ∑_{k=0}^∞ B_k(a) z^k/k! = −B_{n+1}(a)/(n + 1)!,

and multiplying by n! completes the proof.
Lemma 9 The recursion B_n(s + 1) − B_n(s) = n s^{n−1} is valid for Bernoulli polynomials if n ≥ 1, and in particular for Bernoulli numbers when n > 1: B_n(0) = B_n(1).
Proof. From the identity

z e^{(s+1)z}/(e^z − 1) − z e^{sz}/(e^z − 1) = z e^{sz}

it follows that

∑_{n=0}^∞ [B_n(s + 1) − B_n(s)]/n! z^n = ∑_{n=0}^∞ s^n/n! z^{n+1},

and as we did before, we equate coefficients of z^n to obtain the first statement, and then set s = 0 to obtain the second.
Using the definition, the first Bernoulli number is B₀ = 1 and the rest can be computed by recursion. Building on from the Bernoulli numbers we can construct the polynomials by use of the lemmas. The first few are:

n | Bernoulli polynomial B_n(s)                          | Bernoulli number B_n
0 | 1                                                    | 1
1 | s − 1/2                                              | −1/2
2 | s² − s + 1/6                                         | 1/6
3 | s³ − (3/2)s² + (1/2)s                                | 0
4 | s⁴ − 2s³ + s² − 1/30                                 | −1/30
5 | s⁵ − (5/2)s⁴ + (5/3)s³ − (1/6)s                      | 0
6 | s⁶ − 3s⁵ + (5/2)s⁴ − (1/2)s² + 1/42                  | 1/42
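Lemma 9 (with s = 0) combined with Lemma 7 gives the classical recursion ∑_{k=0}^{m} (m+1 k) B_k = 0 for m ≥ 1, which determines every Bernoulli number from B₀ = 1; Lemma 7 then rebuilds the polynomials. A short Python sketch reproducing the table above (the function names are our choices):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    # recursion sum_{k=0}^{m} C(m+1, k) B_k = 0 for m >= 1, starting from B_0 = 1
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, s, B):
    # Lemma 7: B_n(s) = sum_{k=0}^{n} C(n, k) B_k s^(n-k)
    return sum(comb(n, k) * B[k] * s ** (n - k) for k in range(n + 1))

B = bernoulli_numbers(6)
print(B)  # B_0..B_6: 1, -1/2, 1/6, 0, -1/30, 0, 1/42
print(bernoulli_poly(2, Fraction(1, 2), B))  # B_2(1/2) = 1/4 - 1/2 + 1/6 = -1/12
```

Exact rational arithmetic (fractions.Fraction) avoids any floating-point doubt about the vanishing of the odd Bernoulli numbers.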
Note that for n ≥ 0 we have

(1.26) ζ(−n) = −B_{n+1}/(n + 1),

and because of the trivial zeroes ζ(−2n) = 0 of the Riemann zeta function we confirm our observation that the odd Bernoulli numbers vanish, i.e. B_{2n+1} = 0 for n ≥ 1. Also note

(1.27) ζ(0) = −1/2.
Finally we can write a compact formula for the even values of the zeta function in terms of Bernoulli numbers.
Theorem 7 Suppose n is a positive integer; then

(1.28) ζ(2n) = (−1)^{n+1} (2π)^{2n} B_{2n}/(2(2n)!).

Proof. This follows from the functional equation by setting s = 2n,

ζ(1 − 2n) = 2(2π)^{−2n} Γ(2n) cos(πn) ζ(2n),

which on using (1.26) and re-arranging becomes

−B_{2n}/(2n) = 2(2π)^{−2n} (2n − 1)! (−1)^n ζ(2n),

from which the result follows.
□
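Theorem 7 is easy to test numerically: the closed form with B₂ = 1/6 and B₄ = −1/30 (taken from the table above) should match partial sums of the defining series. A quick sketch (names and truncation point are our choices):

```python
import math

def zeta_partial(s, N=100000):
    # direct partial sum of sum_{k>=1} 1/k**s, valid for Re(s) > 1
    return sum(1.0 / k ** s for k in range(1, N + 1))

B = {2: 1.0 / 6.0, 4: -1.0 / 30.0}  # Bernoulli numbers from the table above
closed = {2 * n: (-1) ** (n + 1) * (2 * math.pi) ** (2 * n) * B[2 * n]
                 / (2 * math.factorial(2 * n)) for n in (1, 2)}

print(closed[2], zeta_partial(2))  # both close to pi^2/6  = 1.644934...
print(closed[4], zeta_partial(4))  # both close to pi^4/90 = 1.082323...
```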
Using the tabulated values given above we have the well-known Euler formulas

ζ(2) = π²/6, ζ(4) = π⁴/90.

Let us remark that no such formula for odd values is known, and in fact the value of ζ(3) remains one of the most elusive mysteries of modern mathematics. A significant advance was achieved by Roger Apéry, who showed in 1979 that ζ(3) is an irrational number.
Theorem 8 ζ'(0) = −(1/2) log(2π).
Proof. We may write (1.16) and (1.17) as

ζ(s)/Γ(1 − s) = (1/2πi) ∫_C (dz/z) (−z)^s/(e^z − 1),

where C is the same contour as that of Figure 2, except that it encircles the positive real axis instead of the negative one, and accordingly we take (−z)^s = exp[s log(−z)].
Let us differentiate with respect to s under the integral and then set s = 1; we obtain

(1/2πi) ∫_C (dz/z) (−z) log(−z)/(e^z − 1).

The integral on the RHS can be split as

(1/2πi) ∫_{+∞}^{ε} (dz/z) (−z)(log z − iπ)/(e^z − 1) + (1/2πi) ∫_{|z|=ε} (dz/z) (−z)(log ε + iθ − iπ)/(e^z − 1) + (1/2πi) ∫_{ε}^{∞} (dz/z) (−z)(log z + iπ)/(e^z − 1),

and writing z = εe^{i(φ+π)} in the middle integral this becomes

−∫_{ε}^{∞} dz/(e^z − 1) − (log ε)/(2πi) ∫_{|z|=ε} (dz/z) · z/(e^z − 1) − (1/2πi) ∫_{−π}^{π} dφ · φ · z/(e^z − 1).

At this point we need to evaluate all three integrals. The first can be expanded as

−∫_{ε}^{∞} dz/(e^z − 1) = −∫_{ε}^{∞} dz ∑_{n=1}^∞ e^{−nz} = −∑_{n=1}^∞ (e^{−ε})^n/n = log(1 − e^{−ε}) = log(ε − ε²/2 + ε³/6 − ⋯) = log ε + log(1 − ε/2 + ⋯).

The second integral is solved by Cauchy's theorem: at z = 0 we have z/(e^z − 1) → 1, and therefore

−(log ε)/(2πi) ∫_{|z|=ε} (dz/z) · z/(e^z − 1) = −log ε.

Finally, the third integral goes to zero as ε → 0. Putting these facts together we have shown that

(1/2πi) ∫_C (dz/z) (−z) log(−z)/(e^z − 1) = lim_{ε→0} (log ε − log ε + ⋯) = 0.

Re-arranging the functional equation (1.23),
ζ(s)/Γ(1 − s) = 2(2π)^{s−1} sin(πs/2) ζ(1 − s),

and because the derivative of the LHS at s = 1 is zero, as we have just shown, the logarithmic derivative of the RHS,

log(2π) − ζ'(1 − s)/ζ(1 − s) + (π/2) cos(πs/2)/sin(πs/2),

must also vanish at s = 1. Since cos(π/2) = 0, this gives

(1.29) log(2π) = ζ'(0)/ζ(0),

finally yielding the result of Theorem 8 by use of (1.27).
□
Theorem 9 ζ(0, 1/2) = 0 and ζ'(0, 1/2) = −(1/2) log 2.
Proof. We have the following identity:

ζ(s, 1/2) + ζ(s) = 2^s ∑_{n=1}^∞ [ 1/(2n − 1)^s + 1/(2n)^s ] = 2^s ζ(s),

from which it follows that

(1.30) ζ(s, 1/2) = (2^s − 1) ζ(s),

and hence the first formula follows at s = 0, since ζ(0) is finite and 2⁰ − 1 = 0. Differentiating (1.30) with respect to s gives ζ'(s, 1/2) = 2^s log 2 · ζ(s) + (2^s − 1) ζ'(s), which at s = 0 yields ζ'(0, 1/2) = ζ(0) log 2 = −(1/2) log 2.
□
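Both (1.30) and the series it encodes can be verified numerically for σ > 1; a minimal sketch (the function names and truncation point are our choices):

```python
import math

def hurwitz_half(s, N=100000):
    # zeta(s, 1/2) = sum_{k>=0} (k + 1/2)**(-s)
    return sum((k + 0.5) ** (-s) for k in range(N))

def zeta_partial(s, N=100000):
    return sum(k ** (-s) for k in range(1, N + 1))

for s in (2, 3):
    print(s, hurwitz_half(s), (2 ** s - 1) * zeta_partial(s))
# e.g. for s = 2 both sides equal pi^2/2 = 4.934802...
```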
REFERENCES
There exist many sources for the development of the Riemann zeta function; the most comprehensive one is by E. C. Titchmarsh (revised by D. R. Heath-Brown). Here we have followed the course on analytic number theory by Tom Apostol [2] for the presentation of the zeta function and its analytic continuation. The part on the gamma function follows the course on Complex Analysis by Freitag and Busam [1] and Whittaker and Watson's Modern Analysis [4].
Zeta Regularization in Quantum Mechanics
One of the first instances where the Riemann zeta function occurs in quantum physics is in the path integral development of the harmonic oscillator; specifically, it appears when computing the partition function of the spectrum of the harmonic oscillator. We will follow Exercise 9.2, p. 312, from Peskin and Schroeder and the path integral development from Chapter 2 of Kleinert. The action of the one-dimensional harmonic oscillator is given by

(2.1) S = ∫_{t_i}^{t_f} dt L,

where the Lagrangian is

(2.2) L = (1/2)m ẋ² − (1/2)mω² x².
As we know from the functional approach to quantum mechanics, the transition amplitude is the functional integral

(2.3) ⟨x_f, t_f | x_i, t_i⟩ = ∫ Dx e^{iS[x(t)]}.

The extremum x_c(t) of S satisfies

(2.4) δS[x]/δx(t) |_{x=x_c(t)} = 0.

This indicates that x_c(t) is the classical trajectory connecting both space-time points of the amplitude, and therefore it satisfies the Euler-Lagrange equation

(2.5) ẍ_c + ω² x_c = 0.

We now proceed to expand the action around x_c(t). The solution of (2.5) with conditions x_c(t_i) = x_i and x_c(t_f) = x_f is

(2.6) x_c(t) = (sin ωT)^{−1} [ x_f sin ω(t − t_i) + x_i sin ω(t_f − t) ],

where T = t_f − t_i. We next plug this solution into the action S:

(2.7) S_c := S[x_c] = mω/(2 sin ωT) [ (x_f² + x_i²) cos ωT − 2 x_f x_i ].
As we intended originally, we now expand S[x] around x = x_c to obtain

(2.8) S[x_c + y] = S[x_c] + ∫ dt y(t) δS[x]/δx(t) |_{x=x_c} + (1/2!) ∫ dt₁ dt₂ y(t₁) y(t₂) δ²S[x]/δx(t₁)δx(t₂) |_{x=x_c},

where y(t) satisfies the boundary condition

(2.9) y(t_i) = y(t_f) = 0.

The expansion ends at second order because the action is second order in x. Noting that δS[x]/δx = 0 at x = x_c, we are left with the first and last terms only. Because the expansion is finite,

(2.10) S[x_c + y] = S[x_c] + (1/2!) ∫ dt₁ dt₂ y(t₁) y(t₂) δ²S[x]/δx(t₁)δx(t₂) |_{x=x_c},
the problem can be solved analytically. Let us now compute the second order functional derivative in the integrand; this can be accomplished as follows:

(2.11) δ/δx(t₁) ∫_{t_i}^{t_f} dt [ (1/2)m ẋ²(t) − (1/2)mω² x²(t) ] = −m (d²/dt₁² + ω²) x(t₁).

Next, using the rule

(2.12) δx(t₁)/δx(t₂) = δ(t₁ − t₂),

we obtain the following expression for the second order functional derivative:

(2.13) δ²S[x]/δx(t₁)δx(t₂) = −m (d²/dt₁² + ω²) δ(t₁ − t₂).

Plugging this back into the equation for the expansion, we can use the delta function to get rid of one time variable:

(2.14) S[x_c + y] = S[x_c] − (m/2!) ∫ dt₁ dt₂ y(t₁) y(t₂) (d²/dt₁² + ω²) δ(t₁ − t₂) = S[x_c] + (m/2) ∫ dt (ẏ² − ω² y²),
where we have simplified the expression by integrating by parts and using (2.9). A crucial point is that, because the measure Dx is translation invariant, we may replace it by Dy, and this gives

(2.15) ⟨x_f, t_f | x_i, t_i⟩ = e^{iS[x_c]} ∫_{y(t_i)=y(t_f)=0} Dy exp[ i (m/2) ∫_{t_i}^{t_f} dt (ẏ² − ω² y²) ].

The fluctuation part (the integral with y(0) = y(T) = 0)

(2.16) I_f := ∫_{y(0)=y(T)=0} Dy exp[ i (m/2) ∫_0^T dt (ẏ² − ω² y²) ]
is computed as follows. First, let us shift the time variable so that time starts at 0 and ends at T. We Fourier expand y(t) as

(2.17) y(t) = ∑_{n=1}^∞ a_n sin(nπt/T).

This choice satisfies (2.9). The integral in the exponential gives

(2.18) ∫_0^T dt (ẏ² − ω² y²) = (T/2) ∑_{n=1}^∞ a_n² [ (nπ/T)² − ω² ].
In order to have a well-defined transformation we must check that the number of variables is the same before and after the transformation. Indeed, the Fourier transformation from y(t) to {a_n} may be thought of as a change of variables in the integration. To check this, take the number of time slices to be N + 1, including both t = 0 and t = T, for which there exist N − 1 independent variables y_k. Therefore, we must set a_n = 0 for all n > N − 1. Next, we compute the corresponding Jacobian. Denote by t_k the kth time slice when the interval [0, T] is split into infinitesimal parts; then

(2.19) J_N = det( ∂y_k/∂a_n ) = det( sin(nπt_k/T) ).
We evaluate the Jacobian in the easiest possible case, which is that of the free particle. Therefore, let us make a digression to evaluate the corresponding probability amplitude. For a free particle the Lagrangian is L = (1/2)m ẋ². Carefully derived solutions for the free particle can be found in Chapter 2 of Kleinert as well as in Chapter 3 of Grosche and Steiner. The amplitude is computed by first noting that the Hamiltonian is given by

(2.20) H = p ẋ − L = p²/(2m),

so that

(2.21) ⟨x_f, t_f | x_i, t_i⟩ = ⟨x_f| e^{−iĤT} |x_i⟩ = ∫ dp ⟨x_f| e^{−iĤT} |p⟩⟨p| x_i⟩ = ∫ dp/(2π) e^{ip(x_f − x_i)} e^{−iTp²/(2m)} = √(m/(2πiT)) exp[ im(x_f − x_i)²/(2T) ],
where T = t_f − t_i as noted before, and ε denotes the discretized time step ε = T/N. From the theory of functional integration we have the time-sliced amplitude

(2.22) ⟨x_f, t_f | x_i, t_i⟩ = lim_{N→∞} (m/(2πiε))^{N/2} ∫ dx₁ ⋯ dx_{N−1} exp[ iε ∑_{k=1}^{N} (m/2) ((x_k − x_{k−1})/ε)² ].
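The Gaussian momentum integral in (2.21) is oscillatory, but it can be checked numerically by giving T a small negative imaginary part, T → T − iδ, which damps the integrand. A sketch with m = 1 and arbitrarily chosen T, δ and x = x_f − x_i (all numerical values are our choices):

```python
import cmath, math

m, T, delta, x = 1.0, 1.0, 0.05, 1.0
Tc = T - 1j * delta  # slightly damped time makes the integral absolutely convergent

# midpoint rule for  dp/(2*pi) * exp(i*p*x - i*Tc*p**2/(2*m))
L, dp = 30.0, 0.001
acc = 0j
for k in range(int(2 * L / dp)):
    p = -L + (k + 0.5) * dp
    acc += cmath.exp(1j * p * x - 1j * Tc * p * p / (2 * m))
acc *= dp / (2 * math.pi)

# closed form sqrt(m/(2*pi*i*Tc)) * exp(i*m*x**2/(2*Tc)), as in (2.21)
closed = cmath.sqrt(m / (2 * math.pi * 1j * Tc)) * cmath.exp(1j * m * x * x / (2 * Tc))
print(abs(acc - closed))  # small: the quadrature reproduces the closed form
```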
We now change coordinates to y_k = (m/(2ε))^{1/2} x_k, so that the amplitude becomes

(2.23) ⟨x_f, t_f | x_i, t_i⟩ = lim_{N→∞} (m/(2πiε))^{N/2} (2ε/m)^{(N−1)/2} ∫ dy₁ ⋯ dy_{N−1} exp[ i ∑_{k=1}^{N} (y_k − y_{k−1})² ].
In the appendix we prove by induction that

(2.24) ∫ dy₁ ⋯ dy_{N−1} exp[ i ∑_{k=1}^{N} (y_k − y_{k−1})² ] = ( (iπ)^{N−1}/N )^{1/2} e^{i(y_N − y_0)²/N}.
The expression then reduces to

(2.25) ⟨x_f, t_f | x_i, t_i⟩ = lim_{N→∞} (m/(2πiε))^{N/2} (2ε/m)^{(N−1)/2} ( (iπ)^{N−1}/N )^{1/2} e^{im(x_f − x_0)²/(2Nε)} = √(m/(2πiT)) exp[ im(x_f − x_0)²/(2T) ].

Taking (2.25) into account, and setting m = 1 from now on for simplicity, we arrive at

(2.26) ⟨x_f, T | x_i, 0⟩ = (1/(2πiT))^{1/2} exp[ i(x_f − x_i)²/(2T) ] = (1/(2πiT))^{1/2} e^{iS[x_c]}.

When we write this in terms of a path integral we obtain

(2.27) e^{iS[x_c]} ∫_{y(0)=y(T)=0} Dy exp[ (i/2) ∫_0^T dt ẏ² ] = (1/(2πiT))^{1/2} e^{iS[x_c]}.
Now, from (2.18) with ω = 0 we have

(2.28) (m/2) ∫_0^T dt ẏ² → m ∑_{n=1}^{N−1} a_n² n²π²/(4T),

and when we compare both path integral expressions one has the equality

(2.29) ∫_{y(0)=y(T)=0} Dy exp[ i (m/2) ∫_0^T dt ẏ² ] = (1/(2πiT))^{1/2} = lim_{N→∞} J_N (1/(2πiε))^{N/2} ∫ da₁ ⋯ da_{N−1} exp[ i m ∑_{n=1}^{N−1} a_n² n²π²/(4T) ].
Now comes the evaluation of the Gaussian integrals:

(2.30) (1/(2πiT))^{1/2} = lim_{N→∞} J_N (1/(2πiε))^{N/2} ∏_{n=1}^{N−1} ( 4πiT/(n²π²) )^{1/2} = lim_{N→∞} J_N (1/(2πiε))^{N/2} (4πiT/π²)^{(N−1)/2} 1/(N − 1)!,

and from this we obtain a formula for the Jacobian:

(2.31) J_N = N^{−N/2} 2^{−(N−1)/2} π^{N−1} (N − 1)! → ∞ as N → ∞.
The Jacobian is divergent, but this divergence is not relevant because J_N is combined with other divergent factors. Let us now return to the original problem of the probability amplitude of the harmonic oscillator. The amplitude was

(2.32) ⟨x_f, T | x_i, 0⟩ = lim_{N→∞} J_N (1/(2πiε))^{N/2} e^{iS[x_c]} ∫ da₁ ⋯ da_{N−1} exp[ i (mT/4) ∑_{n=1}^{N−1} a_n² ( (nπ/T)² − ω² ) ].

As we did with the free particle, we carry out the computation of the Gaussian integrals using the formula

(2.33) ∫ da_n exp[ i (mT/4) a_n² ( (nπ/T)² − ω² ) ] = ( 4iT/(πn²) )^{1/2} ( 1 − (ωT/nπ)² )^{−1/2}.
This breaks the amplitude (2.32) into smaller parts:

(2.34) ⟨x_f, t_f | x_i, t_i⟩ = lim_{N→∞} J_N (1/(2πiε))^{N/2} ∏_{k=1}^{N−1} ( 4iT/(πk²) )^{1/2} ∏_{n=1}^{N−1} ( 1 − (ωT/nπ)² )^{−1/2} e^{iS[x_c]}
= lim_{N→∞} (1/(2πiT))^{1/2} ∏_{n=1}^{N−1} ( 1 − (ωT/nπ)² )^{−1/2} e^{iS[x_c]}.

It can be shown that this product is equal to

(2.35) lim_{N→∞} ∏_{n=1}^{N−1} ( 1 − (ωT/nπ)² ) = sin ωT/(ωT).
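Equation (2.35) is Euler's product for the sine; the truncated product converges to sin x/x at rate O(1/N), which is easy to confirm numerically (the value of x and the truncation N are arbitrary choices):

```python
import math

def sine_product(x, N=100000):
    # truncated Euler product prod_{n=1}^{N} (1 - (x/(n*pi))**2), cf. (2.35)
    prod = 1.0
    for n in range(1, N + 1):
        prod *= 1.0 - (x / (n * math.pi)) ** 2
    return prod

x = 1.3  # stands for omega*T
print(sine_product(x), math.sin(x) / x)  # both close to 0.7412
```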
The divergence of J_N cancels the divergence of the other terms, and therefore we are left with a finite value, confirming what we stated above concerning the irrelevance of J_∞. When we insert the value of the product we arrive at the final result

(2.36) ⟨x_f, t_f | x_i, t_i⟩ = ( ω/(2πi sin ωT) )^{1/2} e^{iS[x_c]} = ( ω/(2πi sin ωT) )^{1/2} exp[ iω/(2 sin ωT) { (x_f² + x_i²) cos ωT − 2 x_i x_f } ].
If we have a Hamiltonian Ĥ whose spectrum is bounded from below then, by adding a positive constant to the Hamiltonian, we can make Ĥ positive definite, i.e.

(2.37) spec Ĥ = { 0 < E₀ ≤ E₁ ≤ ⋯ ≤ E_n ≤ ⋯ }.

We also assume that the ground state is not degenerate. The spectral decomposition of e^{−iĤt} is

(2.38) e^{−iĤt} = ∑_n e^{−iE_n t} |n⟩⟨n|,

where Ĥ|n⟩ = E_n|n⟩, and this decomposition is analytic in the lower half-plane of t. As we have done when evaluating the Gaussian integrals, we introduce the Wick rotation t = −iτ, where τ is real and positive; this gives ẋ = i dx/dτ and e^{−iĤt} = e^{−Ĥτ}, so that
(2.39) i ∫_{t_i}^{t_f} dt [ (1/2)m ẋ² − V(x) ] = i(−i) ∫_{τ_i}^{τ_f} dτ [ −(1/2)m (dx/dτ)² − V(x) ] = −∫_{τ_i}^{τ_f} dτ [ (1/2)m (dx/dτ)² + V(x) ].

Consequently, the path integral becomes

(2.40) ⟨x_f, t_f | x_i, t_i⟩ = ⟨x_f| e^{−Ĥ(τ_f − τ_i)} |x_i⟩ = ∫ Dx exp[ −∫_{τ_i}^{τ_f} dτ ( (1/2)m (dx/dτ)² + V(x) ) ],

where Dx is the integration measure in the imaginary time τ. Equation (2.40) shows the connection between the functional approach and statistical mechanics.
Let us now define the partition function [see Kleinert, p. 77] of a Hamiltonian Ĥ as Z(β) = Tr e^{−βĤ}, where β is a positive constant and the trace is over the Hilbert space associated with Ĥ. The partition function can be written in terms of the energy eigenstates,

(2.41) Ĥ|E_n⟩ = E_n|E_n⟩, with ⟨E_m|E_n⟩ = δ_{mn}.

In this case

(2.42) Z(β) = ∑_n ⟨E_n| e^{−βĤ} |E_n⟩ = ∑_n e^{−βE_n},

or, in terms of the eigenvectors |x⟩ of the position operator x̂,

(2.43) Z(β) = ∫ dx ⟨x| e^{−βĤ} |x⟩.
Initially we had an arbitrary β, but if we set β = iT we find that

(2.44) ⟨x_f| e^{−iĤT} |x_i⟩ = ⟨x_f| e^{−βĤ} |x_i⟩,

and from this we have the path integral expression of the partition function,

(2.45) Z(β) = ∫ dy ∫_{x(0)=x(β)=y} Dx exp[ −∫_0^β dτ ( (1/2)m ẋ² + V(x) ) ] = ∫_{periodic} Dx exp[ −∫_0^β dτ ( (1/2)m ẋ² + V(x) ) ],

where the periodic integral indicates that the integral is over all paths which are periodic in the interval [0, β]. When we apply this to the harmonic oscillator, the partition function is simply

(2.46) Tr e^{−βĤ} = ∑_{n=0}^∞ e^{−β(n+1/2)ω}.
Although there are many ways of evaluating the partition function, here we choose one which illustrates the use of zeta regularization. Proceed as follows. Set imaginary time τ = it to obtain the path integral

(2.47) ∫_{y(0)=y(T)=0} Dy exp[ (i/2) ∫ dt y (−d²/dt² − ω²) y ] → ∫_{y(0)=y(β)=0} Dy exp[ −(1/2) ∫ dτ y (−d²/dτ² + ω²) y ];

in this case Dy indicates the path integration measure with imaginary time.
Suppose we have an n × n Hermitian matrix M with positive-definite eigenvalues λ_k, 1 ≤ k ≤ n. Then, as we show in the appendix,

(2.48) ∏_{k=1}^n ∫_{−∞}^∞ dx_k exp[ −(1/2) ∑_{p,q} x_p M_{pq} x_q ] = ∏_{k=1}^n √(2π/λ_k) = (2π)^{n/2}/√(det M).

This is a matrix generalization of the scalar Gaussian integral

∫_{−∞}^∞ dx exp[ −(1/2) λx² ] = √(2π/λ), λ > 0.

The next task is to define the determinant of an operator O with eigenvalues λ_k by the infinite product of those eigenvalues. This is accomplished by setting Det O = ∏_k λ_k. Note that Det with a capital D denotes the determinant of an operator, whereas with a small d it denotes the determinant of a matrix; the same applies to Tr and tr.
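Formula (2.48) can be spot-checked by brute-force quadrature for n = 2; a sketch with an arbitrarily chosen positive-definite matrix M (the quadrature box and step are our choices):

```python
import math

M = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 1 and 3, det M = 3
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# midpoint rule for the double integral of exp(-(1/2) x.M.x)
L, h = 7.0, 0.02
steps = int(2 * L / h)
total = 0.0
for i in range(steps):
    x1 = -L + (i + 0.5) * h
    for j in range(steps):
        x2 = -L + (j + 0.5) * h
        q = M[0][0] * x1 * x1 + 2.0 * M[0][1] * x1 * x2 + M[1][1] * x2 * x2
        total += math.exp(-0.5 * q)
total *= h * h

print(total, 2 * math.pi / math.sqrt(det_M))  # both close to (2*pi)^{2/2}/sqrt(det M) = 3.6276
```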
Using this we can write the integral over imaginary time as

(2.49) ∫_{y(0)=y(β)=0} Dy exp[ −(1/2) ∫ dτ y (−d²/dτ² + ω²) y ] = [ Det_D(−d²/dτ² + ω²) ]^{−1/2},

where the subscript D denotes the Dirichlet boundary condition y(0) = y(β) = 0. Similarly to what we did with (2.18), the general solution satisfying these boundary conditions is expanded as

(2.50) y(τ) = (1/√β) ∑_{n=1}^∞ y_n sin(nπτ/β).
The coefficients y_n are restricted to be real since y is a real function. We are now in a position to write formal expressions. Knowing that the eigenvalue of the eigenfunction sin(nπτ/β) is λ_n = (nπ/β)² + ω², we may write the determinant of the operator as

(2.51) Det_D(−d²/dτ² + ω²) = ∏_{n=1}^∞ λ_n = ∏_{n=1}^∞ [ (nπ/β)² + ω² ] = ∏_{n=1}^∞ (nπ/β)² · ∏_{m=1}^∞ [ 1 + (βω/mπ)² ].

It is now time to identify the first infinite product with the functional determinant, i.e.

(2.52) Det_D(−d²/dτ²) ↔ ∏_{n=1}^∞ (nπ/β)².
Here is where the zeta function comes into play. Suppose O is an operator with positive-definite eigenvalues λ_n. Following Grosche and Steiner, in this case we take the log:

(2.53) log Det O = log ∏_n λ_n = Tr log O = ∑_n log λ_n.

Now we define the spectral zeta function as

(2.54) ζ_O(s) := ∑_n λ_n^{−s}.

The sum converges for sufficiently large Re(s), and ζ_O(s) is analytic in s in this region. Additionally, it can be analytically continued to the whole s-plane except at a possible finite number of points. The derivative of the spectral zeta function is linked to the functional determinant by

(2.55) (d/ds) ζ_O(s) |_{s=0} = −∑_n log λ_n,

and therefore the expression for Det O is

(2.56) Det O = exp[ −(d/ds) ζ_O(s) |_{s=0} ].
The operator we are interested in is O = −d²/dτ², for which

(2.57) ζ_{−d²/dτ²}(s) = ∑_{n=1}^∞ (nπ/β)^{−2s} = (β/π)^{2s} ζ(2s).

As we proved in Chapter 1, the zeta function is analytic over the whole complex s-plane except for the simple pole at s = 1. The values

ζ(0) = −1/2, ζ'(0) = −(1/2) log(2π)

were also calculated there, and we can use them now to obtain

(2.58) ζ'_{−d²/dτ²}(0) = 2 log(β/π) ζ(0) + 2ζ'(0) = −log(2β).
Putting this into the expression for the determinant with Dirichlet conditions, we have

(2.59) Det_D(−d²/dτ²) = e^{−ζ'_{−d²/dτ²}(0)} = e^{log(2β)} = 2β,

and finally

(2.60) Det_D(−d²/dτ² + ω²) = 2β ∏_{p=1}^∞ [ 1 + (βω/pπ)² ].
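Since ∏_{p}(1 + (x/pπ)²) = sinh x/x, formula (2.60) resums to Det_D(−d²/dτ² + ω²) = 2 sinh(βω)/ω. A numerical confirmation of the truncated product (β, ω and N are arbitrary choices, and the function name is ours):

```python
import math

def det_dirichlet(beta, omega, N=100000):
    # 2*beta * prod_{p=1}^{N} (1 + (beta*omega/(p*pi))**2), cf. (2.60)
    prod = 2.0 * beta
    for p in range(1, N + 1):
        prod *= 1.0 + (beta * omega / (p * math.pi)) ** 2
    return prod

beta, omega = 0.7, 1.3
print(det_dirichlet(beta, omega), 2.0 * math.sinh(beta * omega) / omega)  # agree
```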
Let us go back to the partition function:

(2.61) Tr e^{−βĤ} = ∑_{n=0}^∞ e^{−β(n+1/2)ω} = [ 2β ∏_{p=1}^∞ ( 1 + (βω/pπ)² ) ]^{−1/2} [ π/(ω tanh(βω/2)) ]^{1/2} = [ 2β sinh(βω)/(βω) ]^{−1/2} [ π/(ω tanh(βω/2)) ]^{1/2} = 1/(2 sinh(βω/2)),

where the first factor is the Dirichlet determinant (2.60), the second comes from the remaining integral over the endpoint y of the periodic paths, and a β-independent normalization factor of √π has been absorbed into the measure.
It is important to note that there is a more direct way of computing this partition function, which is more satisfying for solvable cases such as the harmonic oscillator but which fails for more complicated Lagrangians. This is done by computing (2.43) directly:

Z(β) = Tr e^{−βĤ} = ∫ dx ⟨x| e^{−βĤ} |x⟩.

Recall that

⟨x_f, t_f | x_i, t_i⟩ = ( ω/(2πi sin ωT) )^{1/2} exp[ iω/(2 sin ωT) { (x_f² + x_i²) cos ωT − 2 x_i x_f } ],

so that, setting T = −iβ (hence sin ωT = −i sinh βω),

Z(β) = ( ω/(2πi(−i sinh βω)) )^{1/2} ∫ dx exp[ (iω/(−2i sinh βω)) (2x² cosh βω − 2x²) ]
= ( ω/(2π sinh βω) )^{1/2} ( π/(ω tanh(βω/2)) )^{1/2} = 1/(2 sinh(βω/2)).
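Both routes give Z(β) = 1/(2 sinh(βω/2)), which is nothing but the summed geometric series (2.46); a one-line numerical check (β and ω are arbitrary choices):

```python
import math

beta, omega = 1.1, 0.9
series = sum(math.exp(-beta * (n + 0.5) * omega) for n in range(2000))
closed = 1.0 / (2.0 * math.sinh(beta * omega / 2.0))
print(series, closed)  # the truncated sum matches the closed form
```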
The quantisation of bosonic particles is done using commutation relations; the quantisation of fermionic particles, however, requires a different approach, namely anti-commutation relations. This in turn requires anti-commuting numbers, which are called Grassmann numbers. Recall the bosonic harmonic oscillator, described by the Hamiltonian

(2.62) H = (1/2)(a†a + aa†)ω,

where a and a† satisfy the commutation relations

(2.63) [a, a†] = 1, [a, a] = [a†, a†] = 0.

The Hamiltonian has eigenvalues (n + 1/2)ω, where n is a non-negative integer, with eigenvectors |n⟩:

(2.64) H|n⟩ = (n + 1/2)ω |n⟩.
From now on we drop the hat notation on operators whenever there is no risk of confusion with the eigenvalue.
The prescription for the fermionic Hamiltonian is to set

(2.65) H = (1/2)(c†c − cc†)ω.

This may be thought of as a Fourier component of the Dirac Hamiltonian, which describes relativistic fermions. However, it is evident that if c and c† were to satisfy commutation relations then the Hamiltonian would be a constant, and therefore it is more appropriate to impose anti-commutation relations

{c, c†} = 1, {c, c} = {c†, c†} = 0.

In this case the Hamiltonian becomes

(2.66) H = (1/2)[ c†c − (1 − c†c) ]ω = (N − 1/2)ω,

where N = c†c. The eigenvalues of N are either 0 or 1, since

N² = c†c c†c = c†(1 − c†c)c = c†c = N ⇔ N(N − 1) = 0.

Next we need a description of the Hilbert space of the Hamiltonian. To this end, let |n⟩ be an eigenvector of N with eigenvalue n (necessarily n = 0 or 1). Then the following equations hold:
(2.67) H|0⟩ = −(ω/2)|0⟩, H|1⟩ = (ω/2)|1⟩, c†|0⟩ = |1⟩, c†|1⟩ = 0, c|0⟩ = 0, c|1⟩ = |0⟩.
For the sake of convenience we introduce the spin notation

(2.68) |0⟩ = (0, 1)ᵀ, |1⟩ = (1, 0)ᵀ.

When the basis vectors of the space have this form, the operators take the matrix representations (rows separated by semicolons)

(2.69) c = ( 0 0 ; 1 0 ), c† = ( 0 1 ; 0 0 ), N = ( 1 0 ; 0 0 ), H = (ω/2)( 1 0 ; 0 −1 ).
Instead of the bosonic commutation relation [x, p] = i we now have anti-commuting variables: the anti-commutation relation {c, c†} = 1 is mirrored by {θ, θ*} = 0, where θ and θ* are anti-commuting numbers, i.e. Grassmann numbers, which we develop in further detail in the appendix. The Hamiltonian of the fermionic harmonic oscillator is H = (c†c − 1/2)ω, with eigenvalues ±ω/2. From our previous discussion of the partition function and Grassmann numbers we know that

(2.105) Z(β) = Tr e^{−βH} = ∑_{n=0}^{1} ⟨n| e^{−βH} |n⟩ = e^{βω/2} + e^{−βω/2} = 2 cosh(βω/2).
As with the bosonic case, we can evaluate Z(β) in two different ways using the path integral formalism. Let us start with some preliminary propositions. Let H be the Hamiltonian of a fermionic harmonic oscillator; its partition function can be written as

(2.106) Tr e^{−βH} = ∫ dθ* dθ ⟨−θ| e^{−βH} |θ⟩ e^{−θ*θ}.

We show this by inserting the completeness relation (2.104) into the partition function (2.105):

(2.107) Z(β) = ∑_{n=0,1} ⟨n| e^{−βH} |n⟩ = ∑_n ∫ dθ* dθ e^{−θ*θ} ⟨n|θ⟩ ⟨θ| e^{−βH} |n⟩
= ∑_n ∫ dθ* dθ (1 − θ*θ) ( ⟨n|0⟩ + ⟨n|1⟩θ ) ( ⟨0| e^{−βH} |n⟩ + θ* ⟨1| e^{−βH} |n⟩ )
= ∑_n ∫ dθ* dθ (1 − θ*θ) [ ⟨0| e^{−βH} |n⟩⟨n|0⟩ − θ*θ ⟨1| e^{−βH} |n⟩⟨n|1⟩ + θ ⟨0| e^{−βH} |n⟩⟨n|1⟩ + θ* ⟨1| e^{−βH} |n⟩⟨n|0⟩ ].

Note now that the last term of the integrand does not contribute to the integral, and therefore we may replace θ* by −θ*, which implies that

(2.108) Z(β) = ∑_n ∫ dθ* dθ (1 − θ*θ) [ ⟨0| e^{−βH} |n⟩⟨n|0⟩ − θ*θ ⟨1| e^{−βH} |n⟩⟨n|1⟩ + θ ⟨0| e^{−βH} |n⟩⟨n|1⟩ − θ* ⟨1| e^{−βH} |n⟩⟨n|0⟩ ]
= ∫ dθ* dθ e^{−θ*θ} ⟨−θ| e^{−βH} |θ⟩.

Unlike the bosonic case, we have to impose an anti-periodic boundary condition over [0, β] in the trace, since the Grassmann variable is θ at τ = 0 and −θ at τ = β. By invoking the expression

(2.109) e^{−βH} = lim_{N→∞} (1 − βH/N)^N
and inserting the completeness relation (2.104) at each step, one has the following expression for the partition function:

(2.110) Z(β) = lim_{N→∞} ∫ dθ* dθ e^{−θ*θ} ⟨−θ| (1 − βH/N)^N |θ⟩
= lim_{N→∞} ∫ ∏_{k=1}^{N−1} dθ_k* dθ_k exp[ −∑_{n=1}^{N−1} θ_n*θ_n ] ⟨−θ| (1 − εH) |θ_{N−1}⟩ ⟨θ_{N−1}| ⋯ |θ₁⟩ ⟨θ₁| (1 − εH) |θ⟩
= lim_{N→∞} ∫ ∏_{k=1}^{N−1} dθ_k* dθ_k exp[ −∑_{n=1}^{N−1} θ_n*θ_n ] ⟨θ_N| (1 − εH) |θ_{N−1}⟩ ⟨θ_{N−1}| ⋯ |θ₁⟩ ⟨θ₁| (1 − εH) |−θ_N⟩,

where we use the conventions that have been in force all along:

(2.111) ε = β/N, θ = −θ_N = θ₀, θ* = −θ_N* = θ₀*.
Each matrix element is evaluated (up to first order in ε) as

(2.112) ⟨θ_k| (1 − εH) |θ_{k−1}⟩ = ⟨θ_k|θ_{k−1}⟩ ( 1 − ε ⟨θ_k|H|θ_{k−1}⟩/⟨θ_k|θ_{k−1}⟩ )
≃ ⟨θ_k|θ_{k−1}⟩ e^{−ε⟨θ_k|H|θ_{k−1}⟩/⟨θ_k|θ_{k−1}⟩} = e^{θ_k*θ_{k−1}} e^{−εω(θ_k*θ_{k−1} − 1/2)} = e^{εω/2} e^{(1−εω)θ_k*θ_{k−1}}.
The partition function is now expressed in terms of the path integral as

(2.113) Z(β) = lim_{N→∞} e^{βω/2} ∫ ∏_{k=1}^{N−1} dθ_k* dθ_k exp[ −∑_{n=1}^{N−1} θ_n*θ_n ] exp[ ∑_{n=1}^{N} (1 − εω) θ_n*θ_{n−1} ]
= e^{βω/2} lim_{N→∞} ∫ ∏_{k=1}^{N−1} dθ_k* dθ_k exp[ −∑_{n=1}^{N−1} { θ_n*(θ_n − θ_{n−1}) + εω θ_n*θ_{n−1} } ]
= e^{βω/2} lim_{N→∞} ∏_{k=1}^{N−1} ∫ dθ_k* dθ_k exp( −θ†·B_N·θ ),

where we have the following vector and matrix elements:

(2.114) θ = (θ₁, θ₂, ⋯, θ_N)ᵀ, θ† = (θ₁*, θ₂*, ⋯, θ_N*),

B_N =
[ 1  0  ⋯  0  −y ]
[ y  1  0  ⋯   0 ]
[ 0  y  1  ⋯   0 ]
[ ⋮         ⋱  ⋮ ]
[ 0  0  ⋯  y   1 ],

with y = −1 + εω. The computation is completed by recalling the definition of the Grassmann Gaussian integral, which gives

(2.115) Z(β) = e^{βω/2} lim_{N→∞} det B_N = e^{βω/2} lim_{N→∞} [ 1 + (1 − βω/N)^N ] = e^{βω/2} (1 + e^{−βω}) = 2 cosh(βω/2).
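The determinant det B_N = 1 + (1 − εω)^N entering (2.115) can be confirmed directly by Gaussian elimination for moderate N; a sketch (N, β, ω are arbitrary, and the helper functions are ours):

```python
import math

def build_B(N, y):
    # N x N matrix with 1 on the diagonal, y on the subdiagonal, -y in the corner
    B = [[0.0] * N for _ in range(N)]
    for i in range(N):
        B[i][i] = 1.0
        if i > 0:
            B[i][i - 1] = y
    B[0][N - 1] = -y
    return B

def det(A):
    # forward elimination without pivoting (safe here: the pivots stay near 1)
    A = [row[:] for row in A]
    n, d = len(A), 1.0
    for i in range(n):
        d *= A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
    return d

N, beta, omega = 100, 1.0, 1.0
eps = beta / N
numeric = det(build_B(N, -1.0 + eps * omega))
print(numeric, 1.0 + (1.0 - eps * omega) ** N)  # det B_N = 1 + (1 - eps*omega)^N
print(math.exp(beta * omega / 2) * numeric, 2.0 * math.cosh(beta * omega / 2))  # close; equal as N grows
```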
As with the bosonic partition function, we can arrive at the same result using a generalization of the zeta function, and this will prove useful later. Recall that we showed that

Z(β) = e^{βω/2} lim_{N→∞} ∏_{k=1}^{N−1} ∫ dθ_k* dθ_k exp( −θ†·B_N·θ ) = e^{βω/2} ∫ Dθ* Dθ exp[ −∫_0^β dτ θ* ( (1 − εω) d/dτ + ω ) θ ]
= e^{βω/2} Det_{θ(β)=−θ(0)} [ (1 − εω) d/dτ + ω ].

The subscript θ(β) = −θ(0) indicates that the eigenvalues are to be evaluated on solutions of the anti-periodic boundary condition θ(β) = −θ(0). First, we expand the orbit θ(τ) in Fourier modes. The eigenmodes and the corresponding eigenvalues are

(2.116) exp[ πi(2n + 1)τ/β ], (1 − εω) πi(2n + 1)/β + ω,
where n runs over n = 0, ±1, ±2, ⋯. The number of degrees of freedom is N (= β/ε), so the coherent states are (over)complete. Since one complex variable has two real degrees of freedom, we need to truncate the product at −N/4 ≤ k ≤ N/4. Following this prescription, one has

(2.117) Z(β) = e^{βω/2} lim_{N→∞} ∏_{k=−N/4}^{N/4} [ (1 − εω) 2πi(k − 1/2)/β + ω ] = e^{βω/2} e^{−βω/2} ∏_{n=1}^∞ [ (π(2n − 1)/β)² + ω² ]
= ∏_{n=1}^∞ ( π(2n − 1)/β )² · ∏_{n=1}^∞ [ 1 + ( βω/(π(2n − 1)) )² ].
The trouble comes from the first infinite product, P, which is divergent and as such requires regularization. This can be accomplished as follows. Write

(2.118) log P = ∑_{k=1}^∞ 2 log( 2π(k − 1/2)/β ),

and define the corresponding zeta function by

(2.119) ζ̃(s) = ∑_{k=1}^∞ ( 2π(k − 1/2)/β )^{−s} = ( β/(2π) )^s ζ(s, 1/2),

where (see Chapter 1)

(2.120) ζ(s, a) = ∑_{k=0}^∞ (k + a)^{−s}, 0 < a ≤ 1,

is the Hurwitz zeta function. This gives

(2.121) P = exp( −2 ζ̃'(0) ).
So now we are left with the task of differentiating the ζ̃ function at s = 0, which is done as follows:

(2.122) ζ̃'(0) = log( β/(2π) ) ζ(0, 1/2) + ζ'(0, 1/2) = −(1/2) log 2,

since we showed in Chapter 1 that

(2.123) ζ(0, 1/2) = 0, ζ'(0, 1/2) = −(1/2) log 2.

Putting this together, we obtain the surprising result

(2.124) P = exp( −2 ζ̃'(0) ) = e^{log 2} = 2.

This result indicates that P is independent of β once the regularization is performed. Finally, the partition function is evaluated using all these facts:

(2.125) Z(β) = 2 ∏_{n=1}^∞ [ 1 + ( βω/(π(2n − 1)) )² ] = 2 cosh(βω/2),

by virtue of the formula

(2.126) ∏_{n=1}^∞ [ 1 + ( 2x/(π(2n − 1)) )² ] = cosh x.
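Formula (2.126), and hence the regularized result (2.125), is easy to confirm numerically (β, ω and the truncation N are arbitrary choices):

```python
import math

def cosh_product(x, N=100000):
    # prod_{n=1}^{N} (1 + (2x/(pi*(2n-1)))**2), cf. (2.126)
    prod = 1.0
    for n in range(1, N + 1):
        prod *= 1.0 + (2.0 * x / (math.pi * (2 * n - 1))) ** 2
    return prod

beta, omega = 0.8, 1.2
Z = 2.0 * cosh_product(beta * omega / 2.0)  # (2.125) with the regularized P = 2
print(Z, 2.0 * math.cosh(beta * omega / 2.0))  # both close to 2.2349
```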
This is in agreement with the partition function we computed earlier in (2.105) and (2.115).
REFERENCES
The discussion of the role of the zeta function in quantum mechanics, and of zeta function regularization as explained in the above chapter, can be traced back to the Handbook of Feynman Path Integrals by C. Grosche and F. Steiner [pages 37 to 44, and 55 to 59 for the fermionic case], Peskin and Schroeder [pages 299 to 301] and the Advanced Quantum Field Theory course by Dr Rajantie. It is complemented by the treatise on path integrals in quantum mechanics by Hagen Kleinert [pages 81 to 83 and 161 to 163].
Path Integrals in Field Theory
From the quantum mechanical case we can build a generalization with several degrees of freedom: a field theory. We will deal exclusively with φ⁴ theory, where φ(x) is a real scalar field. Let us summarize standard results from field theory, following Peskin and Schroeder as well as Rajantie. The action is built from the Lagrangian:

(3.1) S = ∫ dx L( φ(x), ∂_μ φ(x) ),
where it is understood that L is the Lagrangian density. The equations of motion (EOM) are given by the Euler-Lagrange equation

(3.2) ∂/∂x^μ [ ∂L/∂(∂_μφ(x)) ] = ∂L/∂φ(x).

From the free scalar field Lagrangian

(3.3) L₀(φ(x), ∂_μφ(x)) = −(1/2)( ∂_μφ ∂^μφ + m²φ² )

we can derive the Klein-Gordon equation

(3.4) (∂_μ∂^μ − m²)φ = 0.
When a source J is present, the vacuum amplitude has the functional representation

(3.5) ⟨0, ∞ | 0, −∞⟩_J = ∫ Dφ exp[ i ∫ dx ( L₀ + Jφ + (i/2)εφ² ) ],

with the artificial iε added to make sure the integral converges. Integrating by parts we obtain

(3.6) ∫ Dφ exp[ i ∫ dx ( L₀ + Jφ + (i/2)εφ² ) ] = ∫ Dφ exp[ i ∫ dx ( (1/2) φ(∂_μ∂^μ − m² + iε)φ + Jφ ) ].

In this case, the Klein-Gordon equation becomes the slightly more general equation

(3.7) (∂_μ∂^μ − m² + iε)φ_c = −J.
Working in d dimensions and defining the Feynman propagator as

(3.8) Δ(x − y) = −(1/(2π)^d) ∫ d^d k e^{ik(x−y)}/(k² + m² − iε),

the solution to the generalized Klein-Gordon equation becomes

(3.9) φ_c(x) = −∫ dy Δ(x − y) J(y).

The Feynman propagator obeys

(3.10) (∂_μ∂^μ − m² + iε) Δ(x − y) = δ^d(x − y).
Hence the vacuum amplitude can be written in terms of the source J as

(3.11) ⟨0, ∞ | 0, −∞⟩_J = exp[ −(i/2) ∫ dx dy J(x) Δ(x − y) J(y) ],

or, setting ⟨0, ∞ | 0, −∞⟩_J =: Z₀[J],

(3.12) Z₀[J] = Z₀[0] exp[ −(i/2) ∫ dx dy J(x) Δ(x − y) J(y) ].

The (Feynman) propagator is also obtained by functional differentiation of Z₀[J]:

(3.13) Δ(x − y) = (i/Z₀[0]) δ²Z₀[J]/δJ(x)δJ(y) |_{J=0}.

In order to evaluate Z₀[0] (the vacuum-to-vacuum amplitude when there is no source) we need to introduce imaginary time x⁴ = t = ix⁰ and the operator □ = ∂_t² + ∇², so that

(3.14) Z₀[0] = ∫ Dφ exp[ (1/2) ∫ dx φ(□ − m²)φ ] = [ Det(□ − m²) ]^{−1/2},
with capital d, the determinant is the product of eigenvalues with corresponding boundary condition. With term sources, the Lagrangian of the free complex scalar field takes the form (3.15)
L₀ = −∂_µφ* ∂^µφ − m²|φ|² + Jφ* + J*φ,    (3.15)
and consequently the generating functional becomes
Z₀[J, J*] = ∫ DφDφ* exp[ i ∫ dx ( L₀ − iε|φ|² ) ]
= ∫ DφDφ* exp[ i ∫ dx ( φ*(□ − m² − iε)φ + Jφ* + J*φ ) ].    (3.16)
Differentiating, we obtain the propagator
∆(x − y) = (i/Z₀[0,0]) δ²Z₀[J, J*]/(δJ*(x)δJ(y)) |_{J=J*=0}.    (3.17)
We may split the functional by virtue of the KG equations (□ − m²)φ = −J and (□ − m²)φ* = −J*:
Z₀[J, J*] = Z₀[0,0] exp[ −i ∫ dx dy J*(x)∆(x − y)J(y) ],    (3.18)
and by using another Wick rotation we have
Z₀[0,0] = ∫ DφDφ* exp[ −i ∫ dx φ*(□ − m² − iε)φ ] = Det(□ − m²)^{−1}.    (3.19)
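The different determinant powers in (3.14) and (3.19) already appear in ordinary finite-dimensional Gaussian integrals. A minimal numerical sketch (the matrix and grid are illustrative assumptions):

```python
import numpy as np

# Finite-dimensional analogue of (3.14) vs (3.19): a real Gaussian integral
# over n variables gives (2*pi)^(n/2) * det(M)^(-1/2); a complex field doubles
# the real degrees of freedom, squaring this and producing det(M)^(-1),
# up to constants that get absorbed into the path integral measure.
M = np.array([[2.0, 0.5], [0.5, 1.0]])      # positive-definite toy "operator"
u = np.linspace(-8.0, 8.0, 801)
h = u[1] - u[0]
X, Y = np.meshgrid(u, u, indexing="ij")
action = 0.5 * (M[0, 0]*X**2 + 2*M[0, 1]*X*Y + M[1, 1]*Y**2)
Z_real = np.exp(-action).sum() * h**2       # brute-force 2-D integral
assert abs(Z_real - 2*np.pi/np.sqrt(np.linalg.det(M))) < 1e-6
```

The rapid decay of the integrand makes the simple Riemann sum accurate far beyond the stated tolerance.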
The presence of a potential in the Lagrangian
L(φ, ∂_µφ) = L₀(φ, ∂_µφ) − V(φ)    (3.20)
comes at a double price: the form of the potential is limited by the symmetry and renormalizability of the theory, and the theory needs to be handled perturbatively. The potential is usually of the form V(φ) = (α/n!)φⁿ, where α is a real number that sets the strength of the interaction. As with the free theory, the generating functional is
Z[J] = ∫ Dφ exp[ i ∫ dx ( ½φ(□ − m²)φ − V(φ) + Jφ ) ]
= ∫ Dφ exp[ −i ∫ dx V(φ) ] exp[ i ∫ dx ( L₀(φ, ∂_µφ) + Jφ ) ]
= exp[ −i ∫ dx V( −i δ/δJ(x) ) ] ∫ Dφ exp[ i ∫ dx ( L₀(φ, ∂_µφ) + Jφ ) ]
= Σ_{k=0}^∞ ∫ dx₁ ⋯ ∫ dx_k ((−i)^k/k!) V( −i δ/δJ(x₁) ) ⋯ V( −i δ/δJ(x_k) ) Z₀[J].    (3.21)
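In zero spacetime dimensions the functional expansion in (3.21) collapses to an ordinary integral, which makes it checkable. A toy sketch (Euclidean weight, coupling and source values are illustrative assumptions):

```python
import numpy as np
from math import factorial

# Zero-dimensional toy of (3.21): phi is one variable and
# Z(J) = \int dphi exp(-phi^2/2 - lam*phi^4/4! + J*phi).
# Each functional derivative d/dJ brings down one power of phi under the
# Gaussian, so the k-th term of the expansion is (-lam/4!)^k/k! <phi^{4k}>.
lam, J = 0.1, 0.3
phi = np.linspace(-10, 10, 20001)
h = phi[1] - phi[0]
Z_exact = np.sum(np.exp(-phi**2/2 - lam*phi**4/24 + J*phi)) * h

Z_pert = 0.0
for k in range(6):   # first six terms of the (asymptotic) series
    moment = np.sum(phi**(4*k) * np.exp(-phi**2/2 + J*phi)) * h
    Z_pert += (-lam/24)**k / factorial(k) * moment
assert abs(Z_pert - Z_exact) / Z_exact < 1e-3
```

The series is asymptotic rather than convergent, but at this small coupling the first few terms already agree with the exact integral to better than a part in a thousand.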
The Green function (which is the vacuum expectation value of the time-ordered product of field operators)
G_n(x₁, ⋯, x_n) := < 0 | T[φ(x₁) ⋯ φ(x_n)] | 0 > = (−i)^n δⁿZ[J]/(δJ(x₁) ⋯ δJ(x_n)) |_{J=0}    (3.22)
is generated by the generating functional Z[J]. Since this is the nth functional derivative of Z[J] around J = 0, we may plug it into the Taylor expansion of the exponential above, obtaining
Z[J] = Σ_{n=0}^∞ (1/n!) ∫ ∏_{i=1}^n dx_i J(x_i) < 0 | T[φ(x₁) ⋯ φ(x_n)] | 0 > = < 0 | T exp[ ∫ dx J(x)φ(x) ] | 0 >.    (3.23)
Connected n-point functions are generated by
Z[J] = exp(−W[J]),    (3.24)
and the effective action is defined by a Legendre transformation as follows:
Γ[φ_cl] := W[J] − ∫ dx Jφ_cl,    (3.25)
where
φ_cl := < φ >_J = δW[J]/δJ.    (3.26)
We will also see that Γ[φ_cl] generates one-particle-irreducible diagrams (see Ramond, and Bailin and Love). It is convenient now to rederive the above discussion in a formal manner and with a closer analogy to statistical mechanics. The generating functional of correlation functions for a field theory with Lagrangian L is given by
Z[J] = ∫ Dφ exp[ i ∫ d⁴x ( L + Jφ ) ],    (3.27)
where the time variable is contained between −T and T, with T → ∞(1 − iε). Furthermore we have the following:
< 0 | Tφ(x₁)φ(x₂) | 0 > = lim_{T→∞(1−iε)} Z[J]^{−1} ∫ Dφ φ(x₁)φ(x₂) exp[ i ∫ d⁴x ( L + Jφ ) ]
= Z[J]^{−1} ( −i δ/δJ(x₁) ) ( −i δ/δJ(x₂) ) Z[J] |_{J=0}.    (3.28)
Let us do some manipulations on the time variable. When we derived the path integral formulation of quantum mechanics (see AQFT), it was shown that the time integration was tilted into the complex plane in the direction that would allow the contour of integration to be rotated clockwise onto the imaginary axis. We assume that the original infinitesimal rotation gives the correct imaginary infinitesimal to produce the Feynman propagator. Now, the Wick rotation of the time coordinate t → −ix⁰ yields a Euclidean 4-vector product
x² = t² − x⃗² → −(x⁰)² − x⃗² = −x_E²,
and similarly we assume that the analytic continuation of the time variables in any Green's function of a quantum field theory produces a correlation function invariant under the rotational symmetry of four-dimensional Euclidean space. Let us now apply this to φ⁴ theory. As we know, the action in this case is
S = ∫ d⁴x ( L + Jφ ) = ∫ d⁴x ( ½(∂_µφ)² − ½m²φ² − (λ/4!)φ⁴ + Jφ ),    (3.29)
and performing the Wick rotation gives
i ∫ d⁴x ( L + Jφ ) = −∫ d⁴x_E ( L_E − Jφ ),   L_E = ½(∂_{Eµ}φ)² + ½m²φ² + (λ/4!)φ⁴,
which in turn gives the Wick-rotated generating functional
Z[J] = ∫ Dφ exp[ −∫ d⁴x_E ( L_E − Jφ ) ].    (3.30)
The functional L_E[φ] is bounded from below, and it becomes large when the field φ has large amplitude or large gradients. These two facts imply that L_E[φ] has the form of an energy and is consequently a possible candidate for a statistical weight for the fluctuations of φ. In this light, the Wick-rotated functional Z[J] is the partition function describing the statistical mechanics of a macroscopic system, approximating the fluctuating variable as a field. Finally, let us push this analogy between field theory and statistical mechanics further by presenting the Green's function of φ(x_E):
< φ(x_{E1})φ(x_{E2}) > = ∫ (d⁴k_E/(2π)⁴) e^{ik_E·(x_{E1}−x_{E2})} / (k_E² + m²),    (3.31)
which is in fact the Feynman propagator evaluated in the spacelike region, and it falls off as exp(−m|x_{E1} − x_{E2}|).
This correspondence between quantum field theory and statistical mechanics will play an important part in understanding ultraviolet divergences. Recalling the generating functional of correlation functions, we define an energy functional E[J] by
Z[J] = exp(−iE[J]) = ∫ Dφ exp[ i ∫ d⁴x ( L + Jφ ) ] = < Ω | e^{−iHT} | Ω >,    (3.32)
with the constraints on time explained above. The functional E[J] is, as we have said before, the vacuum energy as a function of the external source J. Let us perform the functional derivative of E[J] with respect to J(x):
δE[J]/δJ(x) = i δ(log Z)/δJ(x) = −( ∫ Dφ exp[ i ∫ d⁴x ( L + Jφ ) ] )^{−1} ∫ Dφ φ(x) exp[ i ∫ d⁴x ( L + Jφ ) ],
and set
δE[J]/δJ(x) = −< Ω | φ(x) | Ω >_J,    (3.33)
the vacuum expectation value in the presence of a source J. Next, we define:
- the classical field as
φ_cl(x) = < Ω | φ(x) | Ω >_J,    (3.34)
a weighted average over all possible fluctuations, dependent on the source J;
- the effective action as the Legendre transformation of E[J], i.e. as in (3.25),
Γ[φ_cl] := −E[J] − ∫ d⁴x' J(x')φ_cl(x').    (3.35)
By virtue of (3.33) we have the following:
δΓ[φ_cl]/δφ_cl(x) = −δE[J]/δφ_cl(x) − ∫ d⁴x' (δJ(x')/δφ_cl(x)) φ_cl(x') − J(x)
= −∫ d⁴x' (δJ(x')/δφ_cl(x)) (δE[J]/δJ(x')) − ∫ d⁴x' (δJ(x')/δφ_cl(x)) φ_cl(x') − J(x) = −J(x),
which means that when the source is set to zero, the equation
δΓ[φ_cl]/δφ_cl(x) = 0    (3.36)
is satisfied by the effective action. The solutions of this equation are the values of < φ(x) > in the stable states of the theory. It will be assumed that the possible vacuum states are invariant under translations and Lorentz transformations. This implies a substantial simplification of (3.36), as for each possible vacuum state the corresponding solution φ_cl will be independent of x, and hence one is just solving an ordinary equation in one variable. Thermodynamically, Γ is proportional to the volume of the spacetime region over which the functional integral is taken, and therefore it can be a large quantity. Consequently, in terms of the volume V and associated time extent T of the region we may write
Γ[φ_cl] = −(VT) V_eff(φ_cl),    (3.37)
where V_eff is the effective potential. In order that Γ[φ_cl] have an extremum, the following must hold:
δV_eff(φ_cl)/δφ_cl = 0.    (3.38)
Each solution of (3.38) is a translation-invariant state with no source, J = 0. Therefore the effective action (3.35) is −E in this case (Γ = −E), and consequently V_eff(φ_cl) evaluated at a solution of (3.38) is the energy density of the corresponding state.
The effective potential defined by (3.35) and (3.37) yields a function whose minimization defines the exact vacuum state of the field theory, including all effects of quantum corrections. The evaluation of V_eff(φ_cl) will follow from the path integral formulation. In order to accomplish this we will follow Peskin and Schroeder's method, which in turn follows R. Jackiw and dates back to 1974. The idea is to compute the effective action Γ directly from its path integral definition and then obtain V_eff by focusing on constant values of φ_cl. Because we are using renormalized perturbation theory, the Lagrangian
L = ½(∂_µφ)² − ½m₀²φ² − (λ₀/4!)φ⁴
ought to be split as
L = L₁ + δL,
which is analogous to the split of the Lagrangian in renormalized φ⁴ theory done in AQFT (see Rajantie, and Bailin and Love 7.4), i.e. rescaling the field φ = Z^{1/2}φ_r, where Z is the residue in the LSZ reduction formula
∫ d⁴x < Ω | Tφ(x)φ(0) | Ω > e^{ip·x} = iZ/(p² − m²) + (terms regular at p² = m²),
with m the physical mass. This rescaling changes the Lagrangian into
L = ½(∂_µφ_r)² − ½m²φ_r² − (λ/4!)φ_r⁴ + ½δ_Z(∂_µφ_r)² − ½δ_m φ_r² − (δ_λ/4!)φ_r⁴.
The last three terms are known as the counterterms, and they take into account the infinite and unobservable shifts between the bare parameters and the physical parameters. At the lowest order in perturbation theory, the relationship between the source and the classical field is
δL/δφ |_{φ=φ_cl} + J(x) = 0.    (3.39)
Because the functional Z[J] depends on φ_cl through its dependence on J, and our goal is to compute Γ as a function of φ_cl, this will be the starting point.
Next, we define J₁ to be the source that exactly satisfies the classical field equation above to higher orders, i.e. when L = L₁:
δL₁/δφ |_{φ=φ_cl} + J₁(x) = 0,    (3.40)
and the difference between the two sources J and J₁ will be written as (see Peskin and Schroeder 11.4)
J(x) = J₁(x) + δJ(x),    (3.41)
where δJ has to be determined, order by order in perturbation theory, by use of (3.34), that is by using the equation
< φ(x) >_J = φ_cl(x).    (3.42)
We may now write (3.32) as
e^{−iE[J]} = ∫ Dφ exp[ i ∫ d⁴x ( L₁[φ] + J₁φ ) ] exp[ i ∫ d⁴x ( δL[φ] + δJ φ ) ],
where all the counterterms are in the second exponential. Let us concentrate on the first exponential. Expanding about φ(x) = φ_cl(x) + η(x) yields
∫ d⁴x ( L₁[φ] + J₁φ ) = ∫ d⁴x ( L₁[φ_cl] + J₁φ_cl ) + ∫ d⁴x η(x) ( δL₁/δφ + J₁ ) + (1/2!) ∫ d⁴x d⁴y η(x)η(y) δ²L₁/(δφ(x)δφ(y))
+ (1/3!) ∫ d⁴x d⁴y d⁴z η(x)η(y)η(z) δ³L₁/(δφ(x)δφ(y)δφ(z)) + ⋯,    (3.43)
and it is understood that the functional derivatives of L₁ are evaluated at φ_cl(x). The second integral on the RHS vanishes by (3.40), and therefore the integral over η is a Gaussian integral, where the perturbative corrections are given by the cubic and higher terms. Let us assume that the coefficients of (3.43) (i.e. the successive functional derivatives of L₁) give well-defined operators. If we keep only terms up to quadratic order in η and focus on the first exponential, we find a pure Gaussian integral which can be evaluated in terms of a functional determinant, as we have computed in the Appendix:
∫ Dη exp[ i ∫ d⁴x ( L₁[φ_cl] + J₁φ_cl ) + (i/2) ∫ d⁴x d⁴y η(x) (δ²L₁/(δφ(x)δφ(y))) η(y) ]
= exp[ i ∫ d⁴x ( L₁[φ_cl] + J₁φ_cl ) ] det[ −δ²L₁/(δφ(x)δφ(y)) ]^{−1/2};    (3.44)
the lowest-order quantum correction to the effective action is given by the determinant. If we now consider the second exponential, which consists of the counterterms of the Lagrangian, and expand as we have done before, we have
( δL[φ_cl] + δJ φ_cl ) + ( δL[φ_cl + η] − δL[φ_cl] + δJ η ).    (3.45)
The cubic and higher-order terms in η in (3.43) produce a Feynman diagram expansion of the functional integral, in which the propagator is the operator inverse
−i [ δ²L₁/(δφ(x)δφ(y)) ]^{−1},    (3.46)
and hence when the second term in (3.45) is expanded as a Taylor series in η, the successive terms give counterterm vertices which can be included in the above-mentioned Feynman diagrams. The first term is a constant with respect to the integral over η, and thus it gives additional terms in the exponent of (3.44).
Taking (3.44) together with the contributions from higher-order vertices and counterterms, we obtain an expression for the functional integral. The Feynman diagrams representing the higher-order terms can be arranged in such a way that they yield the exponential of the sum of the connected diagrams, obtaining the expression for E[J]:
−iE[J] = i ∫ d⁴x ( L₁[φ_cl] + J₁φ_cl ) − ½ log det[ −δ²L₁/(δφ(x)δφ(y)) ] + (connected diagrams) + i ∫ d⁴x ( δL[φ_cl] + δJ φ_cl ),    (3.47)
and finally, by virtue of (3.41) and (3.35), we have
Γ[φ_cl] = ∫ d⁴x L₁[φ_cl] + (i/2) log det[ −δ²L₁/(δφ(x)δφ(y)) ] − i (connected diagrams) + ∫ d⁴x δL[φ_cl].    (3.48)
This is indeed the expression we were seeking, since Γ is a function of φ_cl, with the J dependence taken away. The Feynman diagrams in the expression for Γ have no external lines and they all contain at least two loops. The last term of (3.48) gives the counterterms that are needed for the renormalization conditions on Γ and cancel the divergences that appear in the evaluation of the determinant and the diagrams. We shall ignore any one-particle-irreducible one-point diagram (these diagrams are cancelled by the adjustment of δJ). We now go back to the Euclidean-space definition of the generating functional:
Z_E[J] = N_E ∫ Dφ exp[ −∫ d⁴x ( ½∂_µφ ∂^µφ + ½m²φ² + V(φ) − Jφ ) ],    (3.49)
where N_E is a normalization constant.
This will be evaluated by expanding the action. In order to do this, let φ₀ be a field configuration; then with
S_Euc[φ, J] := ∫ d⁴x ( ½∂_µφ ∂^µφ + ½m²φ² + V(φ) − Jφ )    (3.50)
we expand
S_Euc[φ, J] = S_Euc[φ₀, J] + ∫ d⁴x (δS_Euc/δφ)(φ − φ₀)
+ ½ ∫ d⁴x₁ d⁴x₂ (δ²S_Euc/(δφ(x₁)δφ(x₂))) (φ(x₁) − φ₀(x₁))(φ(x₂) − φ₀(x₂)) + ⋯.
It is understood that the functional derivatives are evaluated at φ₀. We know from AQFT that the classical limit is recovered by taking S_Euc to be stationary at φ₀, which implies that φ₀ satisfies the classical EOM with the source term:
δS_Euc/δφ |_{φ₀} = −∂_µ∂^µφ₀(x) + m²φ₀(x) + V'(φ₀(x)) − J = 0.    (3.51)
Using this equation and integrating by parts gives
S_Euc[φ₀, J] = ∫ d⁴x ( V(φ₀(x)) − ½φ₀(x)V'(φ₀(x)) − ½J(x)φ₀(x) ),    (3.52)
whereas the second derivative is an operator:
δ²S_Euc/(δφ(x₁)δφ(x₂)) = δ(x₁ − x₂) [ −∂_µ∂^µ + m² + V''(φ(x₁)) ].    (3.53)
We now need to digress on how to evaluate integrals of the sort
I := ∫ dx exp(−α(x)).    (3.54)
This can be evaluated by expanding the exponential around a point x₀ where α is stationary, i.e.
α(x) = α(x₀) + ½(x − x₀)²α''(x₀) + O(x³).    (3.55)
The integral I is then approximated by
I = exp(−α(x₀)) ∫ dx exp[ −½(x − x₀)²α''(x₀) ],    (3.56)
and we can recognize this as a Gaussian integral (when the higher derivatives are ignored). The accuracy of this approximation obviously depends on α: where α is smallest the integrand is largest, and the points away from the minimum do not add a substantial contribution.
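The quality of the steepest-descent (Laplace) approximation (3.54)-(3.56) is easy to gauge on a toy "action"; the choice α(x) = cosh(x) below is an illustrative assumption:

```python
import numpy as np

# Laplace / steepest-descent check of (3.54)-(3.56) with alpha(x) = cosh(x),
# stationary at x0 = 0 where alpha(0) = alpha''(0) = 1, so the Gaussian
# approximation gives I ~ exp(-1) * sqrt(2*pi).
x = np.linspace(-6, 6, 12001)
I_exact = np.sum(np.exp(-np.cosh(x))) * (x[1] - x[0])
I_laplace = np.exp(-1.0) * np.sqrt(2 * np.pi)
assert abs(I_exact - I_laplace) / I_exact < 0.15   # leading order only
```

The ~10% discrepancy is the expected size of the neglected higher-derivative (higher-loop) corrections for an O(1) action; the approximation sharpens as the action grows steeper.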
Equipped with this method, we can apply it to the functional we have just arrived at (see Ramond 3.4), obtaining a crucially important result:
Z_E[J] = N_E exp{ −S_Euc[φ₀, J] } ∫ Dφ exp[ −½ ∫ d⁴x₁ d⁴x₂ φ(x₁) (δ²S_Euc/(δφ(x₁)δφ(x₂))) φ(x₂) ]
= N'_E exp{ −S_Euc[φ₀, J] } [ det( (−∂_µ∂^µ + m² + V''(φ₀))δ(x₁ − x₂) ) ]^{−1/2},    (3.57)
where we have ignored the higher-order terms. Let us rewrite this to make the expression easier to handle. First, the determinant, which we will call M, can be taken care of by using the identity
det M = exp(tr log M),    (3.58)
so that
Z_E[J] = N'_E exp{ −S_Euc[φ₀, J] − ½ tr log[ (−∂_µ∂^µ + m² + V''(φ₀))δ(x₁ − x₂) ] };    (3.59)
the tr log term accounts for the quantum perturbations (or corrections) to Z[J], whereas the first term accounts for the classical contribution. Also note that we set by convention the determinant of an operator to be the product of its eigenvalues, and that because φ₀ satisfies δS_Euc/δφ |_{φ₀} = 0, it is a functional of J.
The concluding remark is to find the equivalent of these results in terms of the classical field φ_cl and explore the corresponding effective action. When we work in Euclidean space, the classical field is
φ_cl(x) = δ(log Z_E)/δJ(x) ≈ −δS_Euc/δJ(x) + O(ℏ),    (3.60)
and by using (3.57) and (3.58) we can obtain φ_cl as a functional of J, at the cost of doing it order by order in λ. The term O(ℏ) stands for quantum corrections. This relationship can be inverted to find J(x) as a functional of φ_cl. The inversion can be carried out to give
J(x) = (∂² − m²)φ_cl(x) − (λ/3!)φ_cl³(x).    (3.61)
An attractive (and indispensable) feature is that there are no higher terms in λ; comparison with 0 = δS_Euc/δφ |_{φ₀} = −∂_µ∂^µφ₀(x) + m²φ₀(x) + V'(φ₀(x)) − J gives φ_cl(x) = φ₀(x) + O(ℏ).
By integrating J(x) = −δΓ[φ_cl]/δφ_cl we see that to this order the effective action is
Γ_eff[φ_cl] = −∫ d⁴x ( ½φ_cl(∂² − m²)φ_cl − (λ/4!)φ_cl⁴(x) ),    (3.62)
which is just the classical action.
REFERENCES The above summary of quantum field theory has its roots in several sources; overall it follows the presentation given by Peskin and Schroeder [pp. 275 to 294, 306 to 308 and 365 to 372], as well as Ramond and section 4.4 of Bailin and Love. At times it is complemented by the courses of QED and AQFT from the MSc QFFF. However, it is broad enough to be found in most QFT books.
Zeta Regularization in Field Theory At this point we can put together some of the concepts acquired in the second and third chapters. First we will develop further the theory of quantum corrections by evaluating the determinant of the operator M, and then see how this is related to the use of the zeta function in quantum theory, as explained in Chapter 2. In what follows we extrapolate the results found in the quantum mechanical section to field theory; this section borrows some of its contents from sections 3.5 and 3.6 of Ramond, as well as Hatfield (section 22.2). As we mentioned earlier, the determinant in the expression (3.57),
Z_E[J] = N'_E exp{ −S_Euc[φ₀, J] } [ det( (−∂_µ∂^µ + m² + V''(φ₀))δ(x₁ − x₂) ) ]^{−1/2},    (4.1)
must be interpreted through the product of the eigenvalues of the operator. In order to discretize these eigenvalues we truncate the space (by use of a box), multiply the resulting eigenvalues, and then let the size of the box increase to infinity. First we state some preliminary results that will become useful later on. As shown in a course on partial differential equations, the heat function G(x, y, t) := Σ_n exp(−a_n t) f_n(x)f_n*(y) satisfies the heat equation (see Hatfield)
A_x G(x, y, t) = −∂G(x, y, t)/∂t.
Also, similar to our expression (1.15), we have the following result:
ζ_A(s)Γ(s) = ∫₀^∞ dt t^{s−1} ∫ d⁴x G(x, x, t);    (4.2)
this in fact follows from the orthogonality of the eigenfunctions. By this same orthogonality we have
G(x, y, t = 0) = δ(x − y).    (4.3)
This is the analytic representation of the zeta function we are seeking. A solution of the heat equation
−∂_x² G₀(x, y, t) = −∂G₀/∂t,    (4.4)
with the boundary condition G₀(x, y, t = 0) = δ(x − y), is
G₀(x, y, t) = (1/(16π²t²)) exp( −(x − y)²/(4t) ),    (4.5)
also a classic result from partial differential equations. Let us generalize some of the techniques we used in Chapter 2. Consider an operator A with positive real discrete eigenvalues a_n and eigenfunctions f_n(x), i.e. A f_n(x) = a_n f_n(x). From here we set
ζ_A(s) = Σ_n a_n^{−s},    (4.6)
which we call the zeta function associated to the operator A. The sum is over all the eigenvalues, and s is a complex variable. If the operator A were the one-dimensional harmonic oscillator Hamiltonian, then ζ_A would be essentially the Riemann zeta function (excluding the singular zero-point energy). The first observation is that
(d/ds) ζ_A(s) |_{s=0} = −Σ_n log a_n exp(−s log a_n) |_{s=0} = −log ∏_n a_n,    (4.7)
which gives the determinant
det A := ∏_n a_n = exp(−ζ'_A(0)).    (4.8)
The zeta function ζ_A is often regular at s = 0 for physically interesting operators, hence the convenience of writing this representation for det A.
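A minimal illustration of (4.8) is the classic zeta-regularized value of the formally divergent product 1·2·3·⋯, using the Riemann zeta function (a toy spectrum a_n = n, not an operator from the text):

```python
import mpmath as mp

# Zeta-regularized determinant (4.8) for the toy spectrum a_n = n, n >= 1:
# zeta_A(s) is the Riemann zeta function, zeta'(0) = -log(2*pi)/2, so
# det A = "1*2*3*..." = exp(-zeta'(0)) = sqrt(2*pi).
det_A = mp.e**(-mp.zeta(0, derivative=1))
assert abs(det_A - mp.sqrt(2 * mp.pi)) < 1e-12
```

The divergent product is assigned the finite value √(2π) precisely because ζ(s) is regular at s = 0 after analytic continuation.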
Algorithmically, we have a procedure to compute the determinant of A, as follows. First, find the solution of the heat equation subject to the delta-function initial condition (4.3). Second, insert the solution into the zeta representation (4.2) to obtain ζ_A(s). Finally, differentiate, evaluate at s = 0, and compute exp(−ζ'_A(0)). In our case the operator is
A = −∂² + m² + (λ/2)φ₀²(x),    (4.9)
where φ₀(x) is a solution of the classical equations with source J, as we discussed in Chapter 3. As we have pointed out, a solution of the heat equation with the boundary condition G₀(x, y, t = 0) = δ(x − y) is
G₀(x, y, t) = (1/(16π²t²)) exp( −(x − y)²/(4t) ).    (4.10)
However, this accounts for only part of the operator, as we want to find G(x, y, t) subject to the initial condition (4.3) which obeys the whole operator A:
[ −∂² + m² + (λ/2)φ₀²(x) ] G(x, y, t) = −∂G(x, y, t)/∂t.    (4.11)
For arbitrary fields φ₀ we need to expand the effective action as Γ_E[φ_cl] = Γ_E^{(0)}[φ_cl] + ℏΓ_E^{(1)}[φ_cl] + ⋯ (the ℏ indicates quantum terms), which allows us to write
Γ_E^{(1)}[φ_cl] = −½ ζ'_{[−∂² + m² + (λ/2)φ₀²(x)]}(0),    (4.12)
by use of (4.1) and (4.8) combined with (4.9). Note that replacing φ₀ by φ_cl is not a problem, as it introduces no new errors at quantum order O(ℏ). Also, the effective action can be written as
Γ_E[φ_cl] = ∫ d⁴x [ V(φ_cl(x)) + F(φ_cl) ∂_µφ_cl(x) ∂^µφ_cl(x) + ⋯ ].    (4.13)
To compute the quantum O(ℏ) contribution to V(φ_cl(x)) we need to consider a constant field configuration: suppose we take φ_cl(x) = v, where v is a constant independent of x. Then
Γ_E[φ_cl] = ∫ d⁴x V(v),    (4.14)
and it is proportional to the infinite volume element ∫ d⁴x, since the Euclidean space R⁴ is unbounded. This can be temporarily solved by taking the space to be the sphere S⁴, whose volume is finite. While the radius is finite we need not worry about the infrared divergence; we then let the radius of the sphere tend to infinity. Taking V out of the integral we have [see Hatfield]
V(v) ∫ d⁴x = −½ ζ'_{[−∂² + m² + (λ/2)v²]}(0).    (4.15)
We can proceed to integrate (4.11); when v is constant this yields
G(x, y, t) = (µ⁴/(16π²t²)) exp( −µ²(x − y)²/(4t) ) exp( −(m² + (λ/2)v²) t/µ² ),    (4.16)
where the µ factor needs to be explained: it has dimensions of mass, so that the rescaled t is dimensionless. Using (4.2) we obtain
ζ(s) = (1/Γ(s)) ∫₀^∞ dt t^{s−1} ∫ d⁴x (µ⁴/(16π²t²)) exp( −(m² + (λ/2)v²) t/µ² )
= (µ⁴/(16π²)) ( (m² + (λ/2)v²)/µ² )^{2−s} (Γ(s − 2)/Γ(s)) ∫ d⁴x;    (4.17)
the volume element ∫ d⁴x is present because it is in (4.15). The integration over t is valid when s > 2; however, the zeta function is defined everywhere by analytic continuation, as we know from Chapter 1. Comparing these two equations we have
V(v) = −(µ⁴/(32π²)) (d/ds) [ (1/((s − 2)(s − 1))) ( (m² + (λ/2)v²)/µ² )^{2−s} ] |_{s=0}
= ( (m² + (λ/2)v²)² / (64π²) ) [ log( (m² + (λ/2)v²)/µ² ) − 3/2 ].    (4.18)
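The s-derivative in (4.18) can be verified symbolically. A short check (with b standing for (m² + λv²/2)/µ²):

```python
import sympy as sp

# Symbolic verification of the derivative in (4.18): writing
# b = (m^2 + lam*v^2/2)/mu^2 and using Gamma(s-2)/Gamma(s) = 1/((s-2)(s-1)),
# -d/ds [ b^(2-s)/((s-2)(s-1)) ] at s = 0 equals b^2*(log(b)/2 - 3/4),
# which reproduces the (log - 3/2)/(64*pi^2) structure once the
# overall -mu^4/(32*pi^2) prefactor is restored.
s, b = sp.symbols("s b", positive=True)
expr = b**(2 - s) / ((s - 2) * (s - 1))
deriv = sp.diff(expr, s).subs(s, 0)
assert sp.simplify(-deriv - b**2 * (sp.log(b)/2 - sp.Rational(3, 4))) == 0
```

Multiplying by µ⁴/(32π²) indeed gives (µ⁴b²/64π²)(log b − 3/2), in agreement with the second equality of (4.18).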
Equipped with this functional form of V, the effective potential can be written as
V(φ_cl) = ½m²φ_cl²(x) + (λ/4!)φ_cl⁴(x) + (ℏ/(64π²)) ( m² + (λ/2)φ_cl²(x) )² [ log( (m² + (λ/2)φ_cl²)/µ² ) − 3/2 ].    (4.19)
We pause now to examine this result. Superficially, the first striking observation is that there is a strong dependence on the unknown and arbitrary scale µ². This would seem to imply that the potential is arbitrary. However, V also depends on the parameters m² and λ, which are undefined except for the fact that they appear in the classical Lagrangian. Let us take the special massless case. This yields the following:
d²V/dφ² |_{φ=0} = 0.    (4.20)
Now we define the mass squared as the coefficient of the φ² term in the Lagrangian evaluated at φ = 0. To first quantum corrections this coefficient is zero if it is classically zero. The λ term is defined to be the coefficient of the fourth derivative of V evaluated at some constant point φ = M, i.e.
λ := d⁴V/dφ⁴ |_{φ=M}.    (4.21)
We cannot take φ = 0, as with the mass-squared factor, because of the infrared divergence coming from the logarithm. When we differentiate (4.19), set m² = 0 and use (4.21), we see that the above condition requires
log( λM²/(2µ²) ) = −8/3.    (4.22)
In this case we may use M² instead of 2µ²/λ and write the result as
V(φ_cl) = (λ/4!)φ_cl⁴(x) + (λ²φ_cl⁴/(256π²)) [ log(φ_cl²/M²) − 25/6 ].    (4.23)
This was proved by Coleman and Weinberg in 1973. The main lesson is that we need to be careful with how we define the input parameters in the Lagrangian if we are to take into account quantum corrections. Again, superficially it seems that (4.23) depends on another arbitrary scale M², but in fact it does not. Given the normalization condition, if we change the scale from M² to M'², we simultaneously have to change λ to λ' by use of (4.21):
λ' = λ + (3λ²/(16π²)) log(M'/M).    (4.24)
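The compensation between (4.23) and (4.24) can be checked numerically: shifting M → M' while shifting λ according to (4.24) changes the potential only at O(λ³), beyond the order of the calculation. A toy sketch (the parameter values are illustrative assumptions):

```python
import numpy as np

# Numerical check that the Coleman-Weinberg potential (4.23) is invariant
# under M -> M' when lam is shifted by (4.24): the residual mismatch is
# O(lam^3), parametrically below the O(lam^2) quantum correction itself.
def V(phi, lam, M):
    return (lam * phi**4 / 24
            + lam**2 * phi**4 / (256 * np.pi**2) * (np.log(phi**2 / M**2) - 25/6))

lam, M, Mp, phi = 0.01, 1.0, 2.0, 1.7
lam_p = lam + 3 * lam**2 / (16 * np.pi**2) * np.log(Mp / M)
mismatch = abs(V(phi, lam_p, Mp) - V(phi, lam, M))
assert mismatch < 10 * lam**3 * phi**4      # residual is O(lam^3)
```

The O(λ²) log(M'/M) pieces cancel exactly between the tree term and the one-loop logarithm, which is the content of the invariance statement below.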
Therefore the potential (see Ramond)
V(φ_cl) = (λ'/4!)φ_cl⁴(x) + (λ'²φ_cl⁴/(256π²)) [ log(φ_cl²/M'²) − 25/6 ]    (4.25)
is indeed invariant, in the sense V(λ', M') = V(λ, M). This proves that the underlying physics remains unchanged, while our way of interpreting the coefficients changes. Let us now look more closely at the scaling of determinants and the coupling constants. The zeta function technique just used allows us to derive scaling properties for determinants. First we will need the computation of the zeta function ζ_{[−∂² + (λ/2)φ_cl²]}(0). This can be accomplished by taking the asymptotic expansion of G(x, y, t) at µ² = 1,
G(x, y, t) = e^{−εt} ( e^{−(x−y)²/(4t)} / (16π²t²) ) Σ_{n=0}^∞ a_n(x, y) tⁿ,    (4.26)
with ε > 0 as a convergence factor. The boundary condition (4.3) sets
a₀(x, x) = 1.    (4.27)
Additionally, when we insert (4.26) into the PDE (4.11) we find recursion relations for the a_n coefficients:
(x − y)^µ (∂/∂x^µ) a₀(x, y) = 0,    (4.28)
and for n = 0, 1, 2, ⋯
[ (n + 1) + (x − y)^µ ∂/∂x^µ ] a_{n+1}(x, y) = [ ∂_x² − (λ/2)φ_cl² + ε ] a_n(x, y).    (4.29)
When we compute the first terms we have
a₁(x, x) = −(λ/2)φ_cl² + ε,
a₂(x, x) = (λ²/8)φ_cl⁴(x) − (λ/4)∂_x²φ_cl²(x) − (ελ/2)φ_cl²(x) + ε²/2.    (4.30)
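For a constant field the gradient terms in (4.28)-(4.29) drop out and the recursion can be solved in closed form, which gives a quick consistency check of the coefficients (a symbolic sketch, assuming constant φ_cl):

```python
import sympy as sp

# For constant phi_cl the recursion (4.29) reduces to
# (n+1) a_{n+1} = (eps - lam*phi**2/2) a_n, with a_0 = 1 from (4.27),
# so a_n = (eps - lam*phi**2/2)^n / n! and the heat-kernel series
# resums to exp((eps - lam*phi**2/2) t), matching the exponential
# structure of the constant-field kernel.
lam, eps, phi, t = sp.symbols("lam eps phi t", positive=True)
E = eps - lam * phi**2 / 2
a = [sp.Integer(1)]                      # a_0(x,x) = 1
for n in range(6):
    a.append(E * a[n] / (n + 1))         # the reduced recursion
series = sum(a[n] * t**n for n in range(7))
expected = sp.series(sp.exp(E * t), t, 0, 7).removeO()
assert sp.simplify(series - expected) == 0
```

In particular a₁ = ε − λφ²/2 and a₂ = λ²φ⁴/8 − ελφ²/2 + ε²/2, agreeing with (4.30) once the ∂² term is dropped.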
Let us now use these results. We work under a scale change A → A' = e^{ad}A, where d is the natural dimension of A. By the definition of the zeta function we have
ζ_{A'}(s) = e^{−ads} ζ_A(s),    (4.31)
which implies that
det(e^{ad}A) = e^{ad ζ_A(0)} det(A).    (4.32)
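In finite dimensions (4.31)-(4.32) are elementary, since ζ_A(0) just counts the eigenvalues; a minimal numerical sketch (the matrix and scale factor are illustrative assumptions):

```python
import numpy as np

# Finite-dimensional illustration of (4.31)-(4.32): for an n x n positive
# matrix, zeta_A(s) = sum_i a_i^(-s) is entire and zeta_A(0) = n counts the
# eigenvalues, so det(c*A) = c^n det(A) = exp(zeta_A(0) * log c) * det(A).
A = np.diag([1.0, 3.0, 7.0])
c = 2.5                                    # c plays the role of e^{ad}
zeta_A_0 = 3                               # number of eigenvalues
lhs = np.linalg.det(c * A)
rhs = np.exp(zeta_A_0 * np.log(c)) * np.linalg.det(A)
assert abs(lhs - rhs) < 1e-9
```

For differential operators ζ_A(0) is finite only after analytic continuation, so the "number of eigenvalues" is replaced by a regularized value, and scale invariance can fail, which is exactly the anomaly exploited below.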
Let us illustrate this with an example. Under the transformation
x_µ → x'_µ = e^a x_µ,   φ_cl → φ'_cl = e^{−a} φ_cl,    (4.33)
the massless classical action
S_E[φ_cl] = −∫ d⁴x ( ½φ_cl ∂²φ_cl − (λ/4!)φ_cl⁴ )    (4.34)
is unchanged. However, the path integral corresponding to this action is not scale invariant, since in the steepest-descent approximation the effective action changes to quantum order as follows:
S_E^eff[φ_cl] → S_E^eff'[φ_cl] = S_E^eff[φ_cl] − ℏa ζ_{[−∂² + (λ/2)φ_cl²]}(0).    (4.35)
From what we have shown about the last zeta function, we can put the two coefficients (4.30) into (4.26), insert (4.26) into (4.2), and integrate out the ∂² term to obtain
ζ(0) = (1/(16π²)) ∫ d⁴x (λ²/8) φ_cl⁴(x),    (4.36)
and in terms of the effective action (4.35),
S_E^eff'[φ_cl] = S_E^eff[φ_cl] − ℏa (λ²/(8·16π²)) ∫ d⁴x φ_cl⁴(x).    (4.37)
Consequently, the effect of the transformation at quantum order has been to change the coupling constant λ as follows:
λ/4! → λ'/4! = λ/4! − (λ²/(8·16π²)) ℏa   ⇔   λ' = λ − (3λ²/(16π²)) ℏa.    (4.38)
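The infinitesimal scaling law (4.38) integrates to the familiar one-loop running coupling; a numerical sketch (step sizes and initial values are illustrative assumptions, with ℏ = 1):

```python
import numpy as np

# Iterating the infinitesimal law (4.38), lam -> lam - 3*lam^2/(16*pi^2)*da,
# is an Euler solution of d(lam)/da = -3*lam^2/(16*pi^2), whose closed form
# is lam(a) = lam0 / (1 + 3*lam0*a/(16*pi^2)).
lam0, a_total, steps = 0.5, 10.0, 100_000
da = a_total / steps
lam = lam0
for _ in range(steps):
    lam -= 3 * lam**2 / (16 * np.pi**2) * da
exact = lam0 / (1 + 3 * lam0 * a_total / (16 * np.pi**2))
assert abs(lam - exact) < 1e-6
```

The closed form makes the qualitative statements below explicit: λ decreases as the scale grows (a > 0) and increases as it shrinks.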
What this means is that the dimensionless coupling constant λ evolves as a result of quantum effects, and this evolution is in terms of scale dependence. At large scales the coupling constant decreases, indicating that the non-interacting theory is a good approximation for the asymptotic states. On the other hand, if the scale decreases, the coupling increases. Independently of how small λ was at the beginning, this increase might invalidate the results obtained from perturbation theory in λ. Moreover, this scaling law is like the one we found earlier, and both are correct to first quantum order. We recall that at any given time t a quantum mechanical system with one degree of freedom q and canonically conjugate momentum p is described in terms of the spectrum of its Hamiltonian H(p, q).
From the path integral formulation we know that if the system at an initial time t_i is measured to be in the state | q_i >, then the amplitude for the system to be found in the state | q_f > at a final time t_f is exactly
< q_f, t_f | q_i, t_i > = < q_f | e^{−i(t_f − t_i)H} | q_i >,    (4.39)
and it can be written in terms of path integrals as
< q_f | e^{−i(t_f − t_i)H} | q_i > = ∫ Dq ∫ Dp exp[ i ∫_{t_i}^{t_f} dt ( p q̇ − H(p, q) ) ],    (4.40)
where Dq denotes integration between the initial and final configurations q_i and q_f, and the dot over q denotes the derivative of q with respect to time. Quantum field theory and statistical mechanics share certain common elements, and precisely this analogy will allow us to apply path integrals to the description of dynamical systems at finite temperature (see Peskin and Schroeder, and Kleinert). The first step, as we did in Chapter 2, is to compute the partition function
Z = Tr[e^{−βH}],    (4.41)
where the constant β is
β = (kT)^{−1},    (4.42)
taking into account that the trace is taken as the sum over all the possible configurations the system is allowed to take. Note that the time is singled out. Now, the statistical weight of the system is identified with
P = Z^{−1} e^{−βH}.    (4.43)
The expectation value of any function of the dynamical variables f(q, p) is given by
< f > = Tr(fP) = Z^{−1} Tr(f e^{−βH}).    (4.44)
This shows a close similarity with zero-temperature quantum mechanics and QFT, although the degree of this similarity is not fully understood. However, we can push the analogy to calculate partition functions, in particular this one. Let us start with a system which can be regarded as a field theory in zero space dimensions.
We can compare this expression to the partition function for the same system at temperature β^{−1}:
Z = Tr[e^{−βH}] = Σ_q < q | e^{−βH} | q >.    (4.45)
Let us draw comparisons between (4.40) and (4.45). We set i(t_f − t_i) = β, or alternatively t_i = 0 and it_f = β, since the origin of time is arbitrary. Next we set q_f = q_i, which means that the initial and final configurations are the same; the only requirement is that the relevant configurations in the functional integral are periodic:
q(β) = q(0).    (4.46)
Thus the functional integration Dq is over the space of periodic functions, and the sum over q in (4.45) is implicit. When we make the comparison we can write
Z = Tr(e^{−βH}) = ∫ Dq ∫ Dp exp[ ∫₀^β dτ ( ip dq/dτ − H ) ],    (4.47)
bearing in mind again that Dq is over periodic functions. If we take a well-behaved potential V(q) we can scale the temperature dependence purely into the q integral. To do this we make the following transformations:
τ̄ = τβ^{−1},   p̄ = pβ^{1/2},   q̄ = qβ^{−1/2};    (4.48)
then the exponent of the integrand becomes
∫₀¹ dτ̄ ( ip̄ dq̄/dτ̄ − ½p̄² − βV(β^{1/2}q̄) ).    (4.49)
Furthermore, we can drop all the bars because the path integral measure is invariant under the changes (4.48); thus we can write the partition function as
Z = ∫ Dq ∫ Dp exp[ ∫₀¹ dτ ( ipq̇ − ½p² − βV(qβ^{1/2}) ) ].    (4.50)
Next, set p' = p − iq̇, so that the measures are equal,
Dp' = Dp,    (4.51)
which in turn, by completing the square in the exponent, allows us to write
Z = ∫ Dp' exp[ −½ ∫₀¹ dτ p'² ] ∫ Dq exp[ −∫₀¹ dτ ( ½q̇² + βV(qβ^{1/2}) ) ].    (4.52)
As we have done repeatedly in AQFT, we can ignore the p' integral (even though it is infinite) because it is independent of β. We call this integral N; the reason why it can be ignored is that it is always present in both the numerators and denominators of correlation functions. The only example we can tackle exactly is that of an integral that can be evaluated in closed form, i.e. a Gaussian integral. The integral becomes Gaussian when we take the potential V(q) = ½ω²q²; the partition function is then
Z = N ∫ Dq exp[ −∫₀¹ dτ ( ½q̇² + ½β²ω²q² ) ].    (4.53)
It is one of the very few types that can actually be integrated. By virtue of (4.46) we have
½ ∫₀¹ dτ (dq/dτ)² = −½ ∫₀¹ dτ q (d²/dτ²) q,    (4.54)
since the surface term is eliminated, and therefore
Z = N ∫ Dq exp[ −½ ∫₀¹ dτ q ( −d²/dτ² + ω²β² ) q ].    (4.55)
If we proceed by analogy with the discrete case we have
$$\int Dq\,\exp\left[-\frac{1}{2}\int_0^1 d\tau\,q\left(-\frac{d^2}{d\tau^2} + \omega^2\beta^2\right)q\right] = \frac{N'}{\sqrt{\det A}}, \tag{4.56}$$
where $N'$ is a constant and $A$ is the operator
$$A = -\frac{d^2}{d\tau^2} + \omega^2\beta^2 \tag{4.57}$$
with positive definite eigenvalues (it must not contain zero eigenvalues, as these would create infinities which would have to be removed). In order to prove (4.56) we need to express $q(\tau)$ in terms of its Fourier components (as in the QFT course), then transform into the normal modes of $A$ and integrate each one using
$$\int_{-\infty}^{\infty} dq_n\,\exp\left(-\frac{1}{2}a_n q_n^2\right) = \sqrt{\frac{2\pi}{a_n}}. \tag{4.58}$$
The operator $A$ acts on periodic functions with unit period, which can all be expanded in terms of the complete Fourier set $\{e^{2\pi i n\tau}\}$. The eigenvalues of $A$ are just
$$4\pi^2 n^2 + \omega^2\beta^2, \qquad n\in\mathbb{Z}. \tag{4.59}$$
Hence, multiplying them, we have the determinant of the operator $A$,
$$\det A = \prod_{n\in\mathbb{Z}}\left(4\pi^2 n^2 + \omega^2\beta^2\right). \tag{4.60}$$
Setting $x = \omega\beta$ yields
$$\frac{d}{dx}\log\det A = \sum_{n\in\mathbb{Z}}\frac{2x}{4\pi^2 n^2 + x^2} = \frac{2}{x} + 4x\sum_{n\geq 1}\frac{1}{4\pi^2 n^2 + x^2}. \tag{4.61}$$
Substituting the formula that is shown in the Appendix,
$$\coth \pi x = \frac{1}{\pi x} + \frac{2x}{\pi}\sum_{n\geq 1}\frac{1}{x^2 + n^2}, \tag{4.62}$$
we obtain
$$\frac{d}{dx}\log\det A = \coth\frac{x}{2},$$
and from here we can integrate to find
$$\log\frac{\det A}{C} = \int dx\,\coth\frac{x}{2} = 2\log\sinh\frac{x}{2} = 2\log\sinh\frac{\omega\beta}{2}. \tag{4.63}$$
Removing the logs we have the formula for the determinant,
$$\det A = C\sinh^2\frac{\omega\beta}{2}. \tag{4.64}$$
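The infinite product (4.60) diverges, and the constant $C$ in (4.64) absorbs that divergence; ratios of determinants at different values of $x = \omega\beta$, however, are finite. A numerical sketch (not part of the original text) checking that the truncated product ratio approaches $\sinh^2(x/2)/\sinh^2(y/2)$:

```python
import math

def det_ratio(x, y, nmax=200000):
    """Truncation of prod_{n in Z} (4 pi^2 n^2 + x^2)/(4 pi^2 n^2 + y^2).

    The n and -n factors coincide, and n = 0 contributes x^2/y^2.
    """
    r = x**2 / y**2
    for n in range(1, nmax + 1):
        d = 4.0 * math.pi**2 * n**2
        r *= ((d + x**2) / (d + y**2))**2
    return r

x, y = 1.3, 0.7
exact = (math.sinh(x / 2) / math.sinh(y / 2))**2
rel_err = abs(det_ratio(x, y) - exact) / exact
```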
When we clean the expression we arrive at
$$F = -\frac{1}{\beta}\log Z = -\frac{D}{\beta} + \frac{1}{2}\omega + \frac{1}{\beta}\log\left(1 - e^{-\omega\beta}\right), \tag{4.65}$$
where $D$ is another constant; the zero-point energy $\frac{1}{2}\omega$ can be identified explicitly. This formula is called the thermodynamic potential. Let us go back to our discussion of the Riemann zeta function. The heat kernel associated with the operator $A$ of (4.57) is (see Ramond; Elizalde, Odintsov and Romeo; Bytsenko and Zerbini)
$$G(t, t', \sigma) = \sum_{n\in\mathbb{Z}}\exp\left\{2\pi i n(t - t') - \left(\omega^2\beta^2 + 4\pi^2 n^2\right)\sigma\right\}, \tag{4.66}$$
and recalling our expression (4.2) for the zeta function,
$$\zeta_A[s] = \frac{1}{\Gamma(s)}\int_0^\infty d\sigma\,\sigma^{s-1}\int_0^1 dt\,\sum_{n\in\mathbb{Z}}\exp\left\{-\left(\omega^2\beta^2 + 4\pi^2 n^2\right)\sigma\right\}. \tag{4.67}$$
Scaling $\sigma$ by $\omega^2\beta^2 + 4\pi^2 n^2$ leads nowhere, as we simply come back to the expression
$$\zeta_A[s] = \sum_{n\in\mathbb{Z}}\frac{1}{\left(\omega^2\beta^2 + 4\pi^2 n^2\right)^s}. \tag{4.68}$$
The technique we need to use is somewhat messier: it involves expanding in powers of $(\omega\beta)^2$, then integrating and re-arranging the sums. With this in mind we have
$$\sum_{n\in\mathbb{Z}}e^{-4\pi^2 n^2\sigma} = 1 + 2\sum_{n\geq 1}e^{-4\pi^2 n^2\sigma}, \tag{4.69}$$
and integrating we have
$$\zeta_A[s] = (\omega\beta)^{-2s} + \frac{2}{\Gamma(s)}\sum_{k=0}^\infty\frac{(\omega\beta)^{2k}}{k!}(-1)^k\sum_{n=1}^\infty\int_0^\infty d\sigma\,\sigma^{s+k-1}e^{-4\pi^2 n^2\sigma}. \tag{4.70}$$
Now we can rescale $\sigma$ by $4\pi^2 n^2$ and identify the sum over $n$ with the Riemann zeta function $\zeta(2s) = \sum_{n=1}^\infty n^{-2s}$, which gives
$$\zeta_A[s] = (\omega\beta)^{-2s} + \frac{2\zeta(2s)}{(4\pi^2)^s} + \frac{2}{\Gamma(s)}\sum_{k=1}^\infty\frac{(\omega\beta)^{2k}(-1)^k}{k!}\,\frac{\Gamma(s+k)\,\zeta(2s+2k)}{(4\pi^2)^{s+k}}. \tag{4.71}$$
In order to differentiate $\zeta_A$ at $s = 0$ we note that the sum is well behaved at $s = 0$, and as $s \to 0$ a non-zero term arises from the derivative of $\Gamma^{-1}(s)$. Recalling our formulas from Chapter 1,
$$\zeta(0) = -\frac{1}{2}, \qquad \zeta'(0) = -\frac{1}{2}\log 2\pi, \tag{4.72}$$
we find
$$\zeta_A'[0] = -2\log(\omega\beta) - 2\log 2\pi + 2\log 2\pi + 2\sum_{k=1}^\infty\frac{(\omega\beta)^{2k}(-1)^k}{k}\,\frac{\zeta(2k)}{(4\pi^2)^k}. \tag{4.73}$$
Recalling the formula for the even values of the Riemann zeta function in terms of Bernoulli numbers,
$$\zeta(2k) = \frac{(-1)^{k+1}(2\pi)^{2k}B_{2k}}{2(2k)!}, \tag{4.74}$$
we can continue simplifying (4.73):
$$\zeta_A'[0] = -2\log(\omega\beta) - \sum_{k=1}^\infty\frac{(\omega\beta)^{2k}}{k}\,\frac{B_{2k}}{(2k)!}. \tag{4.75}$$
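Formula (4.74) is easy to confirm numerically; the sketch below (not in the original) generates the Bernoulli numbers from their defining recurrence and compares (4.74) against a direct partial sum of $\zeta(2k)$:

```python
import math
from fractions import Fraction

def bernoulli(m):
    """B_0..B_m from the recurrence sum_{j=0}^{i} C(i+1, j) B_j = 0 (i >= 1)."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for i in range(1, m + 1):
        B[i] = -Fraction(1, i + 1) * sum(Fraction(math.comb(i + 1, j)) * B[j]
                                         for j in range(i))
    return B

B = bernoulli(8)

def zeta_even(k):
    # (4.74): zeta(2k) = (-1)**(k+1) * (2 pi)**(2k) * B_{2k} / (2 * (2k)!)
    return ((-1)**(k + 1) * (2 * math.pi)**(2 * k) * float(B[2 * k])
            / (2 * math.factorial(2 * k)))

def zeta_direct(s, nmax=100000):
    return sum(n**(-s) for n in range(1, nmax + 1))

max_err = max(abs(zeta_even(k) - zeta_direct(2 * k)) for k in (1, 2, 3, 4))
```

For $k = 1$ this reproduces $\zeta(2) = \pi^2/6$ from $B_2 = 1/6$.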
Using another formula from the appendix,
$$\coth x = \frac{1}{x} + 2\sum_{k=1}^\infty\frac{(2x)^{2k-1}}{(2k)!}B_{2k}, \tag{4.76}$$
and integrating yields
$$\int dx\,\coth x = \log x + \frac{1}{2}\sum_{k=1}^\infty\frac{(2x)^{2k}}{k\,(2k)!}B_{2k}, \tag{4.77}$$
and finally, by comparing with (4.75) and setting $x = \frac{1}{2}\omega\beta$, we obtain
$$\zeta_A'[0] = -2\log(\omega\beta) + 2\log\frac{\omega\beta}{2} - 2\log\sinh\frac{\omega\beta}{2} = -\omega\beta - 2\log\left(1 - e^{-\omega\beta}\right). \tag{4.78}$$
Solving for $Z$ yields
$$\log Z = \frac{1}{2}\zeta_A'[0] = -\frac{1}{2}\omega\beta - \log\left(1 - e^{-\omega\beta}\right), \tag{4.79}$$
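As a sanity check of (4.79) (a numerical sketch added here, not part of the original argument), the zeta-regularized $\log Z$ can be compared with the logarithm of the directly summed harmonic-oscillator partition function $Z = \sum_{n\geq 0} e^{-\beta\omega(n+1/2)}$:

```python
import math

def logZ_zeta(omega, beta):
    # (4.79): log Z = -omega*beta/2 - log(1 - exp(-omega*beta))
    return -omega * beta / 2 - math.log(1 - math.exp(-omega * beta))

def logZ_direct(omega, beta, nmax=2000):
    # truncated sum over the oscillator spectrum E_n = omega*(n + 1/2)
    return math.log(sum(math.exp(-beta * omega * (n + 0.5)) for n in range(nmax)))

pairs = [(1.0, 0.5), (2.0, 1.0), (0.3, 3.0)]
max_err = max(abs(logZ_zeta(w, b) - logZ_direct(w, b)) for w, b in pairs)
```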
This is what we found earlier in the quantum mechanical case. Although this technique is more cumbersome than the previous one, it has enabled us to show the connection between the zeta function and quantum field theory. It is desirable to be able to obtain (4.19) using another regularization and to see how the two compare. In the Unification course, it was shown that spontaneous symmetry breaking requires the development of a vacuum expectation value (VEV) by the scalar field. Furthermore, as we have shown in Chapters 3 and 4, the VEV is determined by the minimisation of the effective potential,
$$\frac{dV}{d\varphi_{\mathrm{cl}}} = 0. \tag{4.80}$$
The effective potential is given exclusively by the potential $V$ of the Lagrangian if no quantum effects are taken into account. Perturbation theory allows us to include the quantum terms; however, this would clash with the non-perturbative nature of spontaneous symmetry breaking. The alternative expansion parameter is the loop expansion, which we now describe. Let us re-write the theory around (3.24) as follows. With a generating functional $X$ of the connected Green functions in a scalar field theory,
$$Z[J] = \exp\left(i\hbar^{-1}X[J]\right) = N'\int D\varphi\,\exp\left(i\hbar^{-1}\int d^4x\,(L + J\varphi)\right) \tag{4.81}$$
with the normalization constant $N'$ chosen so that (as we pointed out in Chapter 3)
$$Z[0] = 1, \qquad X[0] = 0. \tag{4.82}$$
As we have also pointed out, the one-particle-irreducible Green functions $\Gamma^{(n)}$ are generated by the effective action $\Gamma[\varphi_{\mathrm{cl}}]$ through
$$\Gamma[\varphi_{\mathrm{cl}}] = \sum_{n=1}^\infty\frac{i^n}{n!}\int d^4x_1\cdots\int d^4x_n\,\Gamma^{(n)}(x_1,\ldots,x_n)\,\varphi_{\mathrm{cl}}(x_1)\cdots\varphi_{\mathrm{cl}}(x_n). \tag{4.83}$$
In (4.81) the factor $\hbar^{-1}$ multiplies the whole Lagrangian (not just the interaction part), so each of the $V$ vertices in any diagram carries an $\hbar^{-1}$ factor and each of the $I$ internal lines carries an $\hbar$ factor. Each of the $E$ external lines of a connected Green function $\tilde G^{(E)}$ also carries a propagator. With this information, the overall factor is
$$\hbar^{-V+I+E} = \hbar^{L-1+E}, \tag{4.84}$$
by virtue of $L = I - V + 1$. Hence there is a factor of $\hbar^{L-1+E}$ in any diagram in the expansion of $\hbar^{-1}X$. The one-particle-irreducible Green functions $\tilde\Gamma^{(E)}$ have no propagators on the external lines, hence the multiplying factor is only $\hbar^L$. This means that the power of $\hbar$ indicates the number of loops. When there are no loops ($L = 0$) the only non-vanishing $\tilde\Gamma^{(E)}$ are
$$\tilde\Gamma^{(2)}(p, -p) = p^2 - \mu^2 \tag{4.85}$$
and
$$\tilde\Gamma^{(4)}(p_1, p_2, p_3, p_4) = -\lambda, \tag{4.86}$$
which give the approximation to the potential
$$V_0(\varphi_{\mathrm{cl}}) = \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2 + \frac{1}{4!}\lambda\varphi_{\mathrm{cl}}^4, \tag{4.87}$$
as classically expected, without quantum corrections. Note, however, that the role of $\hbar$ is not central to the discussion, as no assumptions about its size have been made; it is only an expansion parameter. Bailin and Love [201] give the same effective action $\Gamma[\varphi_{\mathrm{cl}}]$ as
$$\Gamma[\varphi_{\mathrm{cl}}] = -i\log N - \frac{1}{8}\lambda\int dx\,[i\Delta_F(0)]^2 - \frac{1}{2}\int dx\,\varphi_{\mathrm{cl}}(x)\left(\partial_\mu\partial^\mu + \mu^2\right)\varphi_{\mathrm{cl}}(x) - \frac{1}{4}\lambda\,i\Delta_F(0)\int dx\,\varphi_{\mathrm{cl}}^2(x) - \frac{1}{24}\lambda\int dx\,\varphi_{\mathrm{cl}}^4(x) + O(\lambda^2). \tag{4.88}$$
The terms that contain the Feynman propagator $\Delta_F(0)$ come from divergent loop integrals and hence do not contribute at zeroth order; consequently we may write
$$\Gamma_0[\varphi_{\mathrm{cl}}] = -\frac{1}{2}\int dx\,\varphi_{\mathrm{cl}}(x)\left(\partial_\mu\partial^\mu + \mu^2\right)\varphi_{\mathrm{cl}}(x) - \frac{1}{4!}\lambda\int dx\,\varphi_{\mathrm{cl}}^4(x), \tag{4.89}$$
again taking into account that $Z[0] = 1$, implying $\Gamma[0] = 0$. The first-order accuracy in $\hbar$ (the loop expansion, that is) of the effective potential $V$ and the effective action $\Gamma$ can be computed by writing
$$\varphi(x) = \varphi_0(x) + \tilde\varphi(x), \tag{4.90}$$
with $\varphi_0$ the zeroth-order approximation to $\varphi_{\mathrm{cl}}$. After this shift of the functional integration variable, $\varphi_0$ must satisfy the equation of motion
$$\left(\partial_\mu\partial^\mu + \mu^2\right)\varphi_0(x) + \frac{\lambda}{6}\varphi_0^3(x) = J(x). \tag{4.91}$$
When we plug this change into the Lagrangian density
$$L(\varphi) = \frac{1}{2}(\partial_\mu\varphi)(\partial^\mu\varphi) - \frac{1}{2}\mu^2\varphi^2 - \frac{1}{4!}\lambda\varphi^4, \tag{4.92}$$
we have the integral
$$\int d^4x\,(L + J\varphi) = \int d^4x\left(L(\varphi_0(x)) + J\varphi_0\right) + \int d^4x\left[(\partial_\mu\tilde\varphi)(\partial^\mu\varphi_0) - \mu^2\tilde\varphi\varphi_0 - \frac{1}{6}\lambda\tilde\varphi\varphi_0^3 + J\tilde\varphi\right] + \int d^4x\left[L_2(\tilde\varphi, \varphi_0) - \frac{1}{6}\lambda\tilde\varphi^3\varphi_0 - \frac{1}{24}\lambda\tilde\varphi^4\right], \tag{4.93}$$
with $L_2$ accounting for all the terms quadratic in $\tilde\varphi$ when the shift is performed, i.e.
$$L_2 = \frac{1}{2}(\partial^\mu\tilde\varphi)(\partial_\mu\tilde\varphi) - \frac{1}{2}\mu^2\tilde\varphi^2 - \frac{1}{4}\lambda\varphi_0^2\tilde\varphi^2. \tag{4.94}$$
Now, since $\varphi_0$ satisfies the classical equation of motion (4.91), the term linear in $\tilde\varphi$ in (4.93) disappears. Our next step is to re-scale the integration variable as
$$\tilde\varphi = \hbar^{1/2}\varphi \tag{4.95}$$
and plug this back into (4.81), yielding
$$Z[J] = N'\exp\left(i\hbar^{-1}\int d^4x\,[L(\varphi_0) + J\varphi_0]\right)\int D\varphi\,\exp\left(i\int d^4x\left[L_2(\varphi, \varphi_0) - \frac{\lambda}{6}\hbar^{1/2}\varphi^3\varphi_0 - \frac{\lambda}{24}\hbar\varphi^4\right]\right). \tag{4.96}$$
We can recover a Gaussian integral out of this by noting that we are only interested in the first-order corrections; hence terms proportional to $\hbar^{1/2}$ and $\hbar$ may be discarded. This procedure gives
$$\int d^4x\,L_2(\varphi, \varphi_0) = -\frac{1}{2}\int d^4x\,d^4x'\,\varphi(x')\,A(x', x, \varphi_0)\,\varphi(x) \tag{4.97}$$
with, as expected from our previous discussions,
$$A(x', x, \varphi_0) = \left[-\partial_{x'\mu}\partial_x^\mu + \mu^2 + \frac{1}{2}\lambda\varphi_0^2\right]\delta(x' - x), \tag{4.98}$$
finally obtaining for the value of $Z$
$$Z[J] \approx N'\exp\left(i\hbar^{-1}\int d^4x\,[L(\varphi_0) + J\varphi_0]\right)\exp\left(-\frac{1}{2}\operatorname{tr}\log A(x', x, \varphi_0)\right). \tag{4.99}$$
By our conditions (4.82), and since $\varphi_0[0] = 0$, we then obtain
$$Z[0] = 1 \approx N'\exp\left(-\frac{1}{2}\operatorname{tr}\log A(x', x, 0)\right). \tag{4.100}$$
Comparison with (4.81) shows that, keeping the same definition of $A$, we have
$$X_0[J] = \int d^4x\,[L(\varphi_0) + J\varphi_0], \tag{4.101}$$
$$X_1[J] = \frac{i}{2}\operatorname{tr}\log\frac{A(x', x, \varphi_0)}{A(x', x, 0)}. \tag{4.102}$$
The effective action is computed by expanding
$$\Gamma[\varphi_{\mathrm{cl}}] = \Gamma_0[\varphi_{\mathrm{cl}}] + \hbar\Gamma_1[\varphi_{\mathrm{cl}}] + O(\hbar^2).$$
Now, since $\varphi_{\mathrm{cl}}$ is a functional of $J$,
$$\varphi_{\mathrm{cl}}(x) = \frac{\delta X[J]}{\delta J(x)}, \tag{4.103}$$
we have the expansion
$$\Gamma_0[\varphi_{\mathrm{cl}}] = X_0[J] - \int d^4x\,J\varphi_0 = \int d^4x\,L(\varphi_0) = \int d^4x\left[\frac{1}{2}(\partial_\mu\varphi_{\mathrm{cl}})(\partial^\mu\varphi_{\mathrm{cl}}) - \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2 - \frac{1}{4!}\lambda\varphi_{\mathrm{cl}}^4\right], \tag{4.104}$$
which is the same as (4.89). The additional term in the expanded effective action is
$$\hbar\Gamma_1[\varphi_{\mathrm{cl}}] = X_0[J] - \Gamma_0[\varphi_{\mathrm{cl}}] - \int d^4x\,J\varphi_{\mathrm{cl}} + \hbar X_1[J] = \int d^4x\,[L(\varphi_0) + J\varphi_0] - \int d^4x\,[L(\varphi_{\mathrm{cl}}) + J\varphi_{\mathrm{cl}}] + \frac{i\hbar}{2}\operatorname{tr}\log\frac{A(x', x, \varphi_0)}{A(x', x, 0)}. \tag{4.105}$$
Because $\varphi_0$ is a solution of (4.91), the difference of the two integrals in (4.105) is of order $(\varphi_0 - \varphi_{\mathrm{cl}})^2 = O(\hbar^2)$, and we can interchange $\varphi_0$ and $\varphi_{\mathrm{cl}}$ at this level of accuracy. Therefore
$$\Gamma_1[\varphi_{\mathrm{cl}}] = \frac{i}{2}\operatorname{tr}\log\frac{A(x', x, \varphi_{\mathrm{cl}})}{A(x', x, 0)}. \tag{4.106}$$
Next, the effective potential $V(\varphi_{\mathrm{cl}})$ can be derived from $\Gamma[\varphi_{\mathrm{cl}}]$ by setting $\varphi_{\mathrm{cl}}$ constant, in which case
$$\Gamma[\varphi_{\mathrm{cl}}] = -\int d^4x\,V(\varphi_{\mathrm{cl}}). \tag{4.107}$$
Delta functions allow us to diagonalise $A(x', x, \varphi_{\mathrm{cl}})$, which is a prerequisite to properly defining the logarithmic part of (4.106). This is done as follows:
$$A(x', x, \varphi_{\mathrm{cl}}) = \left[-\partial_{x'\mu}\partial_x^\mu + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right]\delta(x' - x) = \int\frac{d^4k}{(2\pi)^4}\left[-\partial_{x'\mu}\partial_x^\mu + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right]e^{ik(x'-x)} = \int\frac{d^4k}{(2\pi)^4}\left[-k^2 + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right]e^{ik(x'-x)}. \tag{4.108}$$
Hence, performing the log and trace operations,
$$\log A(x', x, \varphi_{\mathrm{cl}}) = \int d^4k\,d^4k'\,\frac{e^{ix'k'}}{(2\pi)^2}\log\left[-k^2 + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right]\delta(k' - k)\,\frac{e^{-ixk}}{(2\pi)^2}, \tag{4.109}$$
$$\operatorname{tr}\log A = \int d^4x\,d^4x'\,\delta(x' - x)\log A(x', x, \varphi_{\mathrm{cl}}) = \int d^4x\int\frac{d^4k}{(2\pi)^4}\log\left[-k^2 + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right]. \tag{4.110}$$
Summarizing, we can now state the following: the one-loop contribution to the effective potential is
$$V_1(\varphi_{\mathrm{cl}}) = \frac{-i}{2}\int\frac{d^4k}{(2\pi)^4}\log\frac{-k^2 + \mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2}{-k^2 + \mu^2}, \tag{4.111}$$
and at this order the effective potential is approximated by
$$V(\varphi_{\mathrm{cl}}) \approx V_0(\varphi_{\mathrm{cl}}) + \hbar V_1(\varphi_{\mathrm{cl}}) = \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2 + \frac{1}{4!}\lambda\varphi_{\mathrm{cl}}^4 - \frac{i\hbar}{2}\int\frac{d^4k}{(2\pi)^4}\log\left[1 - \frac{\frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2}{k^2 - \mu^2}\right], \tag{4.112}$$
where the Greek parameters are bare parameters, i.e. the ones present in the Lagrangian. The integral term in the potential is ultraviolet divergent, and it requires dimensional regularization (see for instance Bailin and Love, p. 80):
$$I(\omega, \mu_B) = \int\frac{d^{2\omega}k}{(2\pi)^{2\omega}}\left(k^2 - \mu_B^2 + i\varepsilon\right)^{-1} = -i\int\frac{d^{2\omega}\bar k}{(2\pi)^{2\omega}}\left(\bar k^2 + \mu_B^2\right)^{-1} = \frac{i\mu_B^2}{16\pi^2}M^{2\omega-4}\left[\frac{1}{2-\omega} + \Gamma'(1) + 1 - \log\frac{\mu_B^2}{4\pi M^2} + O(\omega - 2)\right]. \tag{4.113}$$
The derivative of the gamma function at $s = 1$ is $\Gamma'(1) = -\gamma$, as can be seen by taking the logarithmic derivative of the Weierstrass product (1.12),
$$-\log\Gamma(s) = \log s + \gamma s + \sum_{k=1}^\infty\left[\log\left(1 + \frac{s}{k}\right) - \frac{s}{k}\right] \;\Rightarrow\; -\frac{\Gamma'(s)}{\Gamma(s)} = \frac{1}{s} + \gamma + \sum_{k=1}^\infty\left[\frac{1}{k+s} - \frac{1}{k}\right],$$
and hence
$$\Gamma'(1) = -1 - \gamma - \sum_{k=1}^\infty\left[\frac{1}{k+1} - \frac{1}{k}\right] = -\gamma.$$
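The value $\Gamma'(1) = -\gamma$ can be confirmed numerically (a sketch, not part of the original text) by comparing a central finite difference of `math.gamma` with the Euler–Mascheroni constant computed from its definition $\gamma = \lim_{n\to\infty}(H_n - \log n)$:

```python
import math

# central finite difference for Gamma'(1)
h = 1e-6
gamma_prime_1 = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)

# Euler-Mascheroni constant from gamma = lim (H_n - log n)
n = 10**6
euler_gamma = sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

err = abs(gamma_prime_1 + euler_gamma)   # expect Gamma'(1) = -gamma
```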
By the use of the following transformations for the bare parameters,
$$\varphi_B(x) = Z^{1/2}\varphi(x), \qquad Z\mu_B^2 = \mu^2 + \delta\mu^2, \qquad Z^2\lambda_B = \lambda + \delta\lambda, \tag{4.114}$$
we can transform the above potential (4.112) into
$$V(\varphi_{\mathrm{cl}}) = \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2 + \frac{1}{4!}\lambda\varphi_{\mathrm{cl}}^4 + \frac{1}{2}\delta\mu^2\varphi_{\mathrm{cl}}^2 + \frac{1}{4!}\delta\lambda\,\varphi_{\mathrm{cl}}^4 - \frac{i\hbar}{2}\int\frac{d^4k}{(2\pi)^4}\log\left[1 - \frac{\frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2}{k^2 - \mu^2}\right], \tag{4.115}$$
with the new Greek parameters now being the renormalized parameters rather than the bare ones. It is important to note, however, that these parameters have constraints imposed by the MS scheme. In terms of Green functions, these develop singularities in $\omega - 2$, which are neutralized by the counterterms
$$\delta\lambda = M^{4-2\omega}\left[a_0(\hat\lambda, M/\mu, \omega) + \sum_{k=1}^\infty\frac{a_k(\hat\lambda, M/\mu)}{(2-\omega)^k}\right],$$
$$\delta\mu^2 = \mu^2\left[b_0(\hat\lambda, M/\mu, \omega) + \sum_{k=1}^\infty\frac{b_k(\hat\lambda, M/\mu)}{(2-\omega)^k}\right], \tag{4.116}$$
$$\delta Z = c_0(\hat\lambda, M/\mu, \omega) + \sum_{k=1}^\infty\frac{c_k(\hat\lambda, M/\mu)}{(2-\omega)^k},$$
with $a_0$, $b_0$ and $c_0$ regular as $\omega \to 2$, and $\hat\lambda = \lambda M^{2\omega-4}$. Because the $k$-integrals have only simple poles at $\omega = 2$, it follows that $a_k = b_k = 0$ for all $k > 1$. The one-particle-irreducible Green functions of the renormalized theory behave as
$$\tilde\Gamma_2(p, -p) = p^2(1 + \delta Z_1) - \mu^2 - \delta\mu_1^2 + \frac{\hat\lambda\mu^2}{32\pi^2}\left[\frac{1}{2-\omega} + \Gamma'(1) + 1 - \log\frac{\mu^2}{4\pi M^2} + O(\omega - 2)\right]. \tag{4.117}$$
This can be shown by using
$$I_4(\mu_B) = \int\frac{d^4k}{(2\pi)^4}\left(k^2 - \mu_B^2 + i\varepsilon\right)^{-1} \tag{4.118}$$
and (4.113) on the Feynman propagator
$$\Delta_F(0) = \int\frac{d^4k}{(2\pi)^4}\left(k^2 - \mu_B^2 + i\varepsilon\right)^{-1} = \frac{i\mu^2}{16\pi^2}M^{2\omega-4}\left[\frac{1}{2-\omega} + \Gamma'(1) + 1 - \log\frac{\mu^2}{4\pi M^2} + O(\omega - 2)\right]. \tag{4.119}$$
This propagator, in turn, appears in the Green function as
$$\tilde\Gamma_2(p, -p) = p^2(1 + \delta Z_1) - \mu^2 + \frac{1}{2}\lambda\,i\Delta_F(0) - \delta\mu_1^2 + O(\lambda^2), \tag{4.120}$$
as can be seen by writing the Feynman diagrams to leading order. The computation of $\tilde\Gamma_4$ is lengthier but follows the same lines; eventually, since $\tilde\Gamma_4$ is finite in any renormalisation scheme, we have that as $\omega \to 2$ the following holds:
$$\frac{3\lambda\hat\lambda}{32\pi^2}\,\frac{1}{2-\omega} - \delta\lambda_2 \to \text{const}, \tag{4.121}$$
with
$$\delta\lambda = \sum_{k=2}^\infty\delta\lambda_k. \tag{4.122}$$
The finite part of $\delta\lambda_2$ is arbitrary. Going back to our discussion of (4.116), we now see that the poles in $2 - \omega$ are neutralized by setting
$$a_1 = \frac{3\hat\lambda^2}{32\pi^2}, \qquad b_1 = \frac{\hat\lambda}{32\pi^2}. \tag{4.123}$$
Therefore, putting this back together in the potential, one has
$$V(\varphi_{\mathrm{cl}}) = \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2\left[1 + b_0 - \frac{\hat\lambda}{32\pi^2}\left(\frac{3}{2} + \Gamma'(1) + \log 4\pi\right)\right] + \frac{1}{4!}\lambda\varphi_{\mathrm{cl}}^4\left[1 + a_0 - \frac{3\hat\lambda}{32\pi^2}\left(\frac{3}{2} + \Gamma'(1) + \log 4\pi\right)\right] + \frac{1}{64\pi^2}\left[\left(\mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2\right)^2\log\frac{\mu^2 + \frac{1}{2}\lambda\varphi_{\mathrm{cl}}^2}{M^2} - \mu^4\log\frac{\mu^2}{M^2}\right]. \tag{4.124}$$
It turns out that in the MS scheme we may choose (see Rajantie, or Bailin and Love)
$$\delta\lambda^{\mathrm{MS}} = \frac{3\lambda\hat\lambda}{32\pi^2}\left[\frac{1}{2-\omega} + \Gamma'(1) + \log 4\pi\right] + O(\lambda^3),$$
$$a_0^{\mathrm{MS}} = \frac{3\hat\lambda^2}{32\pi^2}\left[\Gamma'(1) + \log 4\pi\right] + O(\lambda^3), \qquad a_1^{\mathrm{MS}} = \frac{3\hat\lambda^2}{32\pi^2} + O(\lambda^3). \tag{4.125}$$
This choice gets rid of the $\Gamma'(1) + \log 4\pi$ factors. However, and here is the main objective of this comparison with zeta function regularization, we could also renormalize $V(\varphi_{\mathrm{cl}})$ by writing it as a function of the physical mass and coupling constant. This can be accomplished by choosing $a_0^{\mathrm{phys}}$ and $b_0^{\mathrm{phys}}$ such that the following hold:
$$\left.\frac{d^2V}{d\varphi_{\mathrm{cl}}^2}\right|_{\varphi_{\mathrm{cl}}=0} = \mu^2, \qquad \left.\frac{d^4V}{d\varphi_{\mathrm{cl}}^4}\right|_{\varphi_{\mathrm{cl}}=M} = \lambda, \tag{4.126}$$
as we did in (4.20) and (4.21). In the case that $\mu^2$ is small, there can be radiative corrections that generate spontaneous symmetry breaking. Performing the second and fourth derivatives of (4.126), along with the discussion in (4.20) to (4.22), leads to the same potential (4.25), namely
$$V(\varphi_{\mathrm{cl}}) = \frac{1}{2}\mu^2\varphi_{\mathrm{cl}}^2 + \frac{\lambda}{4!}\varphi_{\mathrm{cl}}^4 + \frac{\lambda^2\varphi_{\mathrm{cl}}^4}{256\pi^2}\left[\log\frac{\varphi_{\mathrm{cl}}^2}{M^2} - \frac{25}{6}\right]. \tag{4.127}$$
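The $25/6$ in (4.127) is precisely what makes the renormalization condition $d^4V/d\varphi_{\mathrm{cl}}^4|_{\varphi_{\mathrm{cl}}=M} = \lambda$ hold: at $\varphi_{\mathrm{cl}} = M$ the fourth derivative of the one-loop term cancels. A numerical sketch with hypothetical sample values of $\mu^2$, $\lambda$ and $M$ (not taken from the text):

```python
import math

mu2, lam, M = 1.7, 0.4, 2.0   # hypothetical sample parameters

def V(phi):
    # (4.127)
    loop = lam**2 * phi**4 / (256 * math.pi**2) * (math.log(phi**2 / M**2) - 25 / 6)
    return mu2 * phi**2 / 2 + lam * phi**4 / 24 + loop

def d4V(phi, h=1e-2):
    # fourth-order central finite difference
    return (V(phi - 2 * h) - 4 * V(phi - h) + 6 * V(phi)
            - 4 * V(phi + h) + V(phi + 2 * h)) / h**4

err = abs(d4V(M) - lam)
```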
Effectively we have compared zeta function regularization with the one-loop expansion and seen that both yield the same result in different fashions. The zeta function technique makes use of more powerful tools (especially, more sophisticated special functions), whereas the one-loop expansion mostly requires the $\Gamma$ function and its derivative.
REFERENCES

The above discussion of the role of the zeta function in quantum field theory follows Hatfield [611 to 616], Ramond [81 to 93] and the first three chapters of Elizalde, Odintsov, and Romeo. Additionally, the exposition is the progression of the ideas presented in Chapter 2 (zeta functions in quantum mechanics) combined with Chapter 3 (path integrals in quantum fields). Finally, the discussion of the effective potential at one-loop order follows Bailin and Love [200 – 207] and other remarks.
Appendix

Claim 1: The following holds:
$$\int dx_1\cdots\int dx_n\,\exp\left[i\lambda\left\{(x_1 - a)^2 + (x_2 - x_1)^2 + \cdots + (b - x_n)^2\right\}\right] = \sqrt{\frac{i^n\pi^n}{(n+1)\lambda^n}}\,\exp\left[\frac{i\lambda}{n+1}(b - a)^2\right].$$
Proof (by induction). Assume the formula is true for $n$ and show it is true for $n + 1$:
$$\int dx_1\cdots\int dx_{n+1}\,\exp\left[i\lambda\left\{(x_1 - a)^2 + \cdots + (b - x_{n+1})^2\right\}\right] = \sqrt{\frac{i^n\pi^n}{(n+1)\lambda^n}}\int dx_{n+1}\,\exp\left[\frac{i\lambda}{n+1}(x_{n+1} - a)^2\right]\exp\left[i\lambda(b - x_{n+1})^2\right].$$
The exponent in the integrand can be worked out as follows, writing $y = x_{n+1} - a$:
$$\frac{1}{n+1}(x_{n+1} - a)^2 + (b - x_{n+1})^2 = \frac{n+2}{n+1}y^2 - 2y(b - a) + (b - a)^2 = \frac{n+2}{n+1}\left[y - \frac{n+1}{n+2}(b - a)\right]^2 + \frac{1}{n+2}(b - a)^2.$$
Finally, let $z = y - \frac{n+1}{n+2}(b - a)$, so that the integral becomes
$$\sqrt{\frac{i^n\pi^n}{(n+1)\lambda^n}}\,\exp\left[\frac{i\lambda}{n+2}(b - a)^2\right]\int dz\,\exp\left[i\lambda\,\frac{n+2}{n+1}z^2\right] = \sqrt{\frac{i^{n+1}\pi^{n+1}}{(n+2)\lambda^{n+1}}}\,\exp\left[\frac{i\lambda}{n+2}(b - a)^2\right],$$
and this concludes the proof by induction. □
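The completing-the-square step above is pure algebra; a quick numerical verification over random samples (a sketch, not part of the proof):

```python
import random

def gap(n, y, d):
    """|LHS - RHS| of  y^2/(n+1) + (d - y)^2
       = (n+2)/(n+1) * (y - (n+1)/(n+2)*d)^2 + d^2/(n+2),  with d = b - a."""
    lhs = y**2 / (n + 1) + (d - y)**2
    rhs = (n + 2) / (n + 1) * (y - (n + 1) / (n + 2) * d)**2 + d**2 / (n + 2)
    return abs(lhs - rhs)

random.seed(1)
max_gap = max(gap(n, random.uniform(-3, 3), random.uniform(-3, 3))
              for n in range(1, 7) for _ in range(25))
```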
Claim 3: If $M$ is a symmetric $n \times n$ matrix with real-valued elements $M_{ij}$, and $q$ and $J$ are $n$-component vectors with components $q_i$ and $J_i$ respectively, then
$$Z(J) = \int d^n q\,\exp\left(-\frac{1}{2}q^T M q + J^T q\right) = \sqrt{\frac{(2\pi)^n}{\det M}}\,\exp\left(\frac{1}{2}J^T M^{-1} J\right).$$
Proof. The process is to diagonalize the matrix $M$ as $M = \Lambda\tilde M\Lambda^T$, where
$$\tilde M = \begin{pmatrix} \tilde m_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \tilde m_n \end{pmatrix}, \qquad \Lambda^T\Lambda = 1, \qquad \det\Lambda = 1.$$
The integral becomes
$$Z(J) = \int d^n q\,\exp\left(-\frac{1}{2}q^T\Lambda\tilde M\Lambda^T q + J^T q\right).$$
Next we define $\tilde q = \Lambda^T q$ and $\tilde J = \Lambda^T J$, from which it follows that $d^n q = d^n\tilde q\,\det\Lambda = d^n\tilde q$, and
$$Z(J) = \int d^n\tilde q\,\exp\left(-\frac{1}{2}\tilde q^T\tilde M\tilde q + \tilde J^T\tilde q\right) = \prod_i\int_{-\infty}^{\infty} d\tilde q_i\,\exp\left(-\frac{1}{2}\tilde m_i\tilde q_i^2 + \tilde J_i\tilde q_i\right) = \prod_i\sqrt{\frac{2\pi}{\tilde m_i}}\,\exp\left(\frac{\tilde J_i^2}{2\tilde m_i}\right) = (2\pi)^{n/2}\left(\prod_i\tilde m_i\right)^{-1/2}\exp\left(\sum_i\frac{\tilde J_i^2}{2\tilde m_i}\right).$$
Finally we note that the inverse of the diagonal matrix is
$$\tilde M^{-1} = \begin{pmatrix} \tilde m_1^{-1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \tilde m_n^{-1} \end{pmatrix},$$
that the last product is the determinant of the matrix, $\prod_i\tilde m_i = \det\tilde M = \det M$, and consequently the result follows:
$$Z(J) = \sqrt{\frac{(2\pi)^n}{\det M}}\,\exp\left(\frac{1}{2}\tilde J^T\tilde M^{-1}\tilde J\right) = \sqrt{\frac{(2\pi)^n}{\det M}}\,\exp\left(\frac{1}{2}J^T M^{-1} J\right).$$
□
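Claim 3 can be spot-checked numerically for $n = 2$ by brute-force integration on a grid (a sketch with arbitrary sample values for $M$ and $J$, not part of the proof):

```python
import math

M = [[2.0, 0.5], [0.5, 1.5]]       # symmetric positive definite sample
J = [0.3, -0.7]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / detM, -M[0][1] / detM],
        [-M[1][0] / detM, M[0][0] / detM]]

def integrand(q1, q2):
    quad = M[0][0] * q1 * q1 + 2 * M[0][1] * q1 * q2 + M[1][1] * q2 * q2
    return math.exp(-0.5 * quad + J[0] * q1 + J[1] * q2)

h, L = 0.05, 8.0                    # grid step and half-width
grid = [-L + i * h for i in range(int(2 * L / h) + 1)]
numeric = sum(integrand(a, b) for a in grid for b in grid) * h * h

JMJ = sum(J[i] * Minv[i][j] * J[j] for i in range(2) for j in range(2))
exact = 2 * math.pi / math.sqrt(detM) * math.exp(0.5 * JMJ)
rel_err = abs(numeric - exact) / exact
```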
Claim: One has the following:
$$W_E[J] = N_E\,\exp\left\{S_{\mathrm{Euc}}[\phi(x_0), J]\right\}\int D\phi\,\exp\left[-\frac{1}{2}\int d^4x_1\,d^4x_2\,\phi(x_1)\frac{\delta^2 S_{\mathrm{Euc}}}{\delta\phi(x_1)\delta\phi(x_2)}\phi(x_2)\right] = N_E'\,\frac{\exp\left\{S_{\mathrm{Euc}}[\phi(x_0), J]\right\}}{\sqrt{\det\left[\left(-\partial_\mu\partial^\mu + m^2 + V''[\phi(x_0)]\right)\delta(x_1 - x_2)\right]}}.$$
Proof. This follows from Claim 3 with the action
$$S_E[\phi, J] = \int d^4x\left[\frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi + \frac{1}{2}m^2\phi^2 + V(\phi) - J\phi\right]$$
as expanded in (3.54), using
$$\frac{\delta^2 S_{\mathrm{Euc}}}{\delta\phi(x_1)\delta\phi(x_2)} = \delta(x_1 - x_2)\left[-\partial_\mu\partial^\mu + m^2 + V''[\phi(x_1)]\right]$$
in the Gaussian functional integral above. □

The formulas
$$\coth x = \frac{1}{x} + 2\sum_{k=1}^\infty\frac{(2x)^{2k-1}}{(2k)!}B_{2k} \qquad\text{and}\qquad \coth\pi x = \frac{1}{\pi x} + \frac{2x}{\pi}\sum_{n\geq 1}\frac{1}{x^2 + n^2}$$
are quoted from Gradshteyn and Ryzhik, p. 35. The first is the Laurent-type expansion of $\coth x$. There is very little added value in reproducing the proofs here.
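Both quoted formulas are easy to confirm numerically; the sketch below (not in the original) checks the partial-fraction expansion directly, and the first few terms of the Laurent series using $B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42$:

```python
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def coth_partial_fractions(x, nmax=10**5):
    # coth(pi x) = 1/(pi x) + (2x/pi) sum_{n>=1} 1/(x^2 + n^2)
    s = sum(1.0 / (x * x + n * n) for n in range(1, nmax + 1))
    return 1.0 / (math.pi * x) + (2.0 * x / math.pi) * s

def coth_series(x):
    # first terms of the Laurent series: coth x = 1/x + x/3 - x^3/45 + 2x^5/945 + O(x^7)
    return 1 / x + x / 3 - x**3 / 45 + 2 * x**5 / 945

err_pf = abs(coth_partial_fractions(0.8) - coth(math.pi * 0.8))
err_series = abs(coth_series(0.3) - coth(0.3))
```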
Grassmann Numbers

Let us introduce some notation first. For convenience, we will use the same equation labelling as in Chapter 2 to ease the reading. The following presentation of Grassmann numbers follows the notes of A. Rajantie and Peskin and Schroeder. Ordinary commuting numbers will be denoted c-numbers (these can be real or complex). Now let $n$ generators $\{\theta_1, \ldots, \theta_n\}$ satisfy the anti-commutation relations
$$\{\theta_i, \theta_j\} = 0 \quad \forall i, j. \tag{2.70}$$
Then the set of linear combinations of the $\{\theta_i\}$ with c-number coefficients is called the Grassmann numbers, and the algebra generated by the $\{\theta_i\}$ is called the Grassmann algebra, denoted $\Lambda_n$. Let us take an arbitrary element $g$ of this algebra and expand it as
$$g(\theta) = g_0 + \sum_{i=1}^n g_i\theta_i + \sum_{i<j} g_{ij}\theta_i\theta_j + \cdots = \sum_{0\leq k\leq n}\frac{1}{k!}\sum_{\{i\}} g_{i_1,\ldots,i_k}\,\theta_{i_1}\cdots\theta_{i_k}, \tag{2.71}$$
where $g_0, g_i, g_{ij}, \ldots, g_{i_1,\ldots,i_k}$ are c-numbers that are anti-symmetric under the exchange of any two indices. Alternatively, we can write $g$ as
$$g(\theta) = \sum_{k_i = 0, 1}\tilde g_{k_1,\ldots,k_n}\,\theta_1^{k_1}\cdots\theta_n^{k_n}. \tag{2.72}$$
It is impossible for the set of Grassmann numbers to be an ordered set because a generator $\theta_k$ does not have a magnitude. The only number that is both a c-number and a Grassmann number is zero; moreover, a Grassmann number commutes with a c-number. From the discussion above it follows that
$$\theta_k^2 = 0, \qquad \theta_{k_1}\theta_{k_2}\cdots\theta_{k_n} = \varepsilon_{k_1 k_2\cdots k_n}\theta_1\theta_2\cdots\theta_n, \qquad \theta_{k_1}\theta_{k_2}\cdots\theta_{k_m} = 0 \;(m > n). \tag{2.73}$$
The tensor $\varepsilon_{k_1 k_2\cdots k_n}$ is the Levi-Civita symbol, defined as
$$\varepsilon_{k_1 k_2\cdots k_n} = \begin{cases} +1 & \text{if } \{k_1\cdots k_n\} \text{ is an even permutation of } \{1\cdots n\}, \\ -1 & \text{if } \{k_1\cdots k_n\} \text{ is an odd permutation of } \{1\cdots n\}, \\ 0 & \text{otherwise.} \end{cases}$$
Functions of Grassmann numbers are defined in terms of the Taylor expansion of the function. If $n = 1$ we have the simple expression
$$e^\theta = 1 + \theta, \tag{2.74}$$
since the terms $O(\theta^2)$ are zero.
Our next step is to develop the theory of differentiation and integration of Grassmann variables. This theory has a few surprising features; for instance, differentiation is the same process as integration. We assume that the differential operator acts on a function from the left; if $\theta_i$ and $\theta_j$ are two Grassmann variables, then
$$\frac{\partial\theta_j}{\partial\theta_i} = \frac{\partial}{\partial\theta_i}\theta_j = \delta_{ij}. \tag{2.74}$$
Similarly, we assume that the differential operator anti-commutes with the $\theta_k$. The product rule therefore takes the slightly different form
$$\frac{\partial}{\partial\theta_i}(\theta_j\theta_k) = \frac{\partial\theta_j}{\partial\theta_i}\theta_k - \theta_j\frac{\partial\theta_k}{\partial\theta_i} = \delta_{ij}\theta_k - \delta_{ik}\theta_j. \tag{2.75}$$
Moreover, the following properties hold:
$$\frac{\partial}{\partial\theta_i}\frac{\partial}{\partial\theta_j} + \frac{\partial}{\partial\theta_j}\frac{\partial}{\partial\theta_i} = 0 \;\Rightarrow\; \frac{\partial^2}{\partial\theta_i^2} = 0, \tag{2.76}$$
the last equation being termed nilpotency, and finally
$$\frac{\partial}{\partial\theta_i}\theta_j + \theta_j\frac{\partial}{\partial\theta_i} = \delta_{ij}. \tag{2.77}$$
Let us now move on to integration. To this end, we adopt the notation $D$ for differentiation with respect to a Grassmann variable and $I$ for integration. Let us suppose that these operations satisfy the relations
$$ID = DI = 0, \qquad D(A) = 0 \;\Rightarrow\; I(BA) = I(B)A, \tag{2.78}$$
where $A$ and $B$ are arbitrary functions of Grassmann variables. The first part of the first equation states that the integral of a derivative vanishes (it is a surface term, which we set to zero), whereas the second part states that the derivative of an integral vanishes. The last relation states that if the derivative of a function is zero, then it can be taken out of the integral. These relations are satisfied when $D$ is proportional to $I$, and for normalization purposes we set $I = D$ and write
$$\int d\theta\,f(\theta) = \frac{\partial f(\theta)}{\partial\theta}. \tag{2.79}$$
From the previous definition it follows that
$$\int d\theta = \frac{\partial 1}{\partial\theta} = 0, \qquad \int d\theta\,\theta = \frac{\partial\theta}{\partial\theta} = 1. \tag{2.80}$$
We can generalize this to $n$ generators $\{\theta_i\}$ as
$$\int d\theta_1\,d\theta_2\cdots d\theta_n\,f(\theta_1\theta_2\cdots\theta_n) = \frac{\partial}{\partial\theta_1}\frac{\partial}{\partial\theta_2}\cdots\frac{\partial}{\partial\theta_n}f(\theta_1\theta_2\cdots\theta_n). \tag{2.81}$$
In a theory where differentiation is equivalent to integration we should expect some strange behaviour. To see this, consider the simplest case of a single generator and change variables to $\theta' = a\theta$, where $a$ is a complex number; then
$$\int d\theta\,f(\theta) = \frac{\partial f(\theta)}{\partial\theta} = a\,\frac{\partial f(\theta'/a)}{\partial\theta'} = a\int d\theta'\,f(\theta'/a), \tag{2.82}$$
which implies that $d\theta' = (1/a)\,d\theta$. The extension to the general case with $n$ generators,
$\theta_i \to \theta_i' = a_{ij}\theta_j$, yields
$$\int d\theta_1\,d\theta_2\cdots d\theta_n\,f(\theta) = \frac{\partial}{\partial\theta_1}\cdots\frac{\partial}{\partial\theta_n}f(\theta) = \sum_{k_i=1}^n\frac{\partial\theta_{k_1}'}{\partial\theta_1}\cdots\frac{\partial\theta_{k_n}'}{\partial\theta_n}\,\frac{\partial}{\partial\theta_{k_1}'}\cdots\frac{\partial}{\partial\theta_{k_n}'}f(a^{-1}\theta') = \sum_{k_i=1}^n\varepsilon_{k_1\cdots k_n}a_{k_1 1}\cdots a_{k_n n}\frac{\partial}{\partial\theta_{k_1}'}\cdots\frac{\partial}{\partial\theta_{k_n}'}f(a^{-1}\theta') = \det a\int d\theta_1'\cdots d\theta_n'\,f(a^{-1}\theta'). \tag{2.83}$$
Consequently the measure transforms with the Jacobian
$$d\theta_1\,d\theta_2\cdots d\theta_n = (\det a)\,d\theta_1'\cdots d\theta_n'. \tag{2.84}$$
The delta function of a single Grassmann variable is defined in a similar fashion to that of c-numbers:
$$\int d\theta\,\delta(\theta - z)f(\theta) = f(z). \tag{2.85}$$
However, in the case of Grassmann variables we can obtain a closed expression for the delta function. If we set $f(\theta) = a + b\theta$ in the definition of the delta function we have
$$\int d\theta\,\delta(\theta - z)(a + b\theta) = a + bz, \tag{2.86}$$
and this means that
$$\delta(\theta - z) = \theta - z. \tag{2.87}$$
Again, we can extend this to $n$ generators if we are careful about the order of the variables:
$$\delta^n(\theta - z) = (\theta_n - z_n)\cdots(\theta_2 - z_2)(\theta_1 - z_1). \tag{2.88}$$
We can find an integral representation of the delta function by considering complex Grassmann variables, which we develop below. Consider
$$\int d\xi\,e^{i\xi\theta} = \int d\xi\,(1 + i\xi\theta) = i\theta, \tag{2.89}$$
so that we have
$$\delta(\theta) = \theta = -i\int d\xi\,e^{i\xi\theta}. \tag{2.90}$$
One of the most crucial applications of Grassmann variables is the Grassmann Gaussian integral, which will be fundamental when developing the path integral formalism for fermions. Let us evaluate the integral
$$I = \int d\theta_1^*\,d\theta_1\cdots d\theta_n^*\,d\theta_n\,\exp\left(\sum_{ij}\theta_i^* M_{ij}\theta_j\right), \tag{2.91}$$
where it is important to stress that $\{\theta_i\}$ and $\{\theta_i^*\}$ are two independent sets of Grassmann variables, and $M$ is an $n \times n$ c-number matrix. The formula for the transformation of the measure solves the problem of the computation. Setting $\theta_i' = \sum_j M_{ij}\theta_j$ yields
$$I = \det M\int d\theta_1^*\,d\theta_1'\cdots d\theta_n^*\,d\theta_n'\,\exp\left(\sum_i\theta_i^*\theta_i'\right) = \det M\prod_i\int d\theta_i^*\,d\theta_i'\,(1 + \theta_i'\theta_i^*) = \det M. \tag{2.92}$$
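The rules (2.70)–(2.81) and the Gaussian result (2.92) can be verified mechanically with a small symbolic implementation. The class below is a sketch written for this purpose (the names are ours, not from any library): it represents an element of $\Lambda_n$ as a dictionary mapping sorted generator tuples to coefficients, tracks signs by counting transpositions, and implements the Berezin integral as the left derivative (2.79):

```python
import math

class Grassmann:
    """Element of a Grassmann algebra: {sorted generator tuple: coefficient}."""
    def __init__(self, terms=None):
        self.terms = dict(terms or {})

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1.0})

    def __add__(self, other):
        out = dict(self.terms)
        for m, c in other.terms.items():
            out[m] = out.get(m, 0.0) + c
        return Grassmann(out)

    def __mul__(self, other):
        if not isinstance(other, Grassmann):          # scalar multiple
            return Grassmann({m: c * other for m, c in self.terms.items()})
        out = {}
        for m1, c1 in self.terms.items():
            for m2, c2 in other.terms.items():
                if set(m1) & set(m2):                 # theta_i^2 = 0
                    continue
                mono, sign = list(m1 + m2), 1
                for i in range(len(mono)):            # bubble sort, tracking sign
                    for j in range(len(mono) - 1 - i):
                        if mono[j] > mono[j + 1]:
                            mono[j], mono[j + 1] = mono[j + 1], mono[j]
                            sign = -sign
                key = tuple(mono)
                out[key] = out.get(key, 0.0) + sign * c1 * c2
        return Grassmann(out)

    __rmul__ = __mul__

def integrate(f, i):
    """Berezin integral over theta_i = left derivative with respect to theta_i."""
    out = {}
    for m, c in f.terms.items():
        if i in m:
            pos = m.index(i)
            key = m[:pos] + m[pos + 1:]
            out[key] = out.get(key, 0.0) + c * (-1)**pos
    return Grassmann(out)

def gexp(f, order=4):
    """exp of a nilpotent element via its terminating power series."""
    result, power = Grassmann({(): 1.0}), Grassmann({(): 1.0})
    for k in range(1, order + 1):
        power = power * f
        result = result + power * (1.0 / math.factorial(k))
    return result

# generators: theta*_1 -> 0, theta_1 -> 1, theta*_2 -> 2, theta_2 -> 3
ts1, t1, ts2, t2 = (Grassmann.gen(i) for i in range(4))
M = [[1.5, 0.3], [-0.2, 2.0]]
S = Grassmann({})
for i, tsi in enumerate((ts1, ts2)):
    for j, tj in enumerate((t1, t2)):
        S = S + (tsi * tj) * M[i][j]

# integrate exp(S) with measure d theta*_1 d theta_1 d theta*_2 d theta_2
# (the rightmost differential acts first, as in (2.81))
f = gexp(S)
for label in (3, 2, 1, 0):
    f = integrate(f, label)
gaussian = f.terms.get((), 0.0)
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

With these conventions the Gaussian integral reproduces $\det M$, and the generators anti-commute and square to zero as required by (2.70) and (2.73).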
Complex conjugation is defined as
$$(\theta_i)^* = \theta_i^*, \qquad (\theta_i^*)^* = \theta_i, \tag{2.93}$$
and for products of Grassmann variables we have
$$(\theta_i\theta_j)^* = \theta_j^*\theta_i^*. \tag{2.94}$$
The reasoning behind (2.94) is that the product $\theta_i^*\theta_i$, which we want to behave as a real c-number, satisfies $(\theta_i^*\theta_i)^* = \theta_i^*\theta_i$ only with this reversal rule.
Let us recall that the annihilation and creation operators $c$ and $c^\dagger$ satisfy the anti-commutation relations $\{c, c^\dagger\} = 1$ and $\{c, c\} = \{c^\dagger, c^\dagger\} = 0$, and that the number operator $N = c^\dagger c$ has eigenvectors $|0\rangle$ and $|1\rangle$. We are now in a position to study the Hilbert space $\Omega$ spanned by these vectors, i.e.
$$\Omega = \operatorname{span}\{|0\rangle, |1\rangle\}. \tag{2.95}$$
An arbitrary vector $|\omega\rangle \in \Omega$ can be written in the form
$$|\omega\rangle = |0\rangle\omega_0 + |1\rangle\omega_1, \tag{2.96}$$
with $\omega_i \in \mathbb{C}$, $i = 0, 1$. Next we define the coherent states
$$|\theta\rangle = |0\rangle + |1\rangle\theta, \qquad \langle\theta| = \langle 0| + \theta^*\langle 1|, \tag{2.97}$$
where $\theta$ and $\theta^*$ are Grassmann numbers. The coherent states are eigenstates of $c$ and $c^\dagger$ respectively, that is,
$$c|\theta\rangle = |0\rangle\theta = \theta|\theta\rangle, \qquad \langle\theta|c^\dagger = \theta^*\langle 0| = \theta^*\langle\theta|. \tag{2.98}$$
It can be shown fairly easily that the following identities hold:
$$\langle\theta'|\theta\rangle = 1 + \theta'^*\theta = e^{\theta'^*\theta}, \qquad \langle\theta|g\rangle = g_0 + \theta^* g_1,$$
$$\langle\theta|c^\dagger|g\rangle = \langle\theta|1\rangle g_0 = \theta^* g_0 = \theta^*\langle\theta|g\rangle, \qquad \langle\theta|c|g\rangle = \langle\theta|0\rangle g_1 = \frac{\partial}{\partial\theta^*}\langle\theta|g\rangle. \tag{2.99}$$
Finally, we show how matrix elements are represented, and the completeness relation. Let
$$h(c, c^\dagger) = h_{00} + h_{10}c^\dagger + h_{01}c + h_{11}c^\dagger c, \qquad h_{ij}\in\mathbb{C}, \tag{2.100}$$
be an arbitrary function of $c$ and $c^\dagger$. The matrix elements of $h$ can be written in terms of scalar products as
$$\langle 0|h|0\rangle = h_{00}, \qquad \langle 0|h|1\rangle = h_{01}, \qquad \langle 1|h|0\rangle = h_{10}, \qquad \langle 1|h|1\rangle = h_{00} + h_{11}. \tag{2.101}$$
From these scalar products we can form the more general product
$$\langle\theta|h|\theta'\rangle = \left(h_{00} + \theta^* h_{10} + h_{01}\theta' + \theta^*\theta' h_{11}\right)e^{\theta^*\theta'}. \tag{2.102}$$
Moreover, one has
$$\int d\theta^*\,d\theta\,|\theta\rangle\langle\theta|\,e^{-\theta^*\theta} = \int d\theta^*\,d\theta\,\left(|0\rangle + |1\rangle\theta\right)\left(\langle 0| + \theta^*\langle 1|\right)\left(1 - \theta^*\theta\right) = |0\rangle\langle 0| + |1\rangle\langle 1| = I, \tag{2.103}$$
and therefore the completeness relation is
$$\int d\theta^*\,d\theta\,|\theta\rangle\langle\theta|\,e^{-\theta^*\theta} = I. \tag{2.104}$$
REFERENCES

Apostol, Tom, "Introduction to Analytic Number Theory"
Bailin and Love, "Introduction to Gauge Field Theory"
Elizalde, Odintsov, and Romeo, "Zeta Regularization Techniques with Applications"
Freitag, Eberhard and Busam, Rolf, "Complex Analysis"
Grosche, C. and Steiner, F., "Handbook of Feynman Path Integrals"
Ingham, A. E., "The Distribution of Prime Numbers"
Kleinert, Hagen, "Path Integrals in Quantum Mechanics, Statistics, Polymer Physics and Financial Markets"
Hatfield, Brian, "Quantum Field Theory of Point Particles and Strings"
Peskin, Michael E. and Schroeder, Daniel V., "An Introduction to Quantum Field Theory"
Rajantie, Arttu, "Advanced Quantum Field Theory Course Notes"
Ramond, Pierre, "Field Theory: A Modern Primer"
Rivers, Ray J., "Path Integral Methods in Quantum Field Theory"
Sondow, Jonathan and Weisstein, Eric W., "Riemann Zeta Function", from MathWorld – A Wolfram Web Resource, http://mathworld.wolfram.com/RiemannZetaFunction.html
Titchmarsh, E. C. and Heath-Brown, Roger, "The Theory of the Riemann Zeta-Function"
Weinberg, Steven, "The Quantum Theory of Fields: Volume I, Foundations"
Wikipedia, http://en.wikipedia.org/wiki/Gamma_function
Whittaker and Watson, "A Course of Modern Analysis"