Notes on Quantum Chemistry

http://vergil.chemistry.gatech.edu/notes/

These notes on Quantum Chemistry were written by Professor David Sherrill as a resource for students at the college through post-graduate level interested in quantum mechanics and how it is used in chemistry. A Brief Review of Elementary Quantum Chemistry has been used by college students and curious high-school students worldwide. Other notes have been used in the NSF Workshop on Theoretical and Computational Chemistry, a week-long workshop which introduces faculty at 2-year and 4-year colleges to modern techniques in chemical theory and computation for incorporation into their research, classroom teaching, and laboratory modules.

Introductory Quantum Chemistry

A Brief Review of Elementary Quantum Chemistry (PDF)

Computational Quantum Chemistry

An Introduction to Electronic Structure Theory (PDF)

Atomic Term Symbols

Term Symbol Example

Elementary Linear Algebra

Computational Chemistry

Potential Energy Surfaces

Short Introduction to Hartree-Fock Molecular Orbital Theory

Basis Sets

Introduction to Electron Correlation

Geometry Optimization

Molecular Vibrations

Computing Thermodynamic Quantities

Nondynamical Electron Correlation

Counterpoise Correction and Basis Set Superposition Error

Practical Advice for Quantum Chemical Computations

General Quantum Chemistry

The Born-Oppenheimer Approximation (PDF)

Time Evolution of Wavefunctions (PDF)

Angular Momentum Summary (PDF)


The Unitary Group Form of the Hamiltonian (PDF)

Introduction to Hartree-Fock Theory (PDF)

Permutational Symmetries of Integrals (PDF)

Notes on Excited Electronic States (PDF)

The Quantum Harmonic Oscillator (PDF)

Assigning Symmetries of Vibrational Modes (PDF)

Notes on Elementary Linear Algebra (PDF)

Notes on Density Fitting (PDF)


Last Revised on 27 January 2001

A Brief Review of Elementary Quantum Chemistry (PDF)

Contents

The Motivation for Quantum Mechanics

o The Ultraviolet Catastrophe

o The Photoelectric Effect

o Quantization of Electronic Angular Momentum

o Wave-Particle Duality

The Schrödinger Equation

o The Time-Independent Schrödinger Equation

o The Time-Dependent Schrödinger Equation


Mathematical Background

o Operators

Operators and Quantum Mechanics

Basic Properties of Operators

Linear Operators

Eigenfunctions and Eigenvalues

Hermitian Operators

Unitary Operators

o Commutators in Quantum Mechanics

o Linear Vector Spaces in Quantum Mechanics

Postulates of Quantum Mechanics

Some Analytically Soluble Problems

o The Particle in a Box

o The Harmonic Oscillator

o The Rigid Rotor

o The Hydrogen Atom

Approximate Methods

o Perturbation Theory

o The Variational Method

Molecular Quantum Mechanics

o The Molecular Hamiltonian

o The Born-Oppenheimer Approximation

o Separation of the Nuclear Hamiltonian

Solving the Electronic Eigenvalue Problem

o The Nature of Many-Electron Wavefunctions

o Matrix Mechanics


Bibliography

About this document ...

The Motivation for Quantum Mechanics

Physicists at the end of the nineteenth century believed that most of the fundamental physical laws had been worked out. They expected only minor refinements to get ``an extra decimal place'' of accuracy. As it turns out, the field of physics was transformed profoundly in the early twentieth century by Einstein's discovery of relativity and by the development of quantum mechanics. While relativity has had fairly little impact on chemistry, all of theoretical chemistry is founded upon quantum mechanics.

The development of quantum mechanics was initially motivated by two observations which demonstrated the inadequacy of classical physics. These are the ``ultraviolet catastrophe'' and the photoelectric effect.

The Ultraviolet Catastrophe A blackbody is an idealized object which absorbs and emits all frequencies. Classical physics can be used to derive an equation which describes the intensity of blackbody radiation as a function of frequency for a fixed temperature--the result is known as the Rayleigh-Jeans law. Although the Rayleigh-Jeans law works for low frequencies, it diverges as $\nu^2$; this divergence for high frequencies is called the ultraviolet catastrophe.

Max Planck explained the blackbody radiation in 1900 by assuming that the energies of the oscillations of electrons which gave rise to the radiation must be proportional to integral multiples of the frequency, i.e.,

$E = nh\nu, \qquad n = 0, 1, 2, \ldots$   (1)

Using statistical mechanics, Planck derived an equation similar to the Rayleigh-Jeans equation, but with the adjustable parameter $h$. Planck found that for $h = 6.626 \times 10^{-34}$ J s, the experimental data could be reproduced. Nevertheless, Planck could not offer a good justification for his assumption of energy quantization. Physicists did not take this energy quantization idea seriously until Einstein invoked a similar assumption to explain the photoelectric effect.
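The contrast between the two laws is easy to see numerically. The sketch below (plain Python, SI constants) evaluates both spectral energy densities at a fixed temperature; the temperature and frequencies are illustrative choices, not values from the notes:

```python
import math

h = 6.626e-34  # Planck constant, J s
k = 1.381e-23  # Boltzmann constant, J/K
c = 2.998e8    # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral energy density: grows without bound as nu**2."""
    return 8 * math.pi * nu**2 * k * T / c**3

def planck(nu, T):
    """Planck's distribution: cut off exponentially at high frequency."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

T = 5000.0  # K
for nu in (1e12, 1e14, 1e15, 1e16):
    print(f"nu = {nu:.0e} Hz  RJ = {rayleigh_jeans(nu, T):.3e}  Planck = {planck(nu, T):.3e}")
```

At low frequency ($h\nu \ll kT$) the two expressions agree, but as $\nu$ grows the Rayleigh-Jeans density keeps climbing as $\nu^2$ while the Planck density falls off exponentially.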


The Photoelectric Effect In 1886 and 1887, Heinrich Hertz discovered that ultraviolet light can cause electrons to be ejected from a metal surface. According to the classical wave theory of light, the intensity of the light determines the amplitude of the wave, and so a greater light intensity should cause the electrons on the metal to oscillate more violently and to be ejected with a greater kinetic energy. In contrast, the experiment showed that the kinetic energy of the ejected electrons depends on the frequency of the light. The light intensity affects only the number of ejected electrons and not their kinetic energies.

Einstein tackled the problem of the photoelectric effect in 1905. Instead of assuming that the electronic oscillators had energies given by Planck's formula (1), Einstein assumed that the radiation itself consisted of packets of energy $E = h\nu$, which are now called photons. Einstein successfully explained the photoelectric effect using this assumption, and he calculated a value of $h$ close to that obtained by Planck.

Two years later, Einstein showed that not only is light quantized, but so are atomic vibrations. Classical physics predicts that the molar heat capacity at constant volume ($C_V$) of a crystal is $3R$, where $R$ is the molar gas constant. This works well for high temperatures, but for low temperatures $C_V$ actually falls to zero. Einstein was able to explain this result by assuming that the oscillations of atoms about their equilibrium positions are quantized according to $E = nh\nu$, Planck's quantization condition for electronic oscillators. This demonstrated that the energy quantization concept was important even for a system of atoms in a crystal, which should be well-modeled by a system of masses and springs (i.e., by classical mechanics).
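Einstein's result can be sketched numerically with the standard Einstein heat-capacity formula $C_V = 3R\,x^2 e^x/(e^x - 1)^2$, where $x = h\nu/k_B T$; the characteristic temperature $\theta_E = h\nu/k_B$ used below is an arbitrary illustrative value:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def einstein_cv(T, theta_E):
    """Einstein-model molar heat capacity for quantized lattice oscillators."""
    x = theta_E / T
    return 3 * R * x**2 * math.exp(x) / math.expm1(x)**2

theta_E = 300.0  # illustrative Einstein temperature, K
for T in (10, 100, 1000, 10000):
    print(f"T = {T:6d} K   C_V = {einstein_cv(T, theta_E):.4f} J/(mol K)")
```

At high temperature the result approaches the classical value $3R \approx 24.9$ J/(mol K), while at low temperature it falls toward zero, as observed experimentally.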

Quantization of Electronic Angular Momentum Rutherford proposed that electrons orbit about the nucleus of an atom. One problem with this model is that, classically, orbiting electrons experience a centripetal acceleration, and accelerating charges lose energy by radiating; a stable electronic orbit is classically forbidden. Bohr nevertheless assumed stable electronic orbits with the electronic angular momentum quantized as

$\ell = mvr = n\hbar, \qquad n = 1, 2, 3, \ldots$   (2)

Quantization of angular momentum means that the radius of the orbit and the energy will be quantized as well. Bohr assumed that the discrete lines seen in the spectrum of the hydrogen atom were due to transitions of an electron from one allowed orbit/energy to another. He further assumed that the energy for a transition is acquired or released in the form of a photon as proposed by Einstein, so that

$\Delta E = h\nu$   (3)


This is known as the Bohr frequency condition. This condition, along with Bohr's expression for the allowed energy levels, gives a good match to the observed hydrogen atom spectrum. However, it works only for atoms with one electron.
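Combining the Bohr frequency condition with the Bohr energy levels $E_n = -13.6\ \mathrm{eV}/n^2$ reproduces the visible (Balmer) lines of hydrogen. A short numerical sketch:

```python
h = 6.626e-34    # J s
c = 2.998e8      # m/s
eV = 1.602e-19   # J per electron volt
E_ion = 13.6057  # hydrogen ionization energy from n = 1, in eV

def energy(n):
    """Bohr-model energy of level n, in eV."""
    return -E_ion / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Bohr frequency condition: delta E = h nu = h c / lambda."""
    dE = (energy(n_upper) - energy(n_lower)) * eV  # J, positive for emission
    return h * c / dE * 1e9

for n in (3, 4, 5):
    print(f"n = {n} -> 2: {transition_wavelength_nm(n, 2):.1f} nm")
```

The computed wavelengths (about 656, 486, and 434 nm) match the observed Balmer series of atomic hydrogen.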

Wave-Particle Duality Einstein had shown that the momentum of a photon is

$p = \frac{h}{\lambda}$   (4)

This can be easily shown as follows. Assuming $E = h\nu$ for a photon and $\lambda\nu = c$ for an electromagnetic wave, we obtain

$E = h\nu = \frac{hc}{\lambda}$   (5)

Now we use Einstein's relativity result $E = mc^2$ to find

$\lambda = \frac{h}{mc}$   (6)

which is equivalent to equation (4). Note that $m$ refers to the relativistic mass, not the rest mass, since the rest mass of a photon is zero. Since light can behave both as a wave (it can be diffracted, and it has a wavelength), and as a particle (it contains packets of energy $E = h\nu$), de Broglie reasoned in 1924 that matter also can exhibit this wave-particle duality. He further reasoned that matter would obey the same equation (4) as light. In 1927, Davisson and Germer observed diffraction patterns by bombarding metals with electrons, confirming de Broglie's proposition.

de Broglie's equation offers a justification for Bohr's assumption (2). If we think of an electron as a wave, then for the electron orbit to be stable the wave must complete an integral number of wavelengths during its orbit. Otherwise, it would interfere destructively with itself. This condition may be written as

$2\pi r = n\lambda$   (7)


If we use the de Broglie relation (4), this can be rewritten as

$mvr = n\hbar$   (8)

which is identical to Bohr's equation (2).

Although de Broglie's equation justifies Bohr's quantization assumption, it also demonstrates a deficiency of Bohr's model. Heisenberg showed that the wave-particle duality leads to the famous uncertainty principle

$\Delta x \, \Delta p \geq \frac{\hbar}{2}$   (9)

One result of the uncertainty principle is that if the orbital radius $r$ of an electron in an atom is known exactly, then the angular momentum must be completely unknown. The problem with Bohr's model is that it specifies $r$ exactly and it also specifies that the orbital angular momentum must be an integral multiple of $\hbar$. Thus the stage was set for a new quantum theory which was consistent with the uncertainty principle.

The Schrödinger Equation In 1925, Erwin Schrödinger and Werner Heisenberg independently developed the new quantum theory. Schrödinger's method involves partial differential equations, whereas Heisenberg's method employs matrices; however, a year later the two methods were shown to be mathematically equivalent. Most textbooks begin with Schrödinger's equation, since it seems to have a better physical interpretation via the classical wave equation. Indeed, the Schrödinger equation can be viewed as a form of the wave equation applied to matter waves.

The Time-Independent Schrödinger Equation Here we follow the treatment of McQuarrie [1], Section 3-1. We start with the one-dimensional classical wave equation,

$\frac{\partial^2 u}{\partial x^2} = \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2}$   (10)


By introducing the separation of variables

$u(x, t) = \psi(x) f(t)$   (11)

we obtain

$\frac{d^2 \psi(x)}{dx^2} f(t) = \frac{1}{v^2} \psi(x) \frac{d^2 f(t)}{dt^2}$   (12)

If we introduce one of the standard wave equation solutions for $f(t)$ such as $e^{i\omega t}$ (the constant can be taken care of later in the normalization), we obtain

$\frac{d^2 \psi(x)}{dx^2} = \frac{-\omega^2}{v^2} \psi(x)$   (13)

Now we have an ordinary differential equation describing the spatial amplitude of the matter wave as a function of position. The energy of a particle is the sum of kinetic and potential parts

$E = \frac{p^2}{2m} + V(x)$   (14)

which can be solved for the momentum, $p$, to obtain

$p = \sqrt{2m[E - V(x)]}$   (15)

Now we can use the de Broglie formula (4) to get an expression for the wavelength


$\lambda = \frac{h}{p} = \frac{h}{\sqrt{2m[E - V(x)]}}$   (16)

The term $\omega^2 / v^2$ in equation (13) can be rewritten in terms of $\lambda$ if we recall that $\omega = 2\pi\nu$ and $\nu\lambda = v$:

$\frac{\omega^2}{v^2} = \frac{4\pi^2 \nu^2}{v^2} = \frac{4\pi^2}{\lambda^2}$   (17)

When this result is substituted into equation (13) we obtain the famous time-independent Schrödinger equation

$\frac{d^2 \psi(x)}{dx^2} + \frac{2m}{\hbar^2}[E - V(x)]\psi(x) = 0$   (18)

which is almost always written in the form

$-\frac{\hbar^2}{2m}\frac{d^2 \psi(x)}{dx^2} + V(x)\psi(x) = E\psi(x)$   (19)

This single-particle one-dimensional equation can easily be extended to the case of three dimensions, where it becomes

$-\frac{\hbar^2}{2m}\nabla^2 \psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r}) = E\psi(\mathbf{r})$   (20)

A two-body problem can also be treated by this equation if the mass $m$ is replaced with a reduced mass $\mu = m_1 m_2 / (m_1 + m_2)$.


It is important to point out that this analogy with the classical wave equation only goes so far. We cannot, for instance, derive the time-dependent Schrödinger equation in an analogous fashion (for instance, that equation involves the partial first derivative with respect to time instead of the partial second derivative). In fact, Schrödinger presented his time-independent equation first, and then went back and postulated the more general time-dependent equation.

The Time-Dependent Schrödinger Equation We are now ready to consider the time-dependent Schrödinger equation. Although we were able to derive the single-particle time-independent Schrödinger equation starting from the classical wave equation and the de Broglie relation, the time-dependent Schrödinger equation cannot be derived using elementary methods and is generally given as a postulate of quantum mechanics. It is possible to show that the time-dependent equation is at least reasonable if not derivable, but the arguments are rather involved (cf. Merzbacher [2], Section 3.2; Levine [3], Section 1.4).

The single-particle three-dimensional time-dependent Schrödinger equation is

$-\frac{\hbar^2}{2m}\nabla^2 \Psi(\mathbf{r}, t) + V(\mathbf{r})\Psi(\mathbf{r}, t) = i\hbar\frac{\partial \Psi(\mathbf{r}, t)}{\partial t}$   (21)

where $V$ is assumed to be a real function and represents the potential energy of the system (a complex function $V$ will act as a source or sink for probability, as shown in Merzbacher [2], problem 4.1). Wave Mechanics is the branch of quantum mechanics with equation (21) as its dynamical law. Note that equation (21) does not yet account for spin or relativistic effects.

Of course the time-dependent equation can be used to derive the time-independent equation. If we write the wavefunction as a product of spatial and temporal terms, $\Psi(\mathbf{r}, t) = \psi(\mathbf{r}) f(t)$, then equation (21) becomes

$\psi(\mathbf{r}) \, i\hbar \frac{df(t)}{dt} = f(t)\left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right]\psi(\mathbf{r})$   (22)

or

$\frac{i\hbar}{f(t)}\frac{df(t)}{dt} = \frac{1}{\psi(\mathbf{r})}\left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right]\psi(\mathbf{r})$   (23)


Since the left-hand side is a function of $t$ only and the right-hand side is a function of $\mathbf{r}$ only, the two sides must equal a constant. If we tentatively designate this constant $E$ (since the right-hand side clearly must have the dimensions of energy), then we extract two ordinary differential equations, namely

$\frac{1}{f(t)}\frac{df(t)}{dt} = -\frac{iE}{\hbar}$   (24)

and

$-\frac{\hbar^2}{2m}\nabla^2 \psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r}) = E\psi(\mathbf{r})$   (25)

The latter equation is once again the time-independent Schrödinger equation. The former equation is easily solved to yield

$f(t) = e^{-iEt/\hbar}$   (26)

The Hamiltonian in equation (25) is a Hermitian operator, and the eigenvalues of a Hermitian operator must be real, so $E$ is real. This means that the solutions $f(t)$ are purely oscillatory, since $f(t)$ never changes in magnitude (recall Euler's formula $e^{\pm i\theta} = \cos\theta \pm i\sin\theta$). Thus if

$\Psi(\mathbf{r}, t) = \psi(\mathbf{r}) e^{-iEt/\hbar}$   (27)

then the total wave function $\Psi(\mathbf{r}, t)$ differs from $\psi(\mathbf{r})$ only by a phase factor of constant magnitude. There are some interesting consequences of this. First of all, the quantity $|\Psi(\mathbf{r}, t)|^2$ is time independent, as we can easily show:


$|\Psi(\mathbf{r}, t)|^2 = \Psi^*(\mathbf{r}, t)\Psi(\mathbf{r}, t) = e^{iEt/\hbar}\psi^*(\mathbf{r}) \, e^{-iEt/\hbar}\psi(\mathbf{r}) = \psi^*(\mathbf{r})\psi(\mathbf{r})$   (28)

Secondly, the expectation value for any time-independent operator is also time-independent, if $\Psi$ satisfies equation (27). By the same reasoning applied above,

$\langle A \rangle = \int \Psi^* \hat{A} \Psi \, d\tau = \int \psi^* \hat{A} \psi \, d\tau$   (29)

For these reasons, wave functions of the form (27) are called stationary states. The state is ``stationary,'' but the particle it describes is not!

Of course equation (27) represents a particular solution to equation (21). The general solution to equation (21) will be a linear combination of these particular solutions, i.e.

$\Psi(\mathbf{r}, t) = \sum_i c_i e^{-iE_i t/\hbar} \psi_i(\mathbf{r})$

Postulates of Quantum Mechanics In this section, we will present six postulates of quantum mechanics. Again, we follow the presentation of McQuarrie [1], with the exception of postulate 6, which McQuarrie does not include. A few of the postulates have already been discussed in section 3.

Postulate 1. The state of a quantum mechanical system is completely specified by a function $\Psi(\mathbf{r}, t)$ that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that $\Psi^*(\mathbf{r}, t)\Psi(\mathbf{r}, t)\,d\tau$ is the probability that the particle lies in the volume element $d\tau$ located at $\mathbf{r}$ at time $t$.

The wavefunction must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition

$\int_{-\infty}^{\infty} \Psi^*(\mathbf{r}, t)\Psi(\mathbf{r}, t)\, d\tau = 1$   (110)


It is customary to also normalize many-particle wavefunctions to 1. The wavefunction must also be single-valued, continuous, and finite.

Postulate 2. To every observable in classical mechanics there corresponds a linear, Hermitian operator in quantum mechanics.

This postulate comes about because of the considerations raised in section 3.1.5: if we require that the expectation value of an operator $\hat{A}$ is real, then $\hat{A}$ must be a Hermitian operator. Some common operators occurring in quantum mechanics are collected in Table 1.

Table 1: Physical observables and their corresponding quantum operators (single particle)

Observable Name | Symbol | Operator | Operation
Position | $x$ | $\hat{x}$ | Multiply by $x$
Momentum | $p_x$ | $\hat{p}_x$ | $-i\hbar\frac{\partial}{\partial x}$
Kinetic energy | $T$ | $\hat{T}$ | $-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)$
Potential energy | $V(x)$ | $\hat{V}$ | Multiply by $V(x)$
Total energy | $E$ | $\hat{H}$ | $-\frac{\hbar^2}{2m}\nabla^2 + V(x, y, z)$
Angular momentum | $l_z$ | $\hat{l}_z$ | $-i\hbar\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right)$, and cyclic permutations for $\hat{l}_x$, $\hat{l}_y$

Postulate 3. In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$, which satisfy the eigenvalue equation

$\hat{A}\Psi = a\Psi$   (111)


This postulate captures the central point of quantum mechanics--the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of $\hat{A}$ with eigenvalue $a$, then any measurement of the quantity $A$ will yield $a$.

Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of $\hat{A}$ initially. An arbitrary state can be expanded in the complete set of eigenvectors of $\hat{A}$ ($\hat{A}\psi_i = a_i \psi_i$) as

$\Psi = \sum_i^n c_i \psi_i$   (112)

where $n$ may go to infinity. In this case we only know that the measurement of $A$ will yield one of the values $a_i$, but we don't know which one. However, we do know the probability that eigenvalue $a_i$ will occur--it is the absolute value squared of the coefficient, $|c_i|^2$ (cf. section 3.1.4), leading to the fourth postulate below.

An important second half of the third postulate is that, after measurement of $\Psi$ yields some eigenvalue $a_i$, the wavefunction immediately ``collapses'' into the corresponding eigenstate $\psi_i$ (in the case that $a_i$ is degenerate, then $\Psi$ becomes the projection of $\Psi$ onto the degenerate subspace). Thus, measurement affects the state of the system. This fact is used in many elaborate experimental tests of quantum mechanics.

Postulate 4. If a system is in a state described by a normalized wave function $\Psi$, then the average value of the observable corresponding to $\hat{A}$ is given by

$\langle A \rangle = \int_{-\infty}^{\infty} \Psi^* \hat{A} \Psi \, d\tau$   (113)

Postulate 5. The wavefunction or state function of a system evolves in time according to the time-dependent Schrödinger equation

$\hat{H}\Psi(\mathbf{r}, t) = i\hbar\frac{\partial \Psi}{\partial t}$   (114)

The central equation of quantum mechanics must be accepted as a postulate, as discussed in section 2.2.


Postulate 6. The total wavefunction must be antisymmetric with respect to the interchange of all coordinates of one fermion with those of another. Electronic spin must be included in this set of coordinates.

The Pauli exclusion principle is a direct result of this antisymmetry principle. We will later see that Slater determinants provide a convenient means of enforcing this property on electronic wavefunctions.

The Particle in a Box Consider a particle constrained to move in a single dimension, under the influence of a potential $V(x)$ which is zero for $0 \leq x \leq a$ and infinite elsewhere. Since the wavefunction is not allowed to become infinite, it must have a value of zero where $V(x)$ is infinite, so $\psi(x)$ is nonzero only within $0 \leq x \leq a$. The Schrödinger equation is thus

$-\frac{\hbar^2}{2m}\frac{d^2 \psi(x)}{dx^2} = E\psi(x), \qquad 0 \leq x \leq a$   (115)

It is easy to show that the eigenvectors and eigenvalues of this problem are

$\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi x}{a}\right), \qquad n = 1, 2, \ldots$   (116)

$E_n = \frac{h^2 n^2}{8ma^2}, \qquad n = 1, 2, \ldots$   (117)

Extending the problem to three dimensions is rather straightforward; see McQuarrie [1], section 6.1.
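These formulas are easy to explore numerically. The sketch below evaluates the first few energies of equation (117) for an electron in a 1 nm box (an illustrative size) and checks the normalization of $\psi_1$ by numerical integration:

```python
import math

h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
eV = 1.602e-19   # J per electron volt

def pib_energy(n, m, a):
    """Equation (117): E_n = h^2 n^2 / (8 m a^2)."""
    return h**2 * n**2 / (8 * m * a**2)

a = 1.0e-9  # a 1 nm box
for n in (1, 2, 3):
    print(f"E_{n} = {pib_energy(n, m_e, a) / eV:.3f} eV")  # grows as n^2

# Normalization check: midpoint-rule integral of |psi_1|^2 over [0, a]
N = 100000
dx = a / N
s = sum((2 / a) * math.sin(math.pi * (i + 0.5) * dx / a)**2 * dx for i in range(N))
print(s)  # ~1
```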

The Harmonic Oscillator

Now consider a particle subject to a restoring force $F = -kx$, as might arise for a mass-spring system obeying Hooke's Law. The potential is then

$V(x) = -\int_0^x (-kx')\,dx' = \frac{1}{2}kx^2 + V(0)$   (118)

If we choose the energy scale such that $V(0) = 0$ then $V(x) = \frac{1}{2}kx^2$. This potential is also appropriate for describing the interaction of two masses connected by an ideal spring. In this case, we let $x$ be the distance between the masses, and for the mass we substitute the reduced mass $\mu$. Thus the harmonic oscillator is the simplest model for the vibrational motion of the atoms in a diatomic molecule, if we consider the two atoms as point masses and the bond between them as a spring. The one-dimensional Schrödinger equation becomes

$-\frac{\hbar^2}{2\mu}\frac{d^2 \psi(x)}{dx^2} + \frac{1}{2}kx^2\psi(x) = E\psi(x)$   (119)

After some effort, the eigenfunctions are

$\psi_v(x) = N_v H_v(\alpha^{1/2} x)\, e^{-\alpha x^2 / 2}$   (120)

where $H_v$ is the Hermite polynomial of degree $v$, and $\alpha$ and $N_v$ are defined by

$\alpha = \frac{\sqrt{k\mu}}{\hbar}, \qquad N_v = \frac{1}{\sqrt{2^v\, v!}}\left(\frac{\alpha}{\pi}\right)^{1/4}$   (121)

The eigenvalues are

$E_v = \hbar\omega\left(v + \frac{1}{2}\right), \qquad \omega = \sqrt{k/\mu}$   (122)


with $v = 0, 1, 2, \ldots$.
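The evenly spaced eigenvalues can be verified numerically by diagonalizing a finite-difference representation of the Hamiltonian on a grid. In dimensionless units with $\hbar = \mu = k = 1$ (so $\omega = 1$), the exact eigenvalues are $v + 1/2$:

```python
import numpy as np

# Grid for the dimensionless harmonic oscillator (hbar = mu = k = 1)
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy -1/2 d^2/dx^2 by a three-point central finite difference
T = -0.5 * (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
            + np.diag(np.ones(N - 1), -1)) / dx**2
V = np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(T + V)
print(E[:4])  # close to [0.5, 1.5, 2.5, 3.5]
```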

The Rigid Rotor The rigid rotor is a simple model of a rotating diatomic molecule. We consider the diatomic to consist of two point masses at a fixed internuclear distance. We then reduce the model to a one-dimensional system by considering the rigid rotor to have one mass fixed at the origin, which is orbited by the reduced mass $\mu$ at a distance $r$. The Schrödinger equation is (cf. McQuarrie [1], section 6.4 for a clear explanation)

$-\frac{\hbar^2}{2I}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right]\psi(\theta, \phi) = E\psi(\theta, \phi)$   (123)

where $I = \mu r^2$ is the moment of inertia. After a little effort, the eigenfunctions can be shown to be the spherical harmonics $Y_J^M(\theta, \phi)$, defined by

$Y_J^M(\theta, \phi) = \left[\frac{(2J+1)}{4\pi}\frac{(J - |M|)!}{(J + |M|)!}\right]^{1/2} P_J^{|M|}(\cos\theta)\, e^{iM\phi}$   (124)

where $P_J^{|M|}$ are the associated Legendre functions. The eigenvalues are simply

$E_J = \frac{\hbar^2}{2I}J(J+1), \qquad J = 0, 1, 2, \ldots$   (125)

Each energy level $E_J$ is $(2J+1)$-fold degenerate in $M$, since $M$ can have values $-J, -J+1, \ldots, J-1, J$.
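A tiny sketch of the resulting level structure, with energies in units of $\hbar^2/2I$:

```python
# E_J in units of hbar^2/(2I); each level is (2J+1)-fold degenerate in M
levels = [(J, J * (J + 1), 2 * J + 1) for J in range(5)]
for J, E, g in levels:
    print(f"J = {J}: E = {E:2d}, degeneracy = {g}, M in {list(range(-J, J + 1))}")
```

Note that the spacing between adjacent levels grows linearly with $J$, which is why rotational spectra show nearly evenly spaced lines.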


The Hydrogen Atom Finally, consider the hydrogen atom as a proton fixed at the origin, orbited by an electron of reduced mass $\mu$. The potential due to electrostatic attraction is

$V(r) = -\frac{e^2}{4\pi\epsilon_0 r}$   (126)

in SI units. The kinetic energy term in the Hamiltonian is

$\hat{T} = -\frac{\hbar^2}{2\mu}\nabla^2$   (127)

so we write out the Schrödinger equation in spherical polar coordinates as

$-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\phi^2}\right] - \frac{e^2}{4\pi\epsilon_0 r}\psi = E\psi$   (128)

It happens that we can factor $\psi(r, \theta, \phi)$ into $R_{nl}(r) Y_l^m(\theta, \phi)$, where $Y_l^m(\theta, \phi)$ are again the spherical harmonics. The radial part $R_{nl}(r)$ then can be shown to obey the equation

$-\frac{\hbar^2}{2\mu r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + \left[\frac{\hbar^2 l(l+1)}{2\mu r^2} - \frac{e^2}{4\pi\epsilon_0 r} - E\right]R(r) = 0$   (129)

which is called the radial equation for the hydrogen atom. Its (messy) solutions are

$R_{nl}(r) = -\left[\frac{(n - l - 1)!}{2n[(n+l)!]^3}\right]^{1/2}\left(\frac{2}{na_0}\right)^{l + 3/2} r^l\, e^{-r/na_0}\, L_{n+l}^{2l+1}\!\left(\frac{2r}{na_0}\right)$   (130)


where $0 \leq l \leq n - 1$, and $a_0 = \frac{4\pi\epsilon_0\hbar^2}{\mu e^2}$ is the Bohr radius. The functions $L_{n+l}^{2l+1}$ are the associated Laguerre functions. The hydrogen atom eigenvalues are

$E_n = -\frac{e^2}{8\pi\epsilon_0 a_0 n^2}, \qquad n = 1, 2, \ldots$   (131)

There are relatively few other interesting problems that can be solved analytically. For molecular systems, one must resort to approximate solutions.
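In atomic units, equation (131) reduces to $E_n = -1/(2n^2)$ hartree, or $-13.6\ \mathrm{eV}/n^2$. A short sketch of the level spacing and degeneracy pattern:

```python
E_h = 13.6057  # |E_1| in eV; equivalently 1/2 hartree

def E_n(n):
    """Hydrogen bound-state energy in eV: E_n = -|E_1| / n^2."""
    return -E_h / n**2

for n in range(1, 6):
    print(n, round(E_n(n), 3), "eV")  # levels crowd together toward E = 0

# Ignoring spin, level n is n^2-fold degenerate: sum over l of (2l+1)
degeneracies = [sum(2 * l + 1 for l in range(n)) for n in range(1, 5)]
print(degeneracies)  # [1, 4, 9, 16]
```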

Approximate Methods The problems discussed in the previous section (harmonic oscillator, rigid rotor, etc.) are some of the few quantum mechanics problems which can be solved analytically. For the vast majority of chemical applications, the Schrödinger equation must be solved by approximate methods. The two primary approximation techniques are the variational method and perturbation theory.

Subsections

Perturbation Theory

The Variational Method

Perturbation Theory

The basic idea of perturbation theory is very simple: we split the Hamiltonian into a piece we know how to solve (the ``reference'' or ``unperturbed'' Hamiltonian) and a piece we don't know how to solve (the ``perturbation''). As long as the perturbation is small compared to the unperturbed Hamiltonian, perturbation theory tells us how to correct the solutions to the unperturbed problem to approximately account for the influence of the perturbation. For example, perturbation theory can be used to approximately solve an anharmonic oscillator problem with the Hamiltonian

$\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}kx^2 + \frac{1}{6}\gamma x^3$   (132)


Here, since we know how to solve the harmonic oscillator problem (see 5.2), we make that part the unperturbed Hamiltonian (denoted $\hat{H}^{(0)}$), and the new, anharmonic term is the perturbation (denoted $\hat{H}^{(1)}$):

$\hat{H}^{(0)} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}kx^2$   (133)

$\hat{H}^{(1)} = \frac{1}{6}\gamma x^3$   (134)

Perturbation theory solves such a problem in two steps. First, obtain the eigenfunctions and eigenvalues of the unperturbed Hamiltonian, $\hat{H}^{(0)}$:

$\hat{H}^{(0)}\Psi_n^{(0)} = E_n^{(0)}\Psi_n^{(0)}$   (135)

Second, correct these eigenvalues and/or eigenfunctions to account for the perturbation's influence. Perturbation theory gives these corrections as an infinite series of terms, which become smaller and smaller for well-behaved systems:

$E_n = E_n^{(0)} + E_n^{(1)} + E_n^{(2)} + \cdots$   (136)

$\Psi_n = \Psi_n^{(0)} + \Psi_n^{(1)} + \Psi_n^{(2)} + \cdots$   (137)

Quite frequently, the corrections are only taken through first or second order (i.e., superscripts (1) or (2)). According to perturbation theory, the first-order correction to the energy is

$E_n^{(1)} = \langle \Psi_n^{(0)} | \hat{H}^{(1)} | \Psi_n^{(0)} \rangle$   (138)

and the second-order correction is


$E_n^{(2)} = \langle \Psi_n^{(0)} | \hat{H}^{(1)} | \Psi_n^{(1)} \rangle$   (139)

One can see that the first-order correction to the wavefunction, $\Psi_n^{(1)}$, seems to be needed to compute the second-order energy correction. However, it turns out that the correction $\Psi_n^{(1)}$ can be written in terms of the zeroth-order wavefunction as

$\Psi_n^{(1)} = \sum_{m \neq n} \Psi_m^{(0)} \frac{\langle \Psi_m^{(0)} | \hat{H}^{(1)} | \Psi_n^{(0)} \rangle}{E_n^{(0)} - E_m^{(0)}}$   (140)

Substituting this in the expression for $E_n^{(2)}$, we obtain

$E_n^{(2)} = \sum_{m \neq n} \frac{\left|\langle \Psi_m^{(0)} | \hat{H}^{(1)} | \Psi_n^{(0)} \rangle\right|^2}{E_n^{(0)} - E_m^{(0)}}$   (141)

Going back to the anharmonic oscillator example, the ground state wavefunction for the unperturbed problem is just (from section 5.2)

$\Psi_0^{(0)} = \left(\frac{\alpha}{\pi}\right)^{1/4} e^{-\alpha x^2 / 2}$   (142)

$\alpha = \frac{\sqrt{km}}{\hbar}$   (143)

$E_0^{(0)} = \frac{1}{2}\hbar\omega$   (144)


$E_0^{(1)} = \langle \Psi_0^{(0)} | \tfrac{1}{6}\gamma x^3 | \Psi_0^{(0)} \rangle = \frac{\gamma}{6}\left(\frac{\alpha}{\pi}\right)^{1/2}\int_{-\infty}^{\infty} x^3 e^{-\alpha x^2}\, dx$   (145)

It turns out in this case that $E_0^{(1)} = 0$, since the integrand is odd. Does this mean that the anharmonic energy levels are the same as for the harmonic oscillator? No, because there are higher-order corrections such as $E_0^{(2)}$ which are not necessarily zero.
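The vanishing of the first-order correction is easy to confirm numerically; the values of $\alpha$ and $\gamma$ below are arbitrary illustrative choices:

```python
import numpy as np

alpha, gamma = 1.0, 0.1  # illustrative values

x = np.linspace(-10.0, 10.0, 200001)  # symmetric grid including x = 0
dx = x[1] - x[0]
rho0 = np.sqrt(alpha / np.pi) * np.exp(-alpha * x**2)  # |Psi_0^(0)|^2

# First-order energy: integral of |Psi0|^2 * (gamma/6) x^3 -- an odd integrand
E1_odd = np.sum(rho0 * (gamma / 6) * x**3) * dx
print(E1_odd)  # 0 to machine precision

# By contrast, an even perturbation like x^4 survives at first order;
# analytically this integral is (gamma/6) * 3/(4 alpha^2)
E1_even = np.sum(rho0 * (gamma / 6) * x**4) * dx
print(E1_even)
```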

The Variational Method

The variational method is the other main approximate method used in quantum mechanics. Compared to perturbation theory, the variational method can be more robust in situations where it's hard to determine a good unperturbed Hamiltonian (i.e., one which makes the perturbation small but is still solvable). On the other hand, in cases where there is a good unperturbed Hamiltonian, perturbation theory can be more efficient than the variational method.

The basic idea of the variational method is to guess a ``trial'' wavefunction for the problem, which consists of some adjustable parameters called ``variational parameters.'' These parameters are adjusted until the energy of the trial wavefunction is minimized. The resulting trial wavefunction and its corresponding energy are variational method approximations to the exact wavefunction and energy.

Why would it make sense that the best approximate trial wavefunction is the one with the lowest energy? This results from the Variational Theorem, which states that the energy of any trial wavefunction is always an upper bound to the exact ground state energy $E_0$. This can be proven easily. Let the trial wavefunction be denoted $\tilde{\Psi}$. Any trial function can formally be expanded as a linear combination of the exact eigenfunctions $\Psi_i$. Of course, in practice, we don't know the $\Psi_i$, since we're assuming that we're applying the variational method to a problem we can't solve analytically. Nevertheless, that doesn't prevent us from using the exact eigenfunctions in our proof, since they certainly exist and form a complete set, even if we don't happen to know them. So, the trial wavefunction can be written

$\tilde{\Psi} = \sum_i c_i \Psi_i$   (146)


and the approximate energy corresponding to this wavefunction is

$\tilde{E} = \frac{\int \tilde{\Psi}^* \hat{H} \tilde{\Psi}\, d\tau}{\int \tilde{\Psi}^* \tilde{\Psi}\, d\tau}$   (147)

Substituting the expansion over the exact wavefunctions,

$\tilde{E} = \frac{\sum_{ij} c_i^* c_j \int \Psi_i^* \hat{H} \Psi_j\, d\tau}{\sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_j\, d\tau}$   (148)

Since the functions $\Psi_j$ are the exact eigenfunctions of $\hat{H}$, we can use $\hat{H}\Psi_j = E_j \Psi_j$ to obtain

$\tilde{E} = \frac{\sum_{ij} c_i^* c_j E_j \int \Psi_i^* \Psi_j\, d\tau}{\sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_j\, d\tau}$   (149)

Now using the fact that eigenfunctions of a Hermitian operator form an orthonormal set (or can be made to do so),

$\tilde{E} = \frac{\sum_i |c_i|^2 E_i}{\sum_i |c_i|^2}$   (150)

We now subtract the exact ground state energy $E_0$ from both sides to obtain

$\tilde{E} - E_0 = \frac{\sum_i |c_i|^2 (E_i - E_0)}{\sum_i |c_i|^2}$   (151)

Since every term on the right-hand side is greater than or equal to zero, the left-hand side must also be greater than or equal to zero, or


$\tilde{E} \geq E_0$   (152)

In other words, the energy of any approximate wavefunction is always greater than or equal to the exact ground state energy $E_0$. This explains the strategy of the variational method: since the energy of any approximate trial function is always above the true energy, then any variations in the trial function which lower its energy are necessarily making the approximate energy closer to the exact answer. (The trial wavefunction is also a better approximation to the true ground state wavefunction as the energy is lowered, although not necessarily in every possible sense unless the limit $\tilde{\Psi} = \Psi_0$ is reached.)

One example of the variational method would be using the Gaussian function $\tilde{\Psi}(r) = e^{-\alpha r^2}$ as a trial function for the hydrogen atom ground state. This problem could be solved by the variational method by obtaining the energy of $\tilde{\Psi}(r)$ as a function of the variational parameter $\alpha$, and then minimizing $\tilde{E}(\alpha)$ to find the optimum value $\alpha_{\mathrm{min}}$. The variational theorem's approximate wavefunction and energy for the hydrogen atom would then be $\tilde{\Psi}(r) = e^{-\alpha_{\mathrm{min}} r^2}$ and $\tilde{E}(\alpha_{\mathrm{min}})$.
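Carrying this example through in atomic units: evaluating $\langle T \rangle$ and $\langle V \rangle$ over the normalized Gaussian gives, analytically, $\tilde{E}(\alpha) = \frac{3\alpha}{2} - 2\sqrt{2\alpha/\pi}$. A simple grid minimization recovers the known optimum $\alpha_{\mathrm{min}} = 8/(9\pi)$ and $\tilde{E} = -4/(3\pi) \approx -0.424$ hartree:

```python
import numpy as np

def E_trial(alpha):
    """Energy of the Gaussian trial function exp(-alpha r^2), in hartree."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

alphas = np.linspace(0.01, 1.0, 100000)
i = int(np.argmin(E_trial(alphas)))
print(alphas[i], E_trial(alphas[i]))
# alpha_min ~ 8/(9 pi) = 0.283, E_min ~ -4/(3 pi) = -0.424 hartree
```

The minimum lies above the exact ground state energy of $-0.5$ hartree, exactly as the variational theorem requires.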

Frequently, the trial function is written as a linear combination of basis functions, such as

$\tilde{\Psi} = \sum_i^N c_i \phi_i$   (153)

This leads to the linear variation method, and the variational parameters are the expansion coefficients $c_i$. The energy for this approximate wavefunction is just

$\tilde{E} = \frac{\sum_{ij} c_i^* c_j \int \phi_i^* \hat{H} \phi_j\, d\tau}{\sum_{ij} c_i^* c_j \int \phi_i^* \phi_j\, d\tau}$   (154)

which can be simplified using the notation


$H_{ij} = \int \phi_i^* \hat{H} \phi_j\, d\tau$   (155)

$S_{ij} = \int \phi_i^* \phi_j\, d\tau$   (156)

to yield

$\tilde{E} = \frac{\sum_{ij} c_i^* c_j H_{ij}}{\sum_{ij} c_i^* c_j S_{ij}}$   (157)

Differentiating this energy with respect to the expansion coefficients $c_i$ yields a non-trivial solution only if the following ``secular determinant'' equals 0.

$\left| \begin{array}{cccc} H_{11} - \tilde{E}S_{11} & H_{12} - \tilde{E}S_{12} & \cdots & H_{1N} - \tilde{E}S_{1N} \\ H_{21} - \tilde{E}S_{21} & H_{22} - \tilde{E}S_{22} & \cdots & H_{2N} - \tilde{E}S_{2N} \\ \vdots & \vdots & & \vdots \\ H_{N1} - \tilde{E}S_{N1} & H_{N2} - \tilde{E}S_{N2} & \cdots & H_{NN} - \tilde{E}S_{NN} \end{array} \right| = 0$   (158)

If an orthonormal basis is used, the secular equation is greatly simplified because $S_{ij}$ is 1 for $i = j$ and 0 for $i \neq j$. In this case, the secular determinant is

$\left| \begin{array}{cccc} H_{11} - \tilde{E} & H_{12} & \cdots & H_{1N} \\ H_{21} & H_{22} - \tilde{E} & \cdots & H_{2N} \\ \vdots & \vdots & & \vdots \\ H_{N1} & H_{N2} & \cdots & H_{NN} - \tilde{E} \end{array} \right| = 0$   (159)

In either case, the secular determinant for $N$ basis functions gives an $N$-th order polynomial in $\tilde{E}$ which is solved for $N$ different roots, each of which approximates a different eigenvalue.
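In the orthonormal case, finding the roots of the secular determinant is the same as diagonalizing the matrix $\mathbf{H}$. A small sketch with an arbitrary symmetric $3\times 3$ matrix (the numbers are illustrative; only the structure of the calculation matters):

```python
import numpy as np

# Toy Hamiltonian matrix H_ij in an orthonormal basis (S_ij = delta_ij)
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])

# The N roots of det(H - E*1) = 0 are the eigenvalues of H
roots = np.linalg.eigvalsh(H)
print(roots)  # ascending order; the lowest is the variational ground-state estimate

# Check: each root makes the secular determinant vanish (to numerical precision)
for E in roots:
    print(abs(np.linalg.det(H - E * np.eye(3))))
```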


The variational method lies behind Hartree-Fock theory and the configuration interaction method for the electronic structure of atoms and molecules.

Molecular Quantum Mechanics In this section, we discuss the quantum mechanics of atomic and molecular systems. We begin by writing the Hamiltonian for a collection of nuclei and electrons, and then we introduce the Born-Oppenheimer approximation, which allows us to separate the nuclear and electronic degrees of freedom.

Subsections

The Molecular Hamiltonian

The Born-Oppenheimer Approximation

Separation of the Nuclear Hamiltonian


The Molecular Hamiltonian We have noted before that the kinetic energy for a system of particles is

$\hat{T} = -\frac{\hbar^2}{2}\sum_i \frac{1}{m_i}\nabla_i^2$   (160)

The potential energy for a system of charged particles is


(161)

For a molecule, it is reasonable to split the kinetic energy into two summations--one over electrons, and one over nuclei. Similarly, we can split the potential energy into terms representing interactions

between nuclei, between electrons, or between electrons and nuclei. Using and to index

electrons, and and to index nuclei, we have (in atomic units)

(162)

where , , and . This is known as the ``exact'' nonrelativistic Hamiltonian in field-free space. However, it is important to remember that this Hamiltonian neglects at least two effects. Firstly, although the speed of an electron in a hydrogen atom is less than 1% of the speed of light, relativistic mass corrections can become appreciable for the inner electrons of heavier atoms. Secondly, we have neglected the spin-orbit effects. From the point of view of an electron, it is being orbited by a nucleus which produces a magnetic field (proportional to L); this field interacts with the electron's magnetic moment

(proportional to S), giving rise to a spin-orbit interaction (proportional to for a diatomic.) Although spin-orbit effects can be important, they are generally neglected in quantum chemical calculations.
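The nuclear-nuclear repulsion term of (162) is the easiest to evaluate, and it illustrates the atomic-unit conventions used above. A short numpy sketch for a hypothetical water-like geometry (charges and coordinates are illustrative values only, in bohr):

```python
import numpy as np

Z = np.array([8.0, 1.0, 1.0])                    # nuclear charges: O, H, H
R = np.array([[0.000,  0.000, 0.000],            # coordinates in bohr
              [0.000,  1.431, 1.108],
              [0.000, -1.431, 1.108]])

def nuclear_repulsion(Z, R):
    """sum_{A>B} Z_A Z_B / R_AB in atomic units (hartree)."""
    E = 0.0
    for A in range(len(Z)):
        for B in range(A):
            E += Z[A] * Z[B] / np.linalg.norm(R[A] - R[B])
    return E

E_NN = nuclear_repulsion(Z, R)   # a positive constant for fixed nuclei
```

For fixed nuclei this term is just a constant shift of the energy, a fact used repeatedly in the Born-Oppenheimer discussion below.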

The Born-Oppenheimer Approximation We know that if a Hamiltonian is separable into two or more terms, then the total eigenfunctions are products of the individual eigenfunctions of the separated Hamiltonian terms, and the total eigenvalues are sums of individual eigenvalues of the separated Hamiltonian terms.

Consider, for example, a Hamiltonian which is separable into two terms, one involving coordinate $q_1$ and the other involving coordinate $q_2$,

\hat{H} = \hat{H}_1(q_1) + \hat{H}_2(q_2)    (163)

with the overall Schrödinger equation being

\hat{H} \psi(q_1, q_2) = E \psi(q_1, q_2)    (164)

If we assume that the total wavefunction can be written in the form $\psi(q_1, q_2) = \psi_1(q_1) \psi_2(q_2)$, where $\psi_1$ and $\psi_2$ are eigenfunctions of $\hat{H}_1$ and $\hat{H}_2$ with eigenvalues $E_1$ and $E_2$, then

\hat{H} \psi(q_1, q_2) = ( \hat{H}_1 + \hat{H}_2 ) \psi_1(q_1) \psi_2(q_2) = E_1 \psi_1(q_1) \psi_2(q_2) + E_2 \psi_1(q_1) \psi_2(q_2) = (E_1 + E_2) \psi_1(q_1) \psi_2(q_2) = E \psi(q_1, q_2)    (165)

Thus the eigenfunctions of $\hat{H}$ are products of the eigenfunctions of $\hat{H}_1$ and $\hat{H}_2$, and the eigenvalues are the sums of eigenvalues of $\hat{H}_1$ and $\hat{H}_2$.
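The product-eigenfunction, sum-eigenvalue statement can be checked numerically: in a finite basis a separable Hamiltonian is $\mathbf{H} = \mathbf{H}_1 \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{H}_2$, a Kronecker-product construction (the matrices below are random Hermitian matrices, used only to illustrate the theorem):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

# H = H1(q1) + H2(q2) acting on the product space:
H1, H2 = random_hermitian(4), random_hermitian(5)
H = np.kron(H1, np.eye(5)) + np.kron(np.eye(4), H2)

E1 = np.linalg.eigvalsh(H1)
E2 = np.linalg.eigvalsh(H2)
E = np.linalg.eigvalsh(H)

# The 20 total eigenvalues are exactly all pairwise sums E1[i] + E2[j].
sums = np.sort((E1[:, None] + E2[None, :]).ravel())
assert np.allclose(E, sums)
```

The eigenvectors of $\mathbf{H}$ are correspondingly Kronecker (tensor) products of the eigenvectors of $\mathbf{H}_1$ and $\mathbf{H}_2$, the discrete analogue of $\psi_1(q_1)\psi_2(q_2)$.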

If we examine the nonrelativistic Hamiltonian (162), we see that the term

-\sum_{A,i} \frac{Z_A}{R_{Ai}}    (166)

prevents us from cleanly separating the electronic and nuclear coordinates and writing the total wavefunction as $\psi(\mathbf{r}, \mathbf{R}) = \psi_e(\mathbf{r}) \psi_N(\mathbf{R})$, where $\mathbf{r}$ represents the set of all electronic coordinates, and $\mathbf{R}$ represents the set of all nuclear coordinates. The Born-Oppenheimer approximation is to assume that this separation is nevertheless approximately correct.

Qualitatively, the Born-Oppenheimer approximation rests on the fact that the nuclei are much more massive than the electrons. This allows us to say that the nuclei are nearly fixed with respect to electron motion. We can fix $\mathbf{R}$, the nuclear configuration, at some value $\mathbf{R}_a$, and solve for $\psi_e(\mathbf{r}; \mathbf{R}_a)$; the electronic wavefunction depends only parametrically on $\mathbf{R}$. If we do this for a range of $\mathbf{R}$, we obtain the potential energy curve along which the nuclei move.

We now show the mathematical details. Let us abbreviate the molecular Hamiltonian as

\hat{H} = \hat{T}_N(\mathbf{R}) + \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}, \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (167)

where the meaning of the individual terms should be obvious. Initially, $\hat{T}_N(\mathbf{R})$ can be neglected since $\hat{T}_N$ is smaller than $\hat{T}_e$ by a factor of $M_A / \mu$, where $\mu$ is the mass of an electron. Thus for a fixed nuclear configuration, we have

\hat{H}_{el} = \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}; \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (168)

such that

\hat{H}_{el} \psi_e(\mathbf{r}; \mathbf{R}) = E_{el} \psi_e(\mathbf{r}; \mathbf{R})    (169)

This is the ``clamped-nuclei'' Schrödinger equation. Quite frequently $\hat{V}_{NN}(\mathbf{R})$ is neglected in the above equation, which is justified since in this case $\mathbf{R}$ is just a parameter so that $\hat{V}_{NN}(\mathbf{R})$ is just a constant and shifts the eigenvalues only by some constant amount. Leaving $\hat{V}_{NN}(\mathbf{R})$ out of the electronic Schrödinger equation leads to a similar equation,

\hat{H}_e = \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}; \mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (170)

\hat{H}_e \phi_e(\mathbf{r}; \mathbf{R}) = E_e \phi_e(\mathbf{r}; \mathbf{R})    (171)

where we have used a new subscript ``e'' on the electronic Hamiltonian and energy to distinguish from the case where $\hat{V}_{NN}$ is included.

We now consider again the original Hamiltonian (167). If we insert a wavefunction of the form $\psi(\mathbf{r}, \mathbf{R}) = \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})$, we obtain

\hat{H} \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R}) = E_{tot} \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})    (172)

\left[ \hat{T}_N(\mathbf{R}) + \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}, \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r}) \right] \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R}) = E_{tot} \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})    (173)

Since $\hat{T}_e$ contains no $\mathbf{R}$ dependence,

\hat{T}_e(\mathbf{r}) \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R}) = \psi_N(\mathbf{R}) \hat{T}_e(\mathbf{r}) \psi_e(\mathbf{r}; \mathbf{R})    (174)

However, we may not immediately assume

\hat{T}_N(\mathbf{R}) \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R}) = \psi_e(\mathbf{r}; \mathbf{R}) \hat{T}_N(\mathbf{R}) \psi_N(\mathbf{R})    (175)

(this point is tacitly assumed by most introductory textbooks). By the chain rule,

\hat{T}_N(\mathbf{R}) \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R}) = -\sum_A \frac{1}{2 M_A} \left[ \psi_e(\mathbf{r}; \mathbf{R}) \nabla_A^2 \psi_N(\mathbf{R}) + 2 \nabla_A \psi_e(\mathbf{r}; \mathbf{R}) \cdot \nabla_A \psi_N(\mathbf{R}) + \psi_N(\mathbf{R}) \nabla_A^2 \psi_e(\mathbf{r}; \mathbf{R}) \right]    (176)

Using these facts, along with the electronic Schrödinger equation,

\left[ \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}; \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r}) \right] \psi_e(\mathbf{r}; \mathbf{R}) = \hat{H}_{el} \psi_e(\mathbf{r}; \mathbf{R}) = E_{el} \psi_e(\mathbf{r}; \mathbf{R})    (177)

we simplify (173) to

\psi_e(\mathbf{r}; \mathbf{R}) \hat{T}_N(\mathbf{R}) \psi_N(\mathbf{R}) + \psi_N(\mathbf{R}) E_{el} \psi_e(\mathbf{r}; \mathbf{R}) - \sum_A \frac{1}{2 M_A} \left[ 2 \nabla_A \psi_e(\mathbf{r}; \mathbf{R}) \cdot \nabla_A \psi_N(\mathbf{R}) + \psi_N(\mathbf{R}) \nabla_A^2 \psi_e(\mathbf{r}; \mathbf{R}) \right] = E_{tot} \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})    (178)

We must now estimate the magnitude of the last term in brackets. Following Steinfeld [5], a typical contribution has the form $\frac{1}{2 M_A} \nabla_A^2 \psi_e(\mathbf{r}; \mathbf{R})$, but $\nabla_A^2 \psi_e(\mathbf{r}; \mathbf{R})$ is of the same order as $\nabla_i^2 \psi_e(\mathbf{r}; \mathbf{R})$ since the derivatives operate over approximately the same dimensions. The latter is of order $p_e^2 \psi_e(\mathbf{r}; \mathbf{R})$, with $p_e$ the momentum of an electron. Therefore $\frac{1}{2 M_A} \nabla_A^2 \psi_e(\mathbf{r}; \mathbf{R}) \approx \left( \frac{m}{M_A} \right) \frac{p_e^2}{2 m} \psi_e(\mathbf{r}; \mathbf{R})$. Since $m / M_A \sim 1/10000$, the term in brackets can be dropped, giving

\psi_e(\mathbf{r}; \mathbf{R}) \hat{T}_N(\mathbf{R}) \psi_N(\mathbf{R}) + \psi_N(\mathbf{R}) E_{el} \psi_e(\mathbf{r}; \mathbf{R}) = E_{tot} \psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})    (179)

\left[ \hat{T}_N(\mathbf{R}) + E_{el}(\mathbf{R}) \right] \psi_N(\mathbf{R}) = E_{tot} \psi_N(\mathbf{R})    (180)

This is the nuclear Schrödinger equation we anticipated--the nuclei move in a potential set up by the electrons.

To summarize, the large difference in the relative masses of the electrons and nuclei allows us to approximately separate the wavefunction as a product of nuclear and electronic terms. The electronic wavefunction is solved for a given set of nuclear coordinates,

\hat{H}_{el} \psi_e(\mathbf{r}; \mathbf{R}) = E_{el} \psi_e(\mathbf{r}; \mathbf{R})    (181)

and the electronic energy obtained contributes a potential term to the motion of the nuclei described by the nuclear wavefunction $\psi_N(\mathbf{R})$,

\left[ \hat{T}_N(\mathbf{R}) + E_{el}(\mathbf{R}) \right] \psi_N(\mathbf{R}) = E_{tot} \psi_N(\mathbf{R})    (182)

As a final note, many textbooks, including Szabo and Ostlund [4], mean total energy at fixed geometry when they use the term ``total energy'' (i.e., they neglect the nuclear kinetic energy). This is just $E_{el}$ of equation (169), which is also $E_e$ plus the nuclear-nuclear repulsion. A somewhat more detailed treatment of the Born-Oppenheimer approximation is given elsewhere [6].

Separation of the Nuclear Hamiltonian The nuclear Schrödinger equation can be approximately factored into translational, rotational, and vibrational parts. McQuarrie [1] explains how to do this for a diatomic in section 10-13. The rotational part can be cast into the form of the rigid rotor model, and the vibrational part can be


written as a system of harmonic oscillators. Time does not allow further comment on the nuclear Schrödinger equation, although it is central to molecular spectroscopy.
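The rigid-rotor plus harmonic-oscillator factorization gives the familiar diatomic rovibrational level formula, which a short sketch can illustrate (the constants below are approximate literature values for CO, used only as an example):

```python
# Rigid-rotor / harmonic-oscillator rovibrational levels of a diatomic:
# E(v, J) = omega_e (v + 1/2) + B_e J (J + 1), in cm^-1.
omega_e = 2169.8   # approximate harmonic frequency of CO, cm^-1
B_e = 1.93         # approximate rotational constant of CO, cm^-1

def E(v, J):
    return omega_e * (v + 0.5) + B_e * J * (J + 1)

# Fundamental vibrational transition (v = 0 -> 1, J fixed) is omega_e:
assert abs((E(1, 0) - E(0, 0)) - omega_e) < 1e-9
# Pure rotational spacing E(0, J+1) - E(0, J) = 2 B_e (J + 1):
assert abs((E(0, 1) - E(0, 0)) - 2 * B_e) < 1e-9
```

Anharmonicity and vibration-rotation coupling corrections refine this model, but the separable form above already reproduces the gross structure of infrared and microwave spectra.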

Solving the Electronic Eigenvalue Problem Once we have invoked the Born-Oppenheimer approximation, we attempt to solve the electronic Schrödinger equation (171), i.e.

\hat{H}_e \phi_e(\mathbf{r}; \mathbf{R}) = E_e \phi_e(\mathbf{r}; \mathbf{R})    (183)

But, as mentioned previously, this equation is quite difficult to solve!

Matrix Mechanics As we mentioned previously in section 2, Heisenberg's matrix mechanics, although little-discussed in elementary textbooks on quantum mechanics, is nevertheless formally equivalent to Schrödinger's wave equations. Let us now consider how we might solve the time-independent Schrödinger equation in matrix form.

If we want to solve $\hat{H}_e \phi_e(\mathbf{r}; \mathbf{R}) = E_e \phi_e(\mathbf{r}; \mathbf{R})$ as a matrix problem, we need to find a suitable linear vector space. Now $\phi_e$ is an $N$-electron function that must be antisymmetric with respect to interchange of electronic coordinates. As we just saw in the previous section, any such $N$-electron function can be expressed exactly as a linear combination of Slater determinants, within the space spanned by the set of orbitals $\{ \chi_i \}$. If we denote our Slater determinant basis functions as $| \Phi_I \rangle$, then we can express the eigenvectors as

| \phi_e \rangle = \sum_I^D c_I | \Phi_I \rangle    (195)

for $D$ possible $N$-electron basis functions ($D$ will be infinite if we actually have a complete set of one-electron functions $\chi$). Similarly, we construct the matrix $\mathbf{H}$ in this basis by $H_{IJ} = \langle \Phi_I | \hat{H}_e | \Phi_J \rangle$.

If we solve this matrix equation, $\mathbf{H} \mathbf{c} = E_e \mathbf{c}$, in the space of all possible Slater determinants as just described, then the procedure is called full configuration-interaction, or full CI. A full CI constitutes the exact solution to the time-independent Schrödinger equation within the given space of the spin orbitals $\{ \chi_i \}$. If we restrict the $N$-electron basis set in some way, then we will solve Schrödinger's equation approximately. The method is then called ``configuration interaction,'' where we have dropped the prefix ``full.'' For more information on configuration interaction, see the lecture notes by the present author [7] or one of the available review articles [8,9].


Introduction to Electronic Structure Theory

C. David Sherrill School of Chemistry and Biochemistry

Georgia Institute of Technology June 2002

Last Revised: June 2003

Introduction What is Electronic Structure Theory?

Properties Predicted by Electronic Structure Theory

Postulates of Quantum Mechanics

Dirac Notation

Slater Determinants

Simplified Notation for the Hamiltonian

Matrix Elements of the Hamiltonian and One- and Two-Electron Integrals

Bibliography

About this document ...

Introduction The purpose of these notes is to give a brief introduction to electronic structure theory and to introduce some of the commonly used notation. Chapters 1-2 of Szabo and Ostlund [1] are highly recommended for this topic.

What is Electronic Structure Theory? Electronic Structure Theory describes the motions of electrons in atoms or molecules. Generally this is done in the context of the Born-Oppenheimer Approximation, which says that electrons are so much lighter (and therefore faster) than nuclei that they will find their optimal distribution for any given nuclear configuration. The electronic energy at each nuclear configuration is the potential energy that the nuclei feel, so solving the electronic problem for a range of nuclear configurations gives the potential energy surface.

Because the electrons are so small, one needs to use quantum mechanics to solve for their motion. Quantum mechanics tells us that the electrons will not be localized at particular points in space, but they are best thought of as ``matter waves'' which can interfere. The


probability of finding a single electron at a given point $\mathbf{r}$ in space is given by $| \Psi(\mathbf{r}) |^2$ for its wavefunction $\Psi(\mathbf{r})$ at the point $\mathbf{r}$. The wavefunction can be determined by solving the time-independent Schrödinger equation $\hat{H} \Psi = E \Psi$. If the problem is time-dependent, then the time-dependent Schrödinger equation $\hat{H} \Psi = i \hbar \frac{\partial \Psi}{\partial t}$ must be used instead; otherwise, the solutions to the time-independent problem are also solutions to the time-dependent problem when they are multiplied by the energy-dependent phase factor $e^{-i E t / \hbar}$. Since we have fixed the nuclei under the Born-Oppenheimer approximation, we solve for the nonrelativistic electronic Schrödinger equation:

\left[ -\frac{\hbar^2}{2 m_e} \sum_i \nabla_i^2 - \sum_{i,A} \frac{Z_A e^2}{4 \pi \epsilon_0 r_{iA}} + \sum_{i > j} \frac{e^2}{4 \pi \epsilon_0 r_{ij}} + \sum_{A > B} \frac{Z_A Z_B e^2}{4 \pi \epsilon_0 R_{AB}} \right] \Psi = E \Psi    (1)

where $i, j$ refer to electrons and $A, B$ refer to nuclei. In atomic units, this simplifies to:

\left[ -\frac{1}{2} \sum_i \nabla_i^2 - \sum_{i,A} \frac{Z_A}{r_{iA}} + \sum_{i > j} \frac{1}{r_{ij}} + \sum_{A > B} \frac{Z_A Z_B}{R_{AB}} \right] \Psi = E \Psi    (2)

This Hamiltonian is appropriate as long as relativistic effects are not important for the system in question. Roughly speaking, relativistic effects are not generally considered important for atoms with atomic number below about 25 (Mn). For heavier atoms, the inner electrons are held more tightly to the nucleus and have velocities which increase as the atomic number increases; as these velocities approach the speed of light, relativistic effects become more important. There are various approaches for accounting for relativistic effects, but the most popular is to use relativistic effective core potentials (RECPs), often along with the standard nonrelativistic Hamiltonian above.
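The Z-dependence of the inner-electron velocities can be made concrete with the standard hydrogen-like estimate $\langle v \rangle / c \approx Z \alpha$, where $\alpha \approx 1/137$ is the fine-structure constant (a rough scaling argument, not a quantitative relativistic treatment):

```python
# Rough rule of thumb: a hydrogen-like 1s electron moves at about
# Z * alpha times the speed of light.
ALPHA = 1.0 / 137.036   # fine-structure constant

def vc_ratio(Z):
    return Z * ALPHA

# Hydrogen: well under 1% of c, as stated in the text.
assert vc_ratio(1) < 0.01
# Around Z = 25 (Mn) the 1s electrons already reach ~18% of c,
# which is roughly where relativistic effects start to matter.
assert 0.18 < vc_ratio(25) < 0.19
```

This is why relativistic corrections (or RECPs) become increasingly important toward the bottom of the periodic table.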

Properties Predicted by Electronic Structure Theory According to one of the postulates of quantum mechanics (see below), if we know the wavefunction $\Psi$ for a given system, then we can determine any property of that system, at least in principle. Obviously if we use approximations in determining the wavefunction, then the properties obtained from that wavefunction will also be approximate.

Since we almost always invoke the Born-Oppenheimer approximation, we only have the electronic wavefunction, not the full wavefunction for electrons and nuclei. Therefore, some properties involving nuclear motion are not necessarily available in the context of electronic structure theory. To fully understand the details of a chemical reaction, we need to use the


electronic structure results to carry out subsequent dynamics computations. Fortunately, however, quite a few properties are within the reach of just the electronic problem. For example, since the electronic energy is the potential energy felt by the nuclei, minimizing the electronic energy with respect to nuclear coordinates gives an equilibrium configuration of the molecule (which may be the global or just a local minimum).

The electronic wavefunction or its various derivatives are sufficient to determine the following properties:

Geometrical structures (rotational spectra)

Rovibrational energy levels (infrared and Raman spectra)

Electronic energy levels (UV and visible spectra)

Quantum Mechanics + Statistical Mechanics: Thermochemistry ($\Delta H$, $\Delta S$, $\Delta G$, $C_v$, $C_p$), primarily gas phase.

Potential energy surfaces (barrier heights, transition states); with a treatment of dynamics, this leads to reaction rates and mechanisms.

Ionization potentials (photoelectron and X-ray spectra)

Electron affinities

Franck-Condon factors (transition probabilities, vibronic intensities)

IR and Raman intensities

Dipole moments

Polarizabilities

Electron density maps and population analyses

Magnetic shielding tensors (NMR spectra)

Postulates of Quantum Mechanics 1. The state of a quantum mechanical system is completely specified by the wavefunction $\Psi(\mathbf{r}, t)$.

2. To every observable in classical mechanics, there corresponds a linear, Hermitian operator in quantum mechanics. For example, in coordinate space, the momentum operator corresponding to momentum $p_x$ in the $x$ direction for a single particle is $\hat{p}_x = -i \hbar \frac{\partial}{\partial x}$.

3. In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$ which satisfy $\hat{A} \Psi_a = a \Psi_a$. Although measurements must always yield an eigenvalue, the state does not originally have to be in an eigenstate of $\hat{A}$. An arbitrary state can be expanded in the complete set of eigenvectors of $\hat{A}$ ($\hat{A} \Psi_n = a_n \Psi_n$) as $\Psi = \sum_n c_n \Psi_n$, where the sum can run to infinity in principle. The probability of observing eigenvalue $a_n$ is given by $| c_n |^2$.

4. The average value of the observable corresponding to operator $\hat{A}$ is given by

\langle A \rangle = \frac{\int \Psi^* \hat{A} \Psi \, d\tau}{\int \Psi^* \Psi \, d\tau}    (3)

5. The wavefunction evolves in time according to the time-dependent Schrödinger equation

\hat{H} \Psi(\mathbf{r}, t) = i \hbar \frac{\partial \Psi}{\partial t}    (4)

6. The total wavefunction must be antisymmetric with respect to the interchange of all coordinates of one fermion with those of another. Electronic spin must be included in this set of coordinates. The Pauli exclusion principle is a direct result of this antisymmetry principle.
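Postulates 3 and 4 have a direct finite-dimensional analogue that is easy to verify: expand a normalized state in the eigenvectors of a Hermitian matrix, read off probabilities $|c_n|^2$, and check that the probability-weighted average of the eigenvalues equals the expectation value (the matrix and state below are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # a Hermitian "observable"
eigvals, eigvecs = np.linalg.eigh(A)

psi = np.array([0.6, 0.8])          # a normalized state vector
c = eigvecs.T @ psi                 # expansion coefficients c_n = <n|psi>
probs = np.abs(c) ** 2              # probability of measuring each a_n

assert abs(probs.sum() - 1.0) < 1e-12             # probabilities sum to 1
expectation = (probs * eigvals).sum()
assert abs(expectation - psi @ A @ psi) < 1e-12   # equals <psi|A|psi>
```

The same identities hold in any number of dimensions, which is exactly why the matrix formulation of the next section works.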

Dirac Notation For the purposes of solving the electronic Schrödinger equation on a computer, it is very convenient to turn everything into linear algebra. We can represent the wavefunctions as vectors:

| \Psi \rangle = \sum_i^n c_i | \hat{e}_i \rangle    (5)

where $| \Psi \rangle$ is called a ``state vector,'' the $c_i$ are the expansion coefficients (which may be complex), and the $| \hat{e}_i \rangle$ are fixed ``basis'' vectors. A suitable (infinite-dimensional) linear vector space for such decompositions is called a ``Hilbert space.''


We can write the set of coefficients as a column vector,

\mathbf{c} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}    (6)

In Dirac's ``bracket'' (or bra-ket) notation, we call $| \Psi \rangle$ a ``ket.'' What about the adjoint of this vector? It is a row vector denoted by $\langle \Psi |$, which is called a ``bra'' (to spell ``bra-ket''),

\langle \Psi | = ( c_1^* \; c_2^* \; \cdots \; c_n^* )    (7)

In linear algebra, the scalar product between two vectors $\mathbf{a}$ and $\mathbf{b}$ is just

\mathbf{a}^\dagger \mathbf{b} = \sum_i^n a_i^* b_i    (8)

assuming the two vectors are represented in the same basis set and that the basis vectors are orthonormal (otherwise, overlaps between the basis vectors, i.e., the ``metric,'' must be included). The quantum mechanical shorthand for the above scalar product in bra-ket notation is just

\langle a | b \rangle = \sum_i^n a_i^* b_i    (9)

Frequently, one only writes the subscripts $a$ and $b$ in the Dirac notation, so that the above dot product might be referred to as just $\langle a | b \rangle$. The order of the vectors $a$ and $b$ in a dot product matters if the vectors can have complex numbers for their components, since $\langle a | b \rangle = \langle b | a \rangle^*$.
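These bra-ket rules map directly onto complex vector arithmetic; a small numpy check, with two arbitrary complex vectors chosen only for illustration:

```python
import numpy as np

# Kets as column vectors with complex components; the bra is the
# conjugate transpose, and np.vdot conjugates its first argument,
# so np.vdot(a, b) computes <a|b> = sum_i a_i^* b_i.
a = np.array([1.0 + 2.0j, 0.5 - 1.0j])
b = np.array([0.3 + 0.1j, 2.0 + 0.0j])

braket_ab = np.vdot(a, b)
braket_ba = np.vdot(b, a)

# Order matters for complex vectors: <a|b> = <b|a>*.
assert np.isclose(braket_ab, np.conj(braket_ba))
# The norm <a|a> is always real and non-negative.
assert np.isclose(np.vdot(a, a).imag, 0.0) and np.vdot(a, a).real > 0
```

Note the deliberate conjugation convention: forgetting it is the most common bug when translating bra-ket expressions into code.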

Now suppose that we want our basis set to be every possible value of the coordinate $x$. Apart from giving us a continuous (and infinite) basis set, there is no formal difficulty with this. We can then represent an arbitrary state as:

| \Psi \rangle = \int dx \, c(x) | x \rangle    (10)

What are the coefficients $c(x)$? It turns out that these coefficients are simply the value of the wavefunction at each point $x$. That is,

c(x) = \langle x | \Psi \rangle = \Psi(x)    (11)

Since any two coordinates are considered orthogonal (and their overlap gives a Dirac delta function), the scalar product of two state functions in coordinate space becomes

\langle \Psi_1 | \Psi_2 \rangle = \int \Psi_1^*(x) \Psi_2(x) \, dx    (12)

Now we turn our attention to matrix representations of operators. An operator $\hat{A}$ can be characterized by its effect on the basis vectors. The action of $\hat{A}$ on a basis vector $| \hat{e}_j \rangle$ yields some new vector $| \hat{e}_j' \rangle$ which can be expanded in terms of the basis vectors so long as we have a complete basis set.

\hat{A} | \hat{e}_j \rangle = | \hat{e}_j' \rangle = \sum_i^n | \hat{e}_i \rangle A_{ij}    (13)

If we know the effect of $\hat{A}$ on the basis vectors, then we know the effect of $\hat{A}$ on any arbitrary vector because of the linearity of $\hat{A}$.

| b \rangle = \hat{A} | a \rangle = \hat{A} \sum_j^n a_j | \hat{e}_j \rangle = \sum_j^n a_j \hat{A} | \hat{e}_j \rangle = \sum_j^n \sum_i^n a_j | \hat{e}_i \rangle A_{ij}    (14)

or

b_i = \sum_j^n A_{ij} a_j    (15)

This may be written in matrix notation as

\mathbf{b} = \mathbf{A} \mathbf{a}    (16)

We can obtain the coefficients $A_{ij}$ by taking the inner product of both sides of equation 13 with $| \hat{e}_i \rangle$, yielding

\langle \hat{e}_i | \hat{A} | \hat{e}_j \rangle = \sum_k^n \langle \hat{e}_i | \hat{e}_k \rangle A_{kj} = A_{ij}    (17)

since $\langle \hat{e}_i | \hat{e}_k \rangle = \delta_{ik}$ due to the orthonormality of the basis. In bra-ket notation, we may write

A_{ij} = \langle i | \hat{A} | j \rangle    (18)

where $i$ and $j$ denote two basis vectors. This use of bra-ket notation is consistent with its earlier use if we realize that $\hat{A} | j \rangle$ is just another vector $| j' \rangle$.

It is easy to show that for a linear operator $\hat{A}$, the inner product $\langle a | \hat{A} | b \rangle$ for two general vectors (not necessarily basis vectors) $| a \rangle$ and $| b \rangle$ is given by


\langle a | \hat{A} | b \rangle = \sum_i^n \sum_j^n a_i^* A_{ij} b_j    (19)

or in matrix notation

\langle a | \hat{A} | b \rangle = \mathbf{a}^\dagger \mathbf{A} \mathbf{b}    (20)

By analogy to equation (12), we may generally write this inner product in the form

\langle a | \hat{A} | b \rangle = \int \Psi_a^* \hat{A} \Psi_b \, dx    (21)

Previously, we noted that $\langle a | b \rangle = \langle b | a \rangle^*$, or $\mathbf{a}^\dagger \mathbf{b} = ( \mathbf{b}^\dagger \mathbf{a} )^*$. Thus we can see also that

\langle a | \hat{A} b \rangle = \langle \hat{A} b | a \rangle^*    (22)

We now define the adjoint of an operator $\hat{A}$, denoted by $\hat{A}^\dagger$, as that linear operator for which

\langle a | \hat{A} b \rangle = \langle \hat{A}^\dagger a | b \rangle    (23)

That is, we can make an operator act backwards into ``bra'' space if we take its adjoint. With this definition, we can further see that

\langle a | \hat{A} | b \rangle = \langle \hat{A}^\dagger a | b \rangle = \langle b | \hat{A}^\dagger a \rangle^*    (24)

or, in bra-ket notation,

\langle a | \hat{A} | b \rangle = \langle b | \hat{A}^\dagger | a \rangle^*    (25)

If we pick $| a \rangle = | \hat{e}_i \rangle$ and $| b \rangle = | \hat{e}_j \rangle$ (i.e., if we pick two basis vectors), then we obtain

A_{ij} = \left( A^\dagger \right)_{ji}^*    (26)

But this is precisely the condition for the elements of a matrix and its adjoint! Thus the adjoint of the matrix representation of $\hat{A}$ is the same as the matrix representation of $\hat{A}^\dagger$.

This correspondence between operators and their matrix representations goes quite far, although of course the specific matrix representation depends on the choice of basis. For instance, we know from linear algebra that if a matrix and its adjoint are the same, then the matrix is called Hermitian. The same is true of the operators; if

\hat{A} = \hat{A}^\dagger    (27)

then $\hat{A}$ is a Hermitian operator, and all of the special properties of Hermitian operators apply to $\hat{A}$ or its matrix representation.
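The operator-matrix correspondence, the adjoint relation, and the Hermitian property can all be checked numerically on random matrices (the matrices and vectors below are arbitrary, used only to verify the identities):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_dag = A.conj().T              # matrix representation of the adjoint

a = rng.standard_normal(3) + 1j * rng.standard_normal(3)
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# <a|A|b>* = <b|A^dagger|a>, the matrix form of equation (25).
assert np.isclose(np.conj(np.vdot(a, A @ b)), np.vdot(b, A_dag @ a))

# A Hermitian matrix (A = A^dagger) has purely real eigenvalues,
# one of the "special properties" referred to in the text.
H = (A + A_dag) / 2
eigs = np.linalg.eigvalsh(H)
assert np.allclose(eigs.imag, 0.0)
```

Any matrix can be symmetrized as above, which is a convenient way to manufacture Hermitian test operators.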

Slater Determinants An electronic wavefunction for $N$ particles must be a function of $4N$ coordinates: for each electron, we have $x$, $y$, and $z$ Cartesian coordinates plus a spin coordinate (sometimes designated $\omega$, which can have values $+\frac{1}{2}$ and $-\frac{1}{2}$). The Cartesian coordinates for electron $i$ are usually denoted by a collective index $\mathbf{r}_i$, and the set of Cartesian plus spin coordinates is often denoted $\mathbf{x}_i$.

What is an appropriate form for an $N$-electron wavefunction? The simplest solution would be a product of one-particle functions (``orbitals''):

\Psi(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N) = \chi_1(\mathbf{x}_1) \chi_2(\mathbf{x}_2) \cdots \chi_N(\mathbf{x}_N)    (28)

This is referred to as a Hartree Product. Since the orbitals $\chi_i(\mathbf{x}_i)$ depend on spatial and spin coordinates, they are called spin orbitals. These spin orbitals are simply a spatial orbital times a spin function, i.e., $\chi_i(\mathbf{x}_i) = \phi_i(\mathbf{r}_i) \alpha(\omega_i)$ or $\chi_i(\mathbf{x}_i) = \phi_i(\mathbf{r}_i) \beta(\omega_i)$.

Unfortunately, the Hartree product is not a suitable wavefunction because it ignores the antisymmetry principle (quantum mechanics postulate #6). Since electrons are fermions, the electronic wavefunction must be antisymmetric with respect to the interchange of coordinates of any pair of electrons. This is not the case for the Hartree Product.

If we simplify for a moment to the case of two electrons, we can see how to make the wavefunction antisymmetric:

\Psi(\mathbf{x}_1, \mathbf{x}_2) = \frac{1}{\sqrt{2}} \left[ \chi_1(\mathbf{x}_1) \chi_2(\mathbf{x}_2) - \chi_1(\mathbf{x}_2) \chi_2(\mathbf{x}_1) \right]    (29)

The factor $1 / \sqrt{2}$ is just to make the wavefunction normalized (we're assuming our individual orbitals are orthonormal). The expression above can be rewritten as a determinant as

\Psi(\mathbf{x}_1, \mathbf{x}_2) = \frac{1}{\sqrt{2}} \begin{vmatrix} \chi_1(\mathbf{x}_1) & \chi_2(\mathbf{x}_1) \\ \chi_1(\mathbf{x}_2) & \chi_2(\mathbf{x}_2) \end{vmatrix}    (30)

Note a nice feature of this; if we try to put two electrons in the same orbital at the same time (i.e., set $\chi_1 = \chi_2$), then $\Psi(\mathbf{x}_1, \mathbf{x}_2) = 0$. This is just a more sophisticated statement of the Pauli exclusion principle, which is a consequence of the antisymmetry principle!

This strategy can be generalized to $N$ electrons using determinants.

\Psi = \frac{1}{\sqrt{N!}} \begin{vmatrix} \chi_1(\mathbf{x}_1) & \chi_2(\mathbf{x}_1) & \cdots & \chi_N(\mathbf{x}_1) \\ \chi_1(\mathbf{x}_2) & \chi_2(\mathbf{x}_2) & \cdots & \chi_N(\mathbf{x}_2) \\ \vdots & \vdots & & \vdots \\ \chi_1(\mathbf{x}_N) & \chi_2(\mathbf{x}_N) & \cdots & \chi_N(\mathbf{x}_N) \end{vmatrix}    (31)

A determinant of spin orbitals is called a Slater determinant after John Slater. By expanding the determinant, we obtain $N!$ Hartree Products, each with a different sign; the electrons are arranged in all possible ways among the spin orbitals. This ensures that the electrons are indistinguishable as required by the antisymmetry principle.
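The antisymmetry and Pauli properties can be demonstrated with a discrete sketch: represent each spin orbital as an orthonormal vector over a grid of "coordinate" indices, so that $\Psi(x_1, \ldots, x_N) = \det[\chi_i(x_j)] / \sqrt{N!}$ becomes an ordinary matrix determinant (the orbitals below are random orthonormal vectors, purely for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
# Three orthonormal "spin orbitals", each sampled at 6 grid points;
# orbitals[i, x] plays the role of chi_i evaluated at coordinate x.
orbitals = np.linalg.qr(rng.standard_normal((6, 3)))[0].T

def slater(coords):
    """Psi(x_1,...,x_N) = det[chi_i(x_j)] / sqrt(N!)."""
    n = len(coords)
    M = np.array([[orbitals[i, x] for x in coords] for i in range(n)])
    return np.linalg.det(M) / math.sqrt(math.factorial(n))

# Exchanging two electron coordinates flips the sign (antisymmetry):
assert np.isclose(slater((0, 1, 2)), -slater((1, 0, 2)))
# Two electrons at the same coordinate give a vanishing amplitude
# (two equal determinant columns), the exclusion principle at work:
assert abs(slater((0, 0, 2))) < 1e-12
```

Duplicating an orbital (two equal rows) makes the determinant vanish in exactly the same way, which is the "same orbital twice" statement in the text.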


Since we can always construct a determinant (within a sign) if we just know the list of the occupied orbitals $\{ \chi_i, \chi_j, \ldots, \chi_k \}$, we can write it in shorthand in a ket symbol as $| \chi_i \chi_j \cdots \chi_k \rangle$ or even more simply as $| i j \cdots k \rangle$. Note that we have dropped the normalization factor. It's still there, but now it's just implied!

How do we get the orbitals which make up the Slater determinant? This is the role of Hartree-Fock theory, which shows how to use the Variational Theorem to find those orbitals which minimize the total electronic energy. Typically, the spatial orbitals are expanded as a linear combination of contracted Gaussian-type functions centered on the various atoms (the linear combination of atomic orbitals molecular orbital or LCAO method). This allows one to transform the integro-differential equations of Hartree-Fock theory into linear algebra equations by the so-called Hartree-Fock-Roothaan procedure.

How could the wavefunction be made more flexible? There are two ways: (1) use a larger atomic orbital basis set, so that even better molecular orbitals can be obtained; (2) write the wavefunction as a linear combination of different Slater determinants with different orbitals. The latter approach is used in the post-Hartree-Fock electron correlation methods such as configuration interaction, many-body perturbation theory, and the coupled-cluster method.

Simplified Notation for the Hamiltonian Now that we know the functional form for the wavefunction in Hartree-Fock theory, let's re-examine the Hamiltonian to make it look as simple as possible. In the process, we will bury some complexity that would have to be taken care of later (in the evaluation of integrals).

We will define a one-electron operator $\hat{h}(i)$ as follows

\hat{h}(i) = -\frac{1}{2} \nabla_i^2 - \sum_A \frac{Z_A}{r_{iA}}    (32)

and a two-electron operator $\hat{v}(i,j)$ as

\hat{v}(i,j) = \frac{1}{r_{ij}}    (33)

Sometimes this is also called $g(i,j)$. Note that, in another simplification, we have begun writing $r_{iA}$ as shorthand for $| \mathbf{r}_i - \mathbf{R}_A |$, and $r_{ij}$ as shorthand for $| \mathbf{r}_i - \mathbf{r}_j |$.

Now we can write the electronic Hamiltonian much more simply, as

\hat{H}_{el} = \sum_i \hat{h}(i) + \sum_{i < j} \hat{v}(i,j) + V_{NN}    (34)

Since $V_{NN}$ is just a constant for the fixed set of nuclear coordinates $\{ \mathbf{R} \}$, we will ignore it for now (it doesn't change the eigenfunctions, and only shifts the eigenvalues).

Matrix Elements of the Hamiltonian and One- and Two-Electron Integrals Now that we have a much more compact notation for the electronic Hamiltonian, we need to discuss how to evaluate matrix elements of this Hamiltonian in a basis of $N$-electron Slater determinants $\{ | \Phi_I \rangle \}$. A matrix element between Slater determinants $| \Phi_I \rangle$ and $| \Phi_J \rangle$ will be written $\langle \Phi_I | \hat{H} | \Phi_J \rangle$, where we have dropped the ``el'' subscript on $\hat{H}$ because we will discuss the electronic Hamiltonian exclusively from this point. Because the Hamiltonian, like most operators in quantum mechanics, is a Hermitian operator, $\langle \Phi_I | \hat{H} | \Phi_J \rangle = \langle \Phi_J | \hat{H} | \Phi_I \rangle^*$.

It would seem that computing these matrix elements would be extremely tedious, because both $| \Phi_I \rangle$ and $| \Phi_J \rangle$ are Slater determinants which expand into $N!$ different products of orbitals, giving a total of $(N!)^2$ different terms! However, all but $N!$ of these terms go to zero because the orbitals are orthonormal. If we were computing a simple overlap like $\langle \Phi_I | \Phi_I \rangle$, all of the remaining terms are identical and cancel the $N!$ in the denominator from the Slater determinant normalization factor.

If $| \Phi_I \rangle$ and $| \Phi_J \rangle$ differ, then any term which places electron $i$ in some orbital $\chi_p$ in $| \Phi_I \rangle$ and a different orbital $\chi_q$ in $| \Phi_J \rangle$ must also go to zero unless the integration over electron $i$ is not just a simple overlap integral like $\langle \chi_p(i) | \chi_q(i) \rangle$, but involves some operator $\hat{h}(i)$ or $\hat{v}(i,j)$. Since a particular operator $\hat{h}(i)$ can only affect one coordinate $\mathbf{x}_i$, all the other spin orbitals for other electrons must be identical, or the integration will go to zero (orbital orthonormality). Hence, $\hat{h}(i)$ allows, at most, only one spin orbital to be different in $| \Phi_I \rangle$ and $| \Phi_J \rangle$ for the matrix element to be nonzero. Integration over the other electron coordinates will give factors of one, resulting in a single integral over a single set of electron coordinates for electron $i$, which is called a one-electron integral,

\langle p | \hat{h} | q \rangle = \int d\mathbf{x}_1 \, \chi_p^*(\mathbf{x}_1) \hat{h}(\mathbf{r}_1) \chi_q(\mathbf{x}_1)    (35)

where $\chi_p$ and $\chi_q$ are the two orbitals allowed to be different.

Likewise, the operator $\hat{v}(i,j)$ allows up to two orbitals to be different in Slater determinants $| \Phi_I \rangle$ and $| \Phi_J \rangle$ before the matrix element goes to zero. Integration over other electron coordinates gives factors of one, leading to an integral over two electronic coordinates only, a two-electron integral,

\langle p q | r s \rangle = \int d\mathbf{x}_1 d\mathbf{x}_2 \, \chi_p^*(\mathbf{x}_1) \chi_q^*(\mathbf{x}_2) \frac{1}{r_{12}} \chi_r(\mathbf{x}_1) \chi_s(\mathbf{x}_2)    (36)

There are other ways to write this integral. The form above, which places the complex conjugate terms on the left, is called ``physicist's notation,'' and is usually written in a bra-ket form.

There is a very simple set of rules, called Slater's rules, which explain how to write matrix elements $\langle \Phi_I | \hat{H} | \Phi_J \rangle$ in terms of these one- and two-electron integrals. Most of quantum chemistry is derived in terms of these quantities.


General Quantum Chemistry The Born-Oppenheimer Approximation (PDF) Time Evolution of Wavefunctions (PDF)

Angular Momentum Summary (PDF)

The Unitary Group Form of the Hamiltonian (PDF)

Introduction to Hartree-Fock Theory (PDF)

Permutational Symmetries of Integrals (PDF)

Notes on Excited Electronic States (PDF)

The Quantum Harmonic Oscillator (PDF)

Assigning Symmetries of Vibrational Modes (PDF)

Notes on Elementary Linear Algebra (PDF)

Notes on Density Fitting (PDF)

The Born-Oppenheimer Approximation

C. David Sherrill School of Chemistry and Biochemistry

Georgia Institute of Technology March 2005

We may write the nonrelativistic Hamiltonian for a molecule as a sum of five terms:

\hat{H} = \hat{T}_e(\mathbf{r}) + \hat{T}_N(\mathbf{R}) + \hat{V}_{eN}(\mathbf{r}, \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (1)

where $i, j$ refer to electrons and $A, B$ refer to nuclei. In atomic units, this is just

\hat{H} = -\sum_i \frac{1}{2} \nabla_i^2 - \sum_A \frac{1}{2 M_A} \nabla_A^2 - \sum_{A,i} \frac{Z_A}{R_{Ai}} + \sum_{A > B} \frac{Z_A Z_B}{R_{AB}} + \sum_{i > j} \frac{1}{r_{ij}}    (2)

The Schrödinger equation may be written more compactly as

\hat{H} \Psi(\mathbf{r}, \mathbf{R}) = E \Psi(\mathbf{r}, \mathbf{R})    (3)

where $\mathbf{R}$ is the set of nuclear coordinates and $\mathbf{r}$ is the set of electronic coordinates. If spin-orbit effects are important, they can be added through a spin-orbit operator $\hat{H}_{SO}$.

Unfortunately, the $\hat{V}_{eN}(\mathbf{r}, \mathbf{R})$ term prevents us from separating $\hat{H}$ into nuclear and electronic parts, which would allow us to write the molecular wavefunction as a product of nuclear and electronic terms, $\Psi(\mathbf{r}, \mathbf{R}) = \psi_e(\mathbf{r}) \psi_N(\mathbf{R})$. We thus introduce the Born-Oppenheimer approximation, by which we conclude that this nuclear and electronic separation is approximately correct. The term $\hat{V}_{eN}(\mathbf{r}, \mathbf{R})$ is large and cannot be neglected; however, we can make the $\mathbf{R}$ dependence parametric, so that the total wavefunction is given as $\psi_e(\mathbf{r}; \mathbf{R}) \psi_N(\mathbf{R})$. The Born-Oppenheimer approximation rests on the fact that the nuclei are much more massive than the electrons, which allows us to say that the nuclei are nearly fixed with respect to electron motion. We can fix $\mathbf{R}$, the nuclear configuration, at some value $\mathbf{R}_a$, and solve for the electronic wavefunction $\psi_e(\mathbf{r}; \mathbf{R}_a)$, which depends only parametrically on $\mathbf{R}$. If we do this for a range of $\mathbf{R}$, we obtain the potential energy curve along which the nuclei move.

We now show the mathematical details. Initially, $\hat{T}_N(\mathbf{R})$ can be neglected since $\hat{T}_N$ is smaller than $\hat{T}_e$ by a factor of $M_A / \mu$, where $\mu$ is the reduced mass of an electron. Thus for a fixed nuclear configuration, we have

\hat{H}_{el} = \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}; \mathbf{R}) + \hat{V}_{NN}(\mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (4)

such that

\hat{H}_{el} \psi_e(\mathbf{r}; \mathbf{R}) = E_{el} \psi_e(\mathbf{r}; \mathbf{R})    (5)

This is the ``clamped-nuclei'' Schrödinger equation. Quite frequently $\hat{V}_{NN}(\mathbf{R})$ is neglected in the above equation, which is justified since in this case $\mathbf{R}$ is just a parameter so that $\hat{V}_{NN}(\mathbf{R})$ is just a constant and shifts the eigenvalues only by some constant amount. Leaving $\hat{V}_{NN}(\mathbf{R})$ out of the electronic Schrödinger equation leads to a similar equation,

\hat{H}_e = \hat{T}_e(\mathbf{r}) + \hat{V}_{eN}(\mathbf{r}; \mathbf{R}) + \hat{V}_{ee}(\mathbf{r})    (6)

\hat{H}_e \phi_e(\mathbf{r}; \mathbf{R}) = E_e \phi_e(\mathbf{r}; \mathbf{R})    (7)

For the purposes of these notes, we will assume that $\hat{V}_{NN}(\mathbf{R})$ is included in the electronic Hamiltonian. Additionally, if spin-orbit effects are important, then these can be included at each nuclear configuration according to

\hat{H}_{el}' = \hat{H}_{el} + \hat{H}_{SO}    (8)

\hat{H}_{el}' \psi_e'(\mathbf{r}; \mathbf{R}) = E_{el}' \psi_e'(\mathbf{r}; \mathbf{R})    (9)

Consider again the original Hamiltonian (1). An exact solution can be obtained by using an (infinite) expansion of the form

\Psi(\mathbf{r}, \mathbf{R}) = \sum_k \psi_e^k(\mathbf{r}; \mathbf{R}) \psi_N^k(\mathbf{R})    (10)

although, to the extent that the Born-Oppenheimer approximation is valid, very accurate solutions can be obtained using only one or a few terms. Alternatively, the total wavefunction can be expanded in terms of the electronic wavefunctions and a set of pre-selected nuclear wavefunctions; this requires the introduction of expansion coefficients:

\Psi^n(\mathbf{r}, \mathbf{R}) = \sum_k \sum_m c_{km}^n \psi_e^k(\mathbf{r}; \mathbf{R}) \chi_m(\mathbf{R})    (11)

where the superscript $n$ has been added as a reminder that there are multiple solutions to the Schrödinger equation.

Expressions for the nuclear wavefunctions can be obtained by inserting the expansion (10) into the total Schrödinger equation, which yields


(12)

or

(13)

if the electronic functions are orthonormal. Simplifying further,

(14)

The last term can be expanded using the chain rule to yield

(15)

At this point, a more compact notation is very helpful. Following Tully [1], we introduce the following quantities:

(16)

(17)


(18)

(19)

(20)

(21)

(22)

Note that equation (18) of reference [1] contains a factor of 1/2 that should not be there. Now we can rewrite equations (14) and (15) as

(23)

or

(24)


This is equation (14) of Tully's article [1]. Tully simplifies this equation by one more step, removing the diagonal first-derivative coupling term $\mathbf{d}_{ii} = \langle \psi_i | \nabla_{\mathbf{R}} \psi_i \rangle$. By taking the derivative of the overlap $\langle \psi_i | \psi_i \rangle$, it is easy to show that this term must be zero when the electronic wavefunction can be made real. If we use electronic wavefunctions which diagonalize the electronic Hamiltonian, then the electronic basis is called adiabatic, and the off-diagonal electronic coupling terms vanish. This is the general procedure. However, the equation above is formally exact even if other electronic functions are used. In some contexts it is preferable to minimize other coupling terms, such as the first-derivative couplings $\mathbf{d}_{ij}$; this results in a diabatic electronic basis. Note that the first-derivative nonadiabatic coupling matrix elements $\mathbf{d}_{ij}$ are usually considered more important than the second-derivative ones, $D_{ij}$.

In most cases, the couplings on the right-hand side of the preceding equation are small. If they can be safely neglected, and assuming that the wavefunction is real, we obtain the following equation for the motion of the nuclei on a given Born-Oppenheimer potential energy surface:

(25)

This equation clearly shows that, when the off-diagonal couplings can be ignored, the nuclei move in a potential field set up by the electrons. The potential energy at each point is given primarily by $E_{el}(\mathbf{R})$ (the expectation value of the electronic energy), with a small correction from the diagonal second-derivative coupling $D_{ii}$. Following Steinfeld [2], we can estimate the magnitude of this correction term as follows: a typical contribution has the form $\frac{1}{2 M_A} \nabla_A^2 \psi_e$, but $\nabla_A^2 \psi_e$ is of the same order as $\nabla_i^2 \psi_e$ since the derivatives operate over approximately the same dimensions. The latter is of order $p_e^2 \psi_e$, with $p_e$ the momentum of an electron. Therefore $\frac{1}{2 M_A} \nabla_A^2 \psi_e \approx \left( \frac{m}{M_A} \right) \frac{p_e^2}{2 m} \psi_e$. Since $m / M_A \sim 1/10000$, this term is expected to be small, and it is usually dropped. However, to achieve very high accuracy, such as in spectroscopic applications, this term must be retained.


The Born-Oppenheimer Diagonal Correction

In a perturbation theory analysis of the Born-Oppenheimer approximation, the first-order correction to the Born-Oppenheimer electronic energy due to the nuclear motion is the Born-Oppenheimer diagonal correction (BODC),

E_{\mathrm{BODC}} = \left\langle \Phi_i \left| \hat{T}_N \right| \Phi_i \right\rangle = \left\langle \Phi_i \left| -\sum_A \frac{1}{2 M_A} \nabla_A^2 \right| \Phi_i \right\rangle    (26)

which can be applied to any electronic state $\Phi_i$. The BODC is also referred to as the ``adiabatic correction.'' One of the first systematic investigations of this effect was a study by Handy, Yamaguchi, and Schaefer in 1986 [3]. In this work, the authors evaluated the BODC using Hartree-Fock self-consistent-field methods (and, where relevant, two-configuration self-consistent-field) for a series of small molecules. One interesting finding was that the BODC changes the singlet-triplet splitting in methylene by 40 cm$^{-1}$, which is small on a ``chemical'' energy scale but very relevant on a spectroscopic energy scale. Inclusion of the BODC is required to accurately model the very dense rovibrational spectrum of the water molecule observed at high energies, and these models were a critical component in proving the existence of water on the sun [4,5].

For many years, it was only possible to compute the BODC for Hartree-Fock or multiconfigurational self-consistent-field wavefunctions. However, in 2003 the evaluation of the BODC using general configuration interaction wavefunctions was implemented [6], and its convergence toward the ab initio limit was investigated for H$_2$, BH, and H$_2$O. This study found that the absolute value of the BODC is difficult to converge, but errors in estimates of the BODC largely cancel each other, so that even BODCs computed using Hartree-Fock theory capture most of the effect of the adiabatic correction on relative energies or geometries. Table 1 displays the effect of the BODC on the barrier to linearity in the water molecule and the convergence of this quantity with respect to basis set and level of electron correlation. Although the absolute values of the BODCs of bent and linear water change significantly with respect to basis set and level of electron correlation, their difference does not change much as long as a basis of at least cc-pVDZ quality is used. For the cc-pVDZ basis, electron correlation changes the differential BODC correction by about 1 cm$^{-1}$. Table 2 displays the effect of the BODC on the equilibrium bond lengths and harmonic vibrational frequencies of the BH, CH$^+$, and NH molecules [7] and demonstrates somewhat larger changes to the spectroscopic constants than one might have expected, particularly for BH.

Table 1: Adiabatic correction to the barrier to linearity of water in the ground state (in cm$^{-1}$)

Basis     Method    Bent     Linear   $\Delta$$^a$
DZ        RHF       613.66   587.69   -25.97
DZ        CISD      622.40   596.43   -25.97
DZ        CISDT     623.62   597.56   -26.06
DZ        CISDTQ    624.56   598.28   -26.28
DZ        CISDTQP   624.61   598.32   -26.29
DZP       RHF       597.88   581.32   -16.56
cc-pVDZ   RHF       600.28   585.20   -15.08
cc-pVDZ   CISD      615.03   599.15   -15.88
cc-pVDZ   CISDT     616.82   600.62   -16.20
cc-pVTZ   RHF       596.53   581.43   -15.10
cc-pVTZ   CISD      611.89   596.73   -15.16
cc-pVQZ   RHF       595.57   580.72   -14.85

Data from Valeev and Sherrill [6].
$^a$ The difference between the adiabatic corrections for the linear (D$_{\infty h}$) and bent (C$_{2v}$) structures.

Table 2: Adiabatic corrections to the bond lengths and harmonic frequencies of BH, CH$^+$, and NH

                              BH        CH$^+$    NH
$\Delta r_e$ (\AA)            0.00066   0.00063   0.00027
$\Delta \omega_e$ (cm$^{-1}$) -2.25     -2.81     -1.38

Data from Temelso, Valeev, and Sherrill [7].

On the Time Evolution of Wavefunctions in Quantum Mechanics

  Introduction


The Classical Coupled Mass Problem

o Summary

Decoupling of Equations in Quantum Mechanics

o Basis Functions in Coordinate Space

o Matrix Version

o Dirac Notation Version

Introduction

The purpose of these notes is to help you appreciate the connection between eigenfunctions of the Hamiltonian and classical normal modes, and to help you understand the time propagator.

The Classical Coupled Mass Problem

Here we will review the results of the coupled mass problem, Example 1.8.6 from Shankar. This is an example from classical physics which nevertheless demonstrates some of the essential features of coupled degrees of freedom in quantum mechanical problems, and a general approach for removing such coupling. The problem involves two objects of equal mass $m$, connected to two different walls and also to each other by springs (each with force constant $k$). Using $F = ma$ and Hooke's law ($F = -kx$) for the springs, and denoting the displacements of the two masses as $x_1$ and $x_2$, it is straightforward to deduce equations for the accelerations (second derivatives in time, $\ddot{x}_1$ and $\ddot{x}_2$):

\ddot{x}_1 = -\frac{2k}{m} x_1 + \frac{k}{m} x_2    (1)

\ddot{x}_2 = \frac{k}{m} x_1 - \frac{2k}{m} x_2    (2)

The goal of the problem is to solve these second-order differential equations to obtain the functions $x_1(t)$ and $x_2(t)$ describing the motion of the two masses at any given time. Since they are second-order differential equations, we need two initial conditions for each variable, i.e., $x_1(0)$, $\dot{x}_1(0)$, $x_2(0)$, and $\dot{x}_2(0)$.


Our two differential equations are clearly coupled, since $\ddot{x}_1$ depends not only on $x_1$ but also on $x_2$ (and likewise for $\ddot{x}_2$). This makes the equations difficult to solve! The solution was to write the differential equations in matrix form, and then diagonalize the matrix to obtain the eigenvectors and eigenvalues.

In matrix form, we have

\begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{pmatrix} = \begin{pmatrix} -2k/m & k/m \\ k/m & -2k/m \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}    (3)

where we will call the $2 \times 2$ matrix $\Omega$. Since $\Omega$ is real and symmetric, it must also be Hermitian, so we know that it has real eigenvalues, and that the eigenvectors will be linearly independent and can be made to form an orthonormal basis.

Equation 3 is a particular form of the more general equation (in Dirac notation)

|\ddot{x}(t)\rangle = \Omega |x(t)\rangle    (4)

where we have picked a basis set which we will call $\{|1\rangle, |2\rangle\}$, where

|1\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}    (5)

represents a unit displacement for coordinate $x_1$, and likewise

|2\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}    (6)

represents a unit displacement for coordinate $x_2$. Clearly any state of the system $(x_1, x_2)$ can be written as a column vector

|x\rangle = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}    (7)

(as in eq. 3), which can always be decomposed into our basis as


\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + x_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix}    (8)

or

|x\rangle = x_1 |1\rangle + x_2 |2\rangle    (9)

Hence, eq. 3 can be considered a representation of the more general eq. 4 in the $\{|1\rangle, |2\rangle\}$ basis.

If we assume the initial velocities are zero, then we should be able to predict $x_1(t)$ and $x_2(t)$ directly from $x_1(0)$ and $x_2(0)$. Thus, we seek a solution of the form

\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = G(t) \begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix}    (10)

where $G(t)$ is a matrix, called the propagator, that lets us get the motion at future times from the initial conditions. We will have to figure out what $G(t)$ is.

Again, the strategy is to diagonalize $\Omega$. The point of diagonalizing $\Omega$ is that, as you can see from eq. 3, the coupling between $x_1$ and $x_2$ goes away if $\Omega$ becomes a diagonal matrix. You can easily verify that the eigenvectors and their corresponding eigenvalues, which we will label with Roman numerals I and II, are

|I\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \lambda_I = -\frac{k}{m}    (11)

|II\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad \lambda_{II} = -\frac{3k}{m}    (12)
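These eigenvalues and eigenvectors are easy to verify numerically. A minimal sketch with NumPy, with $k$ and $m$ set to 1 purely for illustration:

```python
import numpy as np

k, m = 1.0, 1.0  # spring constant and mass (arbitrary units)

# The coupling matrix Omega from eq. 3.
Omega = (k / m) * np.array([[-2.0, 1.0],
                            [1.0, -2.0]])

# eigh is the right routine for real symmetric (Hermitian) matrices;
# it returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(Omega)

print(evals)   # [-3. -1.]  i.e. -3k/m and -k/m
print(evecs)   # columns proportional to (1,-1)/sqrt(2) and (1,1)/sqrt(2)

# Transforming Omega with the eigenvector matrix diagonalizes it,
# as in eq. 13 (up to the ordering of the eigenvalues).
U = evecs
print(U.T @ Omega @ U)
```

Note that the overall sign of each eigenvector is arbitrary, and the routine's ascending eigenvalue ordering may differ from the I, II labeling used in the text.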

This new basis, the eigenvector basis $\{|I\rangle, |II\rangle\}$, is just as legitimate as our original basis $\{|1\rangle, |2\rangle\}$, and is in fact better in the sense that it diagonalizes $\Omega$. So, instead of using the $\{|1\rangle, |2\rangle\}$ basis to obtain eq. 3 from eq. 4, we can use the $\{|I\rangle, |II\rangle\}$ basis to obtain


\begin{pmatrix} \ddot{x}_I \\ \ddot{x}_{II} \end{pmatrix} = \begin{pmatrix} -k/m & 0 \\ 0 & -3k/m \end{pmatrix} \begin{pmatrix} x_I \\ x_{II} \end{pmatrix}    (13)

so that now $\ddot{x}_I$ depends only on $x_I$, and $\ddot{x}_{II}$ depends only on $x_{II}$. The equations are uncoupled! Note that we are now expanding the solution in the $\{|I\rangle, |II\rangle\}$ basis, so the components in this basis are now $x_I$ and $x_{II}$ instead of $x_1$ and $x_2$:

|x\rangle = x_I |I\rangle + x_{II} |II\rangle    (14)

Of course it is possible to switch between the $\{|1\rangle, |2\rangle\}$ basis and the $\{|I\rangle, |II\rangle\}$ basis. If we define our basis set transformation matrix $U$ as that obtained by making each column one of the eigenvectors of $\Omega$, we obtain

U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}    (15)

which is a unitary matrix (it has to be, since $\Omega$ is Hermitian). Vectors in the two basis sets are related by

\begin{pmatrix} x_I \\ x_{II} \end{pmatrix} = U^{\dagger} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}    (16)

In this case, $U$ is special because $U = U^{\dagger}$; this doesn't generally happen. You can verify that the $\Omega$ matrix, when transformed into the $\{|I\rangle, |II\rangle\}$ basis via $U^{\dagger} \Omega U$, becomes the diagonal matrix in equation 13.

The matrix equation 13 is of course equivalent to the two simple equations

\ddot{x}_I = -\frac{k}{m} x_I    (17)

\ddot{x}_{II} = -\frac{3k}{m} x_{II}    (18)

and you can see that valid solutions (assuming that the initial velocities are zero) are


x_I(t) = x_I(0) \cos(\omega_I t)    (19)

x_{II}(t) = x_{II}(0) \cos(\omega_{II} t)    (20)

where we have defined

\omega_I = \sqrt{k/m}    (21)

\omega_{II} = \sqrt{3k/m}    (22)

So, the $\{|I\rangle, |II\rangle\}$ basis is very special, since any motion of the system can be decomposed into two decoupled motions described by the eigenvectors $|I\rangle$ and $|II\rangle$. In other words, if the system has its initial conditions as some multiple of $|I\rangle$, it will never exhibit any motion of the type $|II\rangle$ at later times, and vice versa. In this context, the special vibrations described by $|I\rangle$ and $|II\rangle$ are called the normal modes of the system.

So, are we done? If we are content to work everything in the $\{|I\rangle, |II\rangle\}$ basis, yes. However, our original goal was to find the propagator $G(t)$ (from eq. 10) in the original $\{|1\rangle, |2\rangle\}$ basis. But notice that we already have $G(t)$ in the $\{|I\rangle, |II\rangle\}$ basis! We can simply rewrite equations 19 and 20 in matrix form as

\begin{pmatrix} x_I(t) \\ x_{II}(t) \end{pmatrix} = \begin{pmatrix} \cos(\omega_I t) & 0 \\ 0 & \cos(\omega_{II} t) \end{pmatrix} \begin{pmatrix} x_I(0) \\ x_{II}(0) \end{pmatrix}    (23)

So, the propagator in the $\{|I\rangle, |II\rangle\}$ basis is just

G_{I,II}(t) = \begin{pmatrix} \cos(\omega_I t) & 0 \\ 0 & \cos(\omega_{II} t) \end{pmatrix}    (24)

To obtain $G(t)$ in the original $\{|1\rangle, |2\rangle\}$ basis, we just have to apply the transformation

G(t) = U \, G_{I,II}(t) \, U^{\dagger}    (25)

where $G_{I,II}(t)$ is the diagonal propagator of eq. 24, noting that this is the reverse transform from that needed to bring $\Omega$ from the original to the eigenvector basis (so that $U$ and $U^{\dagger}$ swap). Working out $G(t)$ explicitly was a problem in Problem Set II.
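The propagator built this way can be checked against a direct numerical integration of eqs. 1 and 2. A minimal sketch, again with $k = m = 1$; the velocity-Verlet integrator used for the comparison is an illustrative choice, not part of the notes:

```python
import numpy as np

k, m = 1.0, 1.0
w_I, w_II = np.sqrt(k / m), np.sqrt(3 * k / m)          # eqs. 21-22
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # eq. 15

def G(t):
    """Propagator in the original (x1, x2) basis: G = U G_eig U^dagger (eqs. 24-25)."""
    G_eig = np.diag([np.cos(w_I * t), np.cos(w_II * t)])
    return U @ G_eig @ U.T  # U is real, so U^dagger = U^T

x0 = np.array([1.0, 0.0])  # initial displacements; initial velocities are zero

# Direct velocity-Verlet integration of x'' = Omega x (eqs. 1-2) up to t = 2.
Omega = (k / m) * np.array([[-2.0, 1.0], [1.0, -2.0]])
x, v, dt = x0.copy(), np.zeros(2), 1e-4
for _ in range(int(2.0 / dt)):
    a = Omega @ x
    x_new = x + v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + Omega @ x_new) * dt
    x = x_new

print(G(2.0) @ x0)  # analytic result at t = 2.0
print(x)            # numerical result; agrees with the analytic one closely
```

Note that $G(0)$ is the identity matrix, as it must be for any propagator.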

Finally, let us step back to the more general Dirac notation to point out that the general form of the solution is

|x(t)\rangle = G(t) |x(0)\rangle    (26)

and an actual calculation just requires choosing a particular basis set and figuring out the components of $|x(0)\rangle$ and $|x(t)\rangle$ and the matrix elements of the operator $G(t)$ in that basis.

Another representation of the operator $G(t)$ is clearly

G(t) = |I\rangle \cos(\omega_I t) \langle I| + |II\rangle \cos(\omega_{II} t) \langle II|    (27)

as you can check by evaluating the matrix elements in the $\{|I\rangle, |II\rangle\}$ basis to get eq. 24. Thus

|x(t)\rangle = G(t) |x(0)\rangle

             = |I\rangle \cos(\omega_I t) \langle I | x(0) \rangle + |II\rangle \cos(\omega_{II} t) \langle II | x(0) \rangle
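The outer-product (spectral) form of $G(t)$ in eq. 27 can be checked against the matrix form of eqs. 24-25. A minimal sketch, with $k = m = 1$ assumed as before:

```python
import numpy as np

k, m = 1.0, 1.0
w_I, w_II = np.sqrt(k / m), np.sqrt(3 * k / m)
ket_I = np.array([1.0, 1.0]) / np.sqrt(2.0)    # |I>, eq. 11
ket_II = np.array([1.0, -1.0]) / np.sqrt(2.0)  # |II>, eq. 12

def G_spectral(t):
    """Eq. 27: G(t) = |I> cos(w_I t) <I| + |II> cos(w_II t) <II|."""
    return (np.cos(w_I * t) * np.outer(ket_I, ket_I)
            + np.cos(w_II * t) * np.outer(ket_II, ket_II))

def G_matrix(t):
    """Eqs. 24-25: G(t) = U G_eig(t) U^dagger, with eigenvectors as columns of U."""
    U = np.column_stack([ket_I, ket_II])
    return U @ np.diag([np.cos(w_I * t), np.cos(w_II * t)]) @ U.T

t = 1.7  # any time will do
print(np.allclose(G_spectral(t), G_matrix(t)))  # True
```

The two forms agree at all times, which is just the spectral decomposition of the propagator: a diagonal matrix sandwiched between $U$ and $U^{\dagger}$ is the same thing as a sum of eigenvector outer products weighted by the diagonal entries.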