The inverse scattering transform and squared eigenfunctions for the nondegenerate 3 × 3 operator and its soliton structure. 2010 Inverse Problems 26 055005 (http://iopscience.iop.org/0266-5611/26/5/055005). Downloaded from IOPscience.


IOP PUBLISHING INVERSE PROBLEMS

Inverse Problems 26 (2010) 055005 (34pp) doi:10.1088/0266-5611/26/5/055005

The inverse scattering transform and squared eigenfunctions for the nondegenerate 3 × 3 operator and its soliton structure

D J Kaup and Robert A Van Gorder

Department of Mathematics, PO Box 161364, University of Central Florida, Orlando, FL 32816-1364, USA

E-mail: [email protected]

Received 30 October 2009, in final form 15 March 2010
Published 15 April 2010
Online at stacks.iop.org/IP/26/055005

Abstract
We develop a soliton perturbation theory for the non-degenerate 3 × 3 eigenvalue operator, with obvious applications to the three-wave resonant interaction. The key elements of an inverse scattering perturbation theory for integrable systems are the squared eigenfunctions and their adjoints. These functions serve as a mapping between variations in the potentials and variations in the scattering data. We also address the problem of the normalization of the Jost functions, how this affects the structure and solvability of the inverse scattering equations and the definition of the scattering data. We then explicitly provide the construction of the covering set of squared eigenfunctions and their adjoints, in terms of the Jost functions of the original eigenvalue problem. We also obtain, by a new and direct method (Yang and Kaup 2009 J. Math. Phys. 50 023504), the inner products and closure relations for these squared eigenfunctions and their adjoints. With this universal covering group, one would have tools to study the perturbations for any integrable system whose Lax pair contained the non-degenerate 3 × 3 eigenvalue operator, such as that found in the Lax pair of the integrable three-wave resonant interaction.

1. Introduction

Here we will consider the non-degenerate 3 × 3 eigenvalue problem

∂xV − iζJ ·V = −Q ·V, J =⎡⎣J1 0 0

0 J2 00 0 J3

⎤⎦ , Q =

⎡⎣ 0 Q12 Q13

Q21 0 Q23

Q31 Q32 0

⎤⎦ (1.1)

on the interval −∞ < x < +∞. We assume that J1 > J2 > J3 and that Tr(J ) = 0. Q(x)

is a potential matrix which vanishes like Q(x → ±∞) = o(1/x) for large x. The matrix

0266-5611/10/055005+34$30.00 © 2010 IOP Publishing Ltd Printed in the UK & the USA


$V(x)$ is a 3 × 3 solution matrix which contains Jost solutions as its columns. This eigenvalue problem has been used as an inverse scattering transform (IST) for the three-wave resonant interaction (3WRI) [22, 23, 39, 40]. Here, we shall assume no symmetry for the matrix Q and will take the six components of Q to be independent and uniquely different. Our goal will be to obtain the universal covering set for the squared eigenfunctions (SE) and adjoint squared eigenfunctions (ASE) of this eigenvalue problem [25].

The Lax pair for this nonlinear system was first presented in 1973 [39], along with the resonant soliton solution for the explosive and decay cases. In the explosive case, it clearly showed that there was a nonlinear instability whereby a singularity with infinite amplitudes would eventually develop. In 1975, and approximately at the same time, two manuscripts were submitted [22, 40] giving more details on this system and its solutions. The work by Kaup [22] presented a derivation of the IST for this system, giving a set of Marchenko equations, and described the solutions from the point of view of the Marchenko equations. In the work by Zakharov and Manakov [40], they constructed various soliton solutions and described the interactions from the point of view of these soliton solutions and the quasi-classical approximation. Each work made use of separable initial data for constructing the general scattering matrix, which could be done since each wave has a different group velocity. The final solution and the entire interaction could then be described in terms of the initial soliton and radiation (continuous spectra) content of each wave. In 1979, Kaup et al [23] published a review of the 3WRI wherein more detailed aspects of these solutions were given, along with comparisons and validations from numerical solutions. In a related work [29], Reiman extended these results to the case where the medium could contain spatial inhomogeneities and studied how they differed from the homogeneous solutions and how they modified those results. Interestingly, Reiman showed that the inhomogeneous case of the 3WRI was also an integrable system, solvable by the same IST used for the homogeneous 3WRI. Higher order 3WRI soliton solutions, particularly those corresponding to multiple zeros in the transmission coefficients, have recently been given in [33].

The 3 × 3 eigenvalue problem which we treat here is a special case of the more general N × N case which has been treated in [4, 15], and the perturbation theory of the same had also been treated in [15] by the use of Wronskian relations and Green's functions. Our work has arisen from collaborations with Gerdjikov and also with Yang, wherein the latter collaboration resulted in the recent perturbation work on the Sasa–Satsuma equation [26, 37]. The background for this approach has been discussed in recent works [25, 26], which arose out of work done during the 2008 Gallipoli Workshop [25], which itself followed from the work in [37], whereby the perturbation theory of the Sasa–Satsuma equation [31] was simplified. From that work, it then became possible to take a wider overview and describe it in a more general setting. It is to be noted that our approach is different from those used earlier [4, 15, 32, 42] in that, by taking a set of linear combinations of the Jost functions which are only meromorphic functions instead of analytic, it became possible to reduce the linear dispersion relations (LDRs) to a minimal set of three. The importance of this is that the coefficients in these LDRs can then be immediately identified as a minimal set of scattering data. It also serves to define exactly what the scattering data are. This approach will be applied herein to the general 3 × 3 eigenvalue problem. Simply stated, we have found another approach to the problem of determining the minimal scattering data and squared eigenfunctions, their inner products and their closure relation, in a form which has not been adequately detailed before.

The theory of, and particularly applications for, the three-wave interactions continue to be an area of active research. This interaction is so generic and universal that one can expect new applications to continue to appear. For example, Sun et al [35] have considered the three-wave


resonant interaction and related soliton excitations of the Bogoliubov quasi-particle excited in a disk-shaped Bose–Einstein condensate. Degasperis et al [8, 12, 13] consider the resonant interaction of three waves, motivated by their application to optical pulse propagation in quadratic nonlinear media. They found exact solutions of the three-wave resonant interaction when one wave was taken to be an asymptotically constant nonzero background, with the other two being localized pulses, allowing the three to interact and move with a common velocity (simultons). They were found to be stable when their velocity was greater than a critical value. Interactions of simultons with various localized pulses can give rise to the excitation (decay) of stable (unstable) simultons by means of the absorption (emission) of the energy carried by a localized pulse. It was found that the speed of such solitons can be continuously varied by means of adjusting the energy of the two bright pulses.

In [36], the adiabatic evolution of two and three resonantly interacting wave systems with nonlinear frequency and/or wave vector shifts is discussed, and a scheme of simultaneous adiabatic variation in the parameters is presented in such a way that any pair of initially equal energy trajectories continues to have the same energy at later times. Stenflo [34] has considered the resonant three-wave interactions in various plasma situations and has given general expressions for the coupling coefficients. Another application of 3WRI resonant solitons, for optical parametric amplification, has been described in [19].

Baronio et al [3] have theoretically and experimentally investigated the spatial dynamics of two diffractionless beams at frequencies ω1 and ω2 which mix to generate a field at the sum frequency ω3. It is found that, depending on the intensity of the beam, when the generated field at ω3 can sustain a 3WRI soliton, it decays into solitons at ω1 and ω2, exactly as predicted for the soliton decay case [22, 23, 39, 40]. The experimental findings of Baronio et al [3] thus demonstrate the possibility of reaching soliton regimes in non-diffractive 3WRI systems. As stated in [3], such '... nonlinear regimes could pave the way to the construction of novel systems for storing, retrieving and processing information in the optical and plasma domains'. Other applications include gap solitons, and Mak et al [27] have studied three-wave gap solitons in media equipped with a resonant grating, which gives rise to a strong effective dispersion or diffraction. The grating is resonant if its spacing is commensurable with the wavelength, which then leads to the resonant Bragg reflection of light; see [27] for details. Champneys and Malomed [9] found a rich spectrum of isolated solitons residing inside the continuous radiation spectrum in a simple model of the three-wave spatial interaction in a second-harmonic-generating planar optical waveguide equipped with a quasi-one-dimensional Bragg grating. In [10], the authors consider a new class of nonlinear optical interactions: consecutive interactions of waves with multiple frequencies, which can be realized in periodically inhomogeneous crystals; see [10] for details.

On the theoretical side, Alber et al [2] consider the geometric phases, reduction and Lie–Poisson structure for the resonant three-wave interaction, in order to 'put the three-wave interaction in the modern setting of geometric mechanics'. Buryak et al [7] consider optical solitons due to quadratic nonlinearities. While the primary focus of their work involves 2 × 2 systems, the authors acknowledge that the three-wave system has 'attracted much less effort in classification of stationary soliton families' than the 2 × 2 systems. Such a classification has been done for the 3 × 3 system and others [18] using classical Lie theory. There it was pointed out that there are actually two types of solitons which exist in the 3 × 3 case, which we shall be discussing later. The major qualitative difference between three- and two-wave equations comes from the fact that the three-wave case has an additional phase symmetry. As mentioned in [7], such a property has 'major importance in the stability theory of three-wave solitons ... but it may also affect the general structure of stationary solitons'. In particular, it is well known that one type of soliton in the 3 × 3 system is invariably unstable [22, 23, 39, 40]. The


spatio-temporal chaos in the three-wave interaction was studied in [11], while the transition from coherent to incoherent three-wave interactions with increasing bandwidth was studied by Robinson and Drysdale [30]. Meanwhile, Buryak and Kivshar [6] present families of two-parameter solitary waves with dispersion, for both (1 + 1)- and (2 + 1)-dimensional cases, and analyze their stability by deriving a novel type of analytical stability criterion for solitary waves with more than one internal parameter.

In achieving our goal of obtaining the universal covering set of the SE and ASE for this eigenvalue problem, we shall review its direct and inverse scattering problems, which will be done in sections 2 and 3, respectively. The reason for this review is that we shall construct the SE and ASE from the perturbations of these two problems. In this paper, we shall also reformulate and discuss the derivation as appropriate. There are four steps which are common to all these problems. The first step is known as the direct scattering problem, which is where one defines and analyzes what are known as Jost functions and the scattering matrix. Since this is a 3 × 3 system, there arises what we shall refer to as the 'middle' Jost functions. For these functions, there is the question of how best to normalize these states. There are two different and natural normalizations that one could use, and each one would give rise to a different set of linear dispersion relations, as well as a different set of scattering data. These points will be discussed in the second step, the inverse scattering problem, which is where one determines a method for reconstructing the potentials, given the scattering data. There we demonstrate that with our choice of the normalization of the middle Jost functions, the inverse scattering equations can be reduced to a minimal set of three and that the coefficients of this set will define a minimal set of scattering data, from which the full scattering matrix can be reconstructed. The last steps will be to obtain the variations as either the potential matrix or the scattering data are perturbed. In section 4, we take up the perturbation of the direct scattering problem, from which we will obtain the variations in the scattering data when the potentials are perturbed. In section 5, we will perturb the inverse scattering problem and then take up the opposite map: the variations in the potentials that arise when the scattering data are perturbed. The definition of an SE is that it is an eigenstate of the linearized evolution equations for the potentials; whence the perturbed potentials can be expanded in terms of the SE. In section 6 we will use these results to construct the SE and ASE, as well as obtain the inner products between the SE and ASE and their closure relation. In the present treatment, we will include the bound-state spectra in their most general form, consistent with compact support. And as a disclaimer, since prior work has detailed many of these aspects, less rigor will in general be used, except where our treatment differs from the preceding ones. For problems of this nature, the reader is referred to basic references such as [4, 15, 17, 28, 42].

2. The direct scattering problem

In the direct scattering problem, one addresses the solutions of the eigenvalue problem: what their analytical properties are, what the adjoint solutions and their properties are, what the scattering matrix and its properties are, what the features of the bound states are, if any, and what the fundamental analytical solutions and their adjoints are, etc. Each one of these topics we shall take up below, sometimes only briefly.

The relevant matrix form of the linear eigenvalue problem associated with the 3WRI is given by (1.1). We assume no symmetry in Q and take the six components of Q to be independent. We shall take the diagonal elements in J to be real and to satisfy $J_1 > J_2 > J_3$, as was done in the 3WRI problem [22, 23, 39]. $Q(x)$ is a potential matrix with vanishing diagonal entries. The matrix $V(x)$ is a 3 × 3 solution matrix which contains the Jost solutions


as its columns. For (1.1), we shall define the Jost solutions by their asymptotics as $x \to \pm\infty$. For $\zeta$ real, there are two standard sets, which are

$$\Phi(x \to -\infty) \to e^{i\zeta Jx}, \qquad \Psi(x \to +\infty) \to e^{i\zeta Jx}. \tag{2.1}$$

For each of these solution matrices, we have three linearly independent solutions. Therefore, each of these two solution matrices must have its columns linearly dependent on the other's columns. This can be expressed as

$$\Phi = \Psi \cdot S, \qquad \Psi = \Phi \cdot R, \tag{2.2}$$

where $S(\zeta)$ is the scattering matrix and $R(\zeta)$ is its inverse. As a consequence of the Wronskian relation, we will have S and R such that

$$\det S = 1, \qquad \det R = 1. \tag{2.3}$$

In other words, the $\Phi$'s and $\Psi$'s satisfy the linear relationships

$$\phi_j = \sum_{k=1}^{3} \psi_k S_{kj}, \qquad \psi_j = \sum_{k=1}^{3} \phi_k R_{kj}, \tag{2.4}$$

where $\phi_j$ is the $j$th column of $\Phi$ and similarly for $\psi_j$.

For the adjoint problem, which is an equivalent problem, we have

$$\partial_x V^A + i\zeta\, V^A \cdot J = V^A \cdot Q, \tag{2.5}$$

from which it follows that

$$\partial_x \left( V^A(x,\zeta) \cdot V(x,\zeta) \right) = 0. \tag{2.6}$$

The solutions to the adjoint problem are called adjoint Jost functions, which are the rows of $V^A$. As Jost functions, they need to have a normalization, which in our case of interest we take to be

$$\Phi^A(x \to -\infty) \to e^{-i\zeta Jx}, \qquad \Psi^A(x \to +\infty) \to e^{-i\zeta Jx}, \tag{2.7}$$

as in [22]. Then from (2.1), (2.6) and (2.7), we have that

$$\Psi^A \cdot \Psi = I_3 = \Phi^A \cdot \Phi, \qquad \text{for all } x \text{ and for all } \zeta \in \mathbb{R}, \tag{2.8}$$

where $I_3$ is the 3 × 3 unit matrix. Whence we may construct the adjoint solutions directly from the inverses of the matrices of the Jost solutions. As a consequence of this and (2.2), we have that

$$\Phi^A = R \cdot \Psi^A, \qquad \Psi^A = S \cdot \Phi^A. \tag{2.9}$$

As a consequence of (2.2), (2.6), (2.8) and (2.9),

$$S = \Psi^A \cdot \Phi, \qquad R = \Phi^A \cdot \Psi, \tag{2.10}$$

and $\partial_x S = 0$. Further, since $R = S^{-1}$ and $\det S = 1$, we have that

$$R = \begin{bmatrix} S_{33}S_{22} - S_{32}S_{23} & S_{32}S_{13} - S_{33}S_{12} & S_{23}S_{12} - S_{22}S_{13} \\ S_{31}S_{23} - S_{33}S_{21} & S_{33}S_{11} - S_{31}S_{13} & S_{21}S_{13} - S_{23}S_{11} \\ S_{32}S_{21} - S_{31}S_{22} & S_{31}S_{12} - S_{32}S_{11} & S_{22}S_{11} - S_{21}S_{12} \end{bmatrix}. \tag{2.11}$$

This last equation will frequently prove to be quite useful in the simplification of results we obtain in subsequent sections.
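Since $\det S = 1$, the entries of (2.11) are just the cofactors of S. A quick numerical sanity check of this (our own illustration, not part of the paper), written with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex 3x3 scattering matrix, rescaled so that det S = 1,
# as required by the Wronskian relation (2.3).
S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S /= np.linalg.det(S) ** (1.0 / 3.0)

# Entry-by-entry cofactor expression (2.11) for R = S^{-1}
# (0-based indices: S[0,0] is S_11, etc.).
R = np.array([
    [S[2,2]*S[1,1] - S[2,1]*S[1,2], S[2,1]*S[0,2] - S[2,2]*S[0,1], S[1,2]*S[0,1] - S[1,1]*S[0,2]],
    [S[2,0]*S[1,2] - S[2,2]*S[1,0], S[2,2]*S[0,0] - S[2,0]*S[0,2], S[1,0]*S[0,2] - S[1,2]*S[0,0]],
    [S[2,1]*S[1,0] - S[2,0]*S[1,1], S[2,0]*S[0,1] - S[2,1]*S[0,0], S[1,1]*S[0,0] - S[1,0]*S[0,1]],
])

assert np.allclose(R, np.linalg.inv(S))
assert np.isclose(np.linalg.det(R), 1.0)
```

Without the normalization $\det S = 1$, the cofactor matrix would equal $\det S \cdot S^{-1}$ rather than $S^{-1}$ itself.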


We will use ± superscripts to indicate the regions of analyticity and subscripts to indicate the Jost solutions, which are columns for the regular Jost solutions and rows for the adjoint Jost solutions. For this system, we have

$$\Phi = [\phi_1^-, \phi_2, \phi_3^+], \qquad \Psi = [\psi_1^+, \psi_2, \psi_3^-], \qquad \Phi^A = \begin{bmatrix} \phi_1^{A+} \\ \phi_2^{A} \\ \phi_3^{A-} \end{bmatrix}, \qquad \Psi^A = \begin{bmatrix} \psi_1^{A-} \\ \psi_2^{A} \\ \psi_3^{A+} \end{bmatrix}, \tag{2.12}$$

where those components without a ± superscript in general only exist on the real ζ-axis. (Strictly speaking, it is not the Jost function which is analytic in ζ, uniformly for all x, but the various columns in products such as $\Phi \cdot e^{-i\zeta Jx}$ and the rows of $e^{i\zeta Jx} \cdot \Phi^A$. With this understood, we shall refer to the appropriate Jost functions as being analytic in ζ if this product is so analytic.) How to determine the analytical properties of the Jost solutions in general is detailed in the above references and in textbooks such as [32, 42].

The analytical properties of the scattering matrix S follow from those of $\Psi^A$ and $\Phi$, to wit:

$$S = \Psi^A \cdot \Phi = \begin{bmatrix} \psi_1^{A-} \cdot \phi_1^- & \psi_1^{A-} \cdot \phi_2 & \psi_1^{A-} \cdot \phi_3^+ \\ \psi_2^{A} \cdot \phi_1^- & \psi_2^{A} \cdot \phi_2 & \psi_2^{A} \cdot \phi_3^+ \\ \psi_3^{A+} \cdot \phi_1^- & \psi_3^{A+} \cdot \phi_2 & \psi_3^{A+} \cdot \phi_3^+ \end{bmatrix} = \begin{bmatrix} S_{11}^- & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33}^+ \end{bmatrix}. \tag{2.13}$$

Similarly for R, we have that

$$R = \Phi^A \cdot \Psi = \begin{bmatrix} \phi_1^{A+} \cdot \psi_1^+ & \phi_1^{A+} \cdot \psi_2 & \phi_1^{A+} \cdot \psi_3^- \\ \phi_2^{A} \cdot \psi_1^+ & \phi_2^{A} \cdot \psi_2 & \phi_2^{A} \cdot \psi_3^- \\ \phi_3^{A-} \cdot \psi_1^+ & \phi_3^{A-} \cdot \psi_2 & \phi_3^{A-} \cdot \psi_3^- \end{bmatrix} = \begin{bmatrix} R_{11}^+ & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33}^- \end{bmatrix}. \tag{2.14}$$

Following the outline in [25], in each region of analyticity, we next construct three linearly independent solutions of (1.1). For the real axis, those could be either $\Phi$ or $\Psi$ above, or a suitable mixture. For each half-plane, we already have two of these Jost functions. The third can be constructed from a linear combination of the Jost functions. How to construct them in general has been discussed in [25], following the method devised by Shabat [32]. Such solutions were constructed in [22]. From the linear relations (2.4) and equations (2.36) and (2.37) of [22], one found that the linear combinations

$$\chi_2^+ = R_{11}^+ \psi_2 - R_{12}\psi_1^+ = S_{33}^+ \phi_2 - S_{32}\phi_3^+, \tag{2.15}$$

$$\chi_2^{A+} = S_{33}^+ \psi_2^A - S_{23}\psi_3^{A+} = R_{11}^+ \phi_2^A - R_{21}\phi_1^{A+} \tag{2.16}$$

would be analytic in the UHP, while in the LHP, the linear combinations

$$\chi_2^- = R_{33}^- \psi_2 - R_{32}\psi_3^- = S_{11}^- \phi_2 - S_{12}\phi_1^-, \tag{2.17}$$

$$\chi_2^{A-} = S_{11}^- \psi_2^A - S_{21}\psi_1^{A-} = R_{33}^- \phi_2^A - R_{23}\phi_3^{A-} \tag{2.18}$$

would be analytic. These χ functions are solutions of the linear operator (1.1) or its adjoint. They have been referred to as the 'fundamental analytical solutions' (FAS) [16]. However, at any zero of, say, $R_{11}^+$, this FAS does not form a linearly independent set. That could be corrected by dividing $\chi_2^+$ by $R_{11}^+$, but then with the consequence of the result (times $e^{-i\zeta Jx}$) becoming a meromorphic function instead of an analytic function. That may seem to be a small difference; however, the resulting LDRs are quite different, with the meromorphic set being easily broken down into a minimal set of three.


Another consideration is that there are only two possible directions for inversion: one could invert about x = +∞ or one could invert about x = −∞. If one wished to invert about x = +∞, then it would be most convenient to use Jost functions normalized to unity at x = +∞. If one wished to invert about x = −∞, then it should be most convenient to use Jost functions normalized to unity at x = −∞. In this way, one is always inverting from a known value (the normalized values of the Jost functions at that end) toward the other end. Furthermore, since the determinant of the Jost functions (dotted with $e^{-i\zeta Jx}$) is independent of x, if the trace of J is zero, one is ensured of a complete vector basis for all x. With this principle in mind, if we wished to invert about x = +∞, then we should normalize $\chi_2^\pm$ so that the coefficient of $\psi_2$ is unity. If we wished to invert about x = −∞, then we should normalize the coefficient of $\phi_2$ to be unity. Let us note that this rule is arbitrary; however, it does give us a set of minimal LDRs from which one can select a set of scattering data which is minimal and complete.

Let us choose to do the inversion about x = +∞, in which case, by the above rule, we should normalize these Jost solutions (and rename them to avoid confusion) as

$$\mu_2^+ = \chi_2^+ / R_{11}^+ = \psi_2 - \frac{R_{12}}{R_{11}^+}\,\psi_1^+, \qquad \mu_2^- = \chi_2^- / R_{33}^- = \psi_2 - \frac{R_{32}}{R_{33}^-}\,\psi_3^-, \tag{2.19}$$

where the $\mu_2$ will be meromorphic instead of analytic. As a comment on notation, we shall use $\chi^\pm$ to represent the analytic version of the Jost functions and $\mu^\pm$ to represent the meromorphic version. It was also this meromorphic set of Jost functions which leads to the same LDRs as given in the original 3WRI works [22, 23, 39, 40]. It should also be noted that the solution matrix used in [4] was also meromorphic.

Taking a similar normalization for the adjoints, we have

$$\mu_2^{A+} = \psi_2^A - \frac{S_{23}}{S_{33}^+}\,\psi_3^{A+}, \qquad \mu_2^{A-} = \psi_2^A - \frac{S_{21}}{S_{11}^-}\,\psi_1^{A-}, \tag{2.20}$$

from which we can define the set of μ solutions, $\mu^+$ and $\mu^-$, and their adjoints, $\mu^{A+}$ and $\mu^{A-}$, as follows:

$$\mu^+ = [\psi_1^+, \mu_2^+, \phi_3^+], \qquad \mu^- = [\phi_1^-, \mu_2^-, \psi_3^-], \qquad \mu^{A+} = \begin{bmatrix} \phi_1^{A+} \\ \mu_2^{A+} \\ \psi_3^{A+} \end{bmatrix}, \qquad \mu^{A-} = \begin{bmatrix} \psi_1^{A-} \\ \mu_2^{A-} \\ \phi_3^{A-} \end{bmatrix}. \tag{2.21}$$

From these solutions, one may then construct what we shall call the 'fundamental meromorphic solutions' (FMS), in analogy with the 'fundamental analytical solutions' (FAS) of Gerdjikov [16], and will designate the FMS by Ξ. They are defined by

$$\Xi^\pm = \mu^\pm \cdot e^{-i\zeta Jx}, \qquad \Xi^{A\pm} = e^{i\zeta Jx} \cdot \mu^{A\pm}. \tag{2.22}$$

In contrast to the μ's, the Ξ's are meromorphic functions of ζ uniformly in x. Thus, $\Xi^+$ provides us with a set of three linearly independent, meromorphic solutions in the upper-half plane (UHP) and existing on the real ζ-axis, while $\Xi^-$ provides us with another similar set on the real ζ-axis and in the lower-half plane (LHP). Similarly for their adjoints, $\Xi^{A\pm}$.

On the real axis, which is the boundary between the two regions of analyticity, it follows that $\mu^+$ and $\mu^-$ will be related linearly. To obtain this relation, we will make use of the relation between $\Psi$ and $\mu^\pm$ on the real axis, $\mu^\pm = \Psi \cdot A^\pm$, where the A's are the triangular matrices

$$A^- = \begin{bmatrix} S_{11}^- & 0 & 0 \\ S_{21} & 1 & 0 \\ S_{31} & -\dfrac{R_{32}}{R_{33}^-} & 1 \end{bmatrix}, \qquad A^+ = \begin{bmatrix} 1 & -\dfrac{R_{12}}{R_{11}^+} & S_{13} \\ 0 & 1 & S_{23} \\ 0 & 0 & S_{33}^+ \end{bmatrix}. \tag{2.23}$$
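The triangular factorization (2.23) can be checked numerically at the matrix level. The sketch below is our own illustration (not from the paper): random arrays simply stand in for the Jost matrices at one fixed x and ζ, with $\det S = 1$, $\Phi = \Psi \cdot S$ and $R = S^{-1}$, and the columns of $\Psi \cdot A^\pm$ are compared against (2.19) and (2.21).

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_c(shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Stand-ins for the quantities at one fixed (x, zeta): an invertible Psi,
# a scattering matrix with det S = 1, Phi = Psi . S, and R = S^{-1}.
Psi = rand_c((3, 3))
S = rand_c((3, 3))
S /= np.linalg.det(S) ** (1.0 / 3.0)
Phi = Psi @ S
R = np.linalg.inv(S)

# Triangular factors of (2.23).
A_minus = np.array([[S[0,0], 0, 0], [S[1,0], 1, 0], [S[2,0], -R[2,1]/R[2,2], 1]])
A_plus  = np.array([[1, -R[0,1]/R[0,0], S[0,2]], [0, 1, S[1,2]], [0, 0, S[2,2]]])

# mu^+ = [psi_1, mu_2^+, phi_3] and mu^- = [phi_1, mu_2^-, psi_3], with the
# middle columns normalized as in (2.19).
mu_plus  = np.column_stack([Psi[:,0], Psi[:,1] - (R[0,1]/R[0,0]) * Psi[:,0], Phi[:,2]])
mu_minus = np.column_stack([Phi[:,0], Psi[:,1] - (R[2,1]/R[2,2]) * Psi[:,2], Psi[:,2]])

assert np.allclose(mu_plus, Psi @ A_plus)
assert np.allclose(mu_minus, Psi @ A_minus)
```

Note that the first and third columns of $\Psi \cdot A^-$ reproduce $\phi_1 = \sum_k \psi_k S_{k1}$ and $\psi_3$, confirming that (2.23) encodes exactly the column relations (2.4) and (2.19).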


In [26] it was demonstrated how to construct the following matrices from the above upper and lower triangular matrices:

$$\mu^+ = \mu^- \cdot T, \qquad \mu^- = \mu^+ \cdot T^{-1}, \qquad T = A_-^{-1} \cdot A_+, \tag{2.24}$$

where

$$T = \begin{bmatrix} \dfrac{1}{S_{11}^-} & -\dfrac{R_{12}}{S_{11}^- R_{11}^+} & \dfrac{S_{13}}{S_{11}^-} \\ -\dfrac{S_{21}}{S_{11}^-} & 1 + \dfrac{R_{12}S_{21}}{S_{11}^- R_{11}^+} & -\dfrac{R_{23}}{S_{11}^-} \\ \dfrac{R_{31}}{R_{33}^-} & -\dfrac{S_{32}}{R_{11}^+ R_{33}^-} & \dfrac{1}{R_{33}^-} \end{bmatrix}, \qquad T^{-1} = \begin{bmatrix} \dfrac{1}{R_{11}^+} & -\dfrac{S_{12}}{R_{11}^+ R_{33}^-} & \dfrac{R_{13}}{R_{11}^+} \\ -\dfrac{R_{21}}{S_{33}^+} & 1 + \dfrac{R_{32}S_{23}}{S_{33}^+ R_{33}^-} & -\dfrac{S_{23}}{S_{33}^+} \\ \dfrac{S_{31}}{S_{33}^+} & -\dfrac{R_{32}}{R_{33}^- S_{33}^+} & \dfrac{1}{S_{33}^+} \end{bmatrix}. \tag{2.25}$$
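The closed-form entries of (2.25) can be verified against the definition $T = A_-^{-1} \cdot A_+$ of (2.24). The following sketch (ours, assuming only that $\det S = 1$ so that $R = S^{-1}$) checks both matrices entry by entry:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S /= np.linalg.det(S) ** (1.0 / 3.0)          # enforce det S = 1
R = np.linalg.inv(S)

A_minus = np.array([[S[0,0], 0, 0], [S[1,0], 1, 0], [S[2,0], -R[2,1]/R[2,2], 1]])
A_plus  = np.array([[1, -R[0,1]/R[0,0], S[0,2]], [0, 1, S[1,2]], [0, 0, S[2,2]]])

T = np.linalg.solve(A_minus, A_plus)          # T = A_-^{-1} . A_+, eq. (2.24)

# Closed forms of (2.25), entry by entry (0-based indices: S[0,0] is S_11, etc.).
T_closed = np.array([
    [1/S[0,0],       -R[0,1]/(S[0,0]*R[0,0]),           S[0,2]/S[0,0]],
    [-S[1,0]/S[0,0], 1 + R[0,1]*S[1,0]/(S[0,0]*R[0,0]), -R[1,2]/S[0,0]],
    [R[2,0]/R[2,2],  -S[2,1]/(R[0,0]*R[2,2]),           1/R[2,2]],
])
Tinv_closed = np.array([
    [1/R[0,0],       -S[0,1]/(R[0,0]*R[2,2]),           R[0,2]/R[0,0]],
    [-R[1,0]/S[2,2], 1 + R[2,1]*S[1,2]/(S[2,2]*R[2,2]), -S[1,2]/S[2,2]],
    [S[2,0]/S[2,2],  -R[2,1]/(R[2,2]*S[2,2]),           1/S[2,2]],
])

assert np.allclose(T, T_closed)
assert np.allclose(np.linalg.inv(T), Tinv_closed)
```

The simplifications that collapse the raw products $A_-^{-1}\cdot A_+$ into these compact entries all rest on the cofactor relation (2.11).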

One observes that T and $T^{-1}$ each decompose into two parts, namely

$$T = P^- + \Sigma, \qquad T^{-1} = P^+ + \Delta, \tag{2.26}$$

where

$$P^- = \begin{bmatrix} \dfrac{1}{S_{11}^-} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \dfrac{1}{R_{33}^-} \end{bmatrix}, \qquad P^+ = \begin{bmatrix} \dfrac{1}{R_{11}^+} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \dfrac{1}{S_{33}^+} \end{bmatrix}, \tag{2.27}$$

$$\Sigma = \begin{bmatrix} 0 & -\dfrac{R_{12}}{S_{11}^- R_{11}^+} & \dfrac{S_{13}}{S_{11}^-} \\ -\dfrac{S_{21}}{S_{11}^-} & \dfrac{R_{12}S_{21}}{S_{11}^- R_{11}^+} & -\dfrac{R_{23}}{S_{11}^-} \\ \dfrac{R_{31}}{R_{33}^-} & -\dfrac{S_{32}}{R_{11}^+ R_{33}^-} & 0 \end{bmatrix}, \tag{2.28}$$

$$\Delta = \begin{bmatrix} 0 & -\dfrac{S_{12}}{R_{11}^+ R_{33}^-} & \dfrac{R_{13}}{R_{11}^+} \\ -\dfrac{R_{21}}{S_{33}^+} & \dfrac{R_{32}S_{23}}{S_{33}^+ R_{33}^-} & -\dfrac{S_{23}}{S_{33}^+} \\ \dfrac{S_{31}}{S_{33}^+} & -\dfrac{R_{32}}{R_{33}^- S_{33}^+} & 0 \end{bmatrix}, \tag{2.29}$$

and where the $P^\pm$'s are meromorphic in the appropriate half-plane while the matrices Σ and Δ generally only exist on the real line.

As we shall see later, the products between the μ's and their adjoints will be important components in the solution of the perturbed inverse scattering problem. To obtain those we will make use of the relations between $\Psi^A$ and $\mu^{A\pm}$, $\mu^{A\pm} = A^A_\pm \cdot \Psi^A$, where these triangular matrices are

$$A^A_- = \begin{bmatrix} 1 & 0 & 0 \\ -\dfrac{S_{21}}{S_{11}^-} & 1 & 0 \\ R_{31} & R_{32} & R_{33} \end{bmatrix}, \qquad A^A_+ = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ 0 & 1 & -\dfrac{S_{23}}{S_{33}^+} \\ 0 & 0 & 1 \end{bmatrix}. \tag{2.30}$$


The possible combinations of interest are

$$\mu^{A+}\cdot\mu^+ = A^A_+\cdot A^+ = (P^+)^{-1}, \qquad \mu^{A-}\cdot\mu^- = A^A_-\cdot A^- = (P^-)^{-1}, \tag{2.31}$$

$$\mu^{A+}\cdot\mu^- = A^A_+\cdot A^- = \begin{bmatrix} 1 & -\dfrac{S_{12}}{R_{33}^-} & R_{13} \\ -\dfrac{R_{21}}{S_{33}^+} & 1 + \dfrac{S_{23}R_{32}}{S_{33}^+ R_{33}^-} & -\dfrac{S_{23}}{S_{33}^+} \\ S_{31} & -\dfrac{R_{32}}{R_{33}^-} & 1 \end{bmatrix} = I_3 + (P^+)^{-1}\Delta, \tag{2.32}$$

$$\mu^{A-}\cdot\mu^+ = A^A_-\cdot A^+ = \begin{bmatrix} 1 & -\dfrac{R_{12}}{R_{11}^+} & S_{13} \\ -\dfrac{S_{21}}{S_{11}^-} & 1 + \dfrac{R_{12}S_{21}}{R_{11}^+ S_{11}^-} & -\dfrac{R_{23}}{S_{11}^-} \\ R_{31} & -\dfrac{S_{32}}{R_{11}^+} & 1 \end{bmatrix} = I_3 + (P^-)^{-1}\Sigma. \tag{2.33}$$
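These product identities are purely algebraic consequences of $\det S = 1$, so they can be confirmed numerically; a sketch (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S /= np.linalg.det(S) ** (1.0 / 3.0)          # enforce det S = 1
R = np.linalg.inv(S)

# Triangular factors (2.23) and their adjoint counterparts (2.30).
A_minus  = np.array([[S[0,0], 0, 0], [S[1,0], 1, 0], [S[2,0], -R[2,1]/R[2,2], 1]])
A_plus   = np.array([[1, -R[0,1]/R[0,0], S[0,2]], [0, 1, S[1,2]], [0, 0, S[2,2]]])
AA_minus = np.array([[1, 0, 0], [-S[1,0]/S[0,0], 1, 0], [R[2,0], R[2,1], R[2,2]]])
AA_plus  = np.array([[R[0,0], R[0,1], R[0,2]], [0, 1, -S[1,2]/S[2,2]], [0, 0, 1]])

P_plus  = np.diag([1/R[0,0], 1, 1/S[2,2]])
P_minus = np.diag([1/S[0,0], 1, 1/R[2,2]])

# (2.31): the like-signed products collapse to (P^{+-})^{-1}.
assert np.allclose(AA_plus @ A_plus, np.linalg.inv(P_plus))
assert np.allclose(AA_minus @ A_minus, np.linalg.inv(P_minus))

# (2.32)-(2.33): the mixed products give I3 plus (P^{+-})^{-1} times the
# off-diagonal parts Sigma = T - P^- and Delta = T^{-1} - P^+ of (2.26).
T = np.linalg.solve(A_minus, A_plus)
Sigma = T - P_minus
Delta = np.linalg.inv(T) - P_plus
assert np.allclose(AA_plus @ A_minus, np.eye(3) + np.linalg.inv(P_plus) @ Delta)
assert np.allclose(AA_minus @ A_plus, np.eye(3) + np.linalg.inv(P_minus) @ Sigma)
```

The mixed-product identities follow in one line once (2.31) is granted: $A^A_+\cdot A^- = (P^+)^{-1}(A^+)^{-1}A^- = (P^+)^{-1}T^{-1} = I_3 + (P^+)^{-1}\Delta$, and similarly for (2.33).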

Equations (2.31) allow us to construct the inverses of the μ's in terms of other fundamental matrices, while those in (2.32) and (2.33) will be useful when we discuss perturbations.

Not surprisingly, we shall also have need of the adjoint relationships. These will follow from (2.24), (2.25) and (2.31), giving

$$\mu^{A+} = U \cdot \mu^{A-}, \qquad \mu^{A-} = U^{-1} \cdot \mu^{A+}, \qquad \text{where } U = (P^+)^{-1} \cdot T^{-1} \cdot P^-, \tag{2.34}$$

which calculates out to be

$$U = \begin{bmatrix} \dfrac{1}{S_{11}^-} & -\dfrac{S_{12}}{R_{33}^-} & \dfrac{R_{13}}{R_{33}^-} \\ -\dfrac{R_{21}}{S_{11}^- S_{33}^+} & 1 + \dfrac{R_{32}S_{23}}{S_{33}^+ R_{33}^-} & -\dfrac{S_{23}}{R_{33}^- S_{33}^+} \\ \dfrac{S_{31}}{S_{11}^-} & -\dfrac{R_{32}}{R_{33}^-} & \dfrac{1}{R_{33}^-} \end{bmatrix}, \qquad U^{-1} = \begin{bmatrix} \dfrac{1}{R_{11}^+} & -\dfrac{R_{12}}{R_{11}^+} & \dfrac{S_{13}}{S_{33}^+} \\ -\dfrac{S_{21}}{R_{11}^+ S_{11}^-} & 1 + \dfrac{R_{12}S_{21}}{S_{11}^- R_{11}^+} & -\dfrac{R_{23}}{S_{11}^- S_{33}^+} \\ \dfrac{R_{31}}{R_{11}^+} & -\dfrac{S_{32}}{R_{11}^+} & \dfrac{1}{S_{33}^+} \end{bmatrix}. \tag{2.35}$$

The remaining part of the direct scattering problem is to detail the asymptotics of the Jost functions as one approaches any essential singularity on the boundary of the region of analyticity. There is only one essential singularity, at $|\zeta| = \infty$, in this problem. Taking ζ to be real, Φ and Ψ have a common asymptotic expansion, which is

$$\Phi, \Psi = \left( I_3 + iB^{(1)}/\zeta + B^{(2)}/\zeta^2 + \cdots \right) \cdot e^{i\zeta Jx} \qquad \text{as } |\zeta| \to \infty. \tag{2.36}$$

In terms of $\Xi^\pm$, this leads to

$$\Xi^\pm(|\zeta| \to \infty) = I_3 + O(1/\zeta), \tag{2.37}$$

in the appropriate half-plane. One finds that $B^{(1)}$ can be given by

$$[B^{(1)}, J] = Q \qquad \text{and} \qquad \partial_x B^{(1)} - Q \cdot B^{(1)} = [B^{(2)}, J]. \tag{2.38}$$

The first equation will determine the parts of $B^{(1)}$ which do not commute with J (and are linear in the components of Q), while the second equation will determine those parts of $B^{(1)}$ that


do commute with J. These latter parts will be spatial integrals of quadratic products of the components of Q. Solving the first equation gives

$$B^{(1)} = \begin{bmatrix} X & \dfrac{Q_{12}}{J_1 - J_2} & \dfrac{Q_{13}}{J_1 - J_3} \\ \dfrac{Q_{21}}{J_2 - J_1} & X & \dfrac{Q_{23}}{J_2 - J_3} \\ \dfrac{Q_{31}}{J_3 - J_1} & \dfrac{Q_{32}}{J_3 - J_2} & X \end{bmatrix}, \tag{2.39}$$

where the X's represent the part of the solution which would be obtained from the second equation in (2.38), which we shall not need and which, as noted above, are integrals over quadratic products of the components of the potential matrix Q.

In this section, we have discussed the direct scattering problem for (1.1). We have obtained various combinations of solutions which we shall require later, and we have delineated the scattering matrix S and its inverse R, as well as the FMS and their inner products and relations. Of course, Q has to satisfy certain localization conditions in order for the assumed solutions for S and R to exist. However, those conditions are discussed elsewhere (see, for instance, [17, 42]). Next we shall take up the inverse scattering problem, with which we can detail the scattering data and also detail how they are related to the scattering matrix. The standard procedure for reconstructing Q will also be mentioned.

3. The inverse scattering problem

Let us now consider the inverse scattering problem. First, we note that equation (2.24) can be viewed as a Riemann–Hilbert problem upon using (2.25)–(2.29). Consider Cauchy's integral theorem applied to $\Xi^+$ in the UHP, where it is analytic. Its asymptotics are given by (2.37). It is obvious that for ζ in the UHP:

$$
\chi^+(\zeta) - \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) \cdot \left[0,\; E_{12}\left(\zeta^+_{11,k}\right),\; 0\right]
= \frac{1}{2} I_3 + \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\,\chi^+(\zeta'), \tag{3.1}
$$

where $[a, b, c]$ is a row vector, $R$ indicates that the path of the integral is along the real axis, $N^+_{11}$ is the number of zeros of $R^+_{11}(\zeta)$ in the UHP (assumed finite), $\zeta^+_{11,k}$ is the $k$th zero of $R^+_{11}(\zeta)$, $R^{+\prime}_{11,k}$ is the value of $dR^+_{11}(\zeta)/d\zeta$ at that zero, and $R_{12,k}$, in the case of compact support, is the value of $R_{12}$ at this zero (and in the case of non-compact support, is just a coefficient). Finally,
$$
E_{pq}(\zeta) = \exp\left[i\zeta(J_p - J_q)x\right]. \tag{3.2}
$$

We remark that the last term on the left-hand side is a consequence of choosing the normalization of $\mu^+_2$ to be as in (2.19), wherein $\mu^\pm_2$ (and $\chi^\pm_2$) are meromorphic in general. Consequently, the entire left-hand side is that part of $\chi^+$ which is analytic in the UHP.
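The exponentials (3.2) obey the telescoping rule $E_{pq}E_{qr} = E_{pr}$, which is what is used later when products such as $E_{32}E_{21}$ collapse to $E_{31}$. A one-line numerical sanity check (sample values are ours):

```python
import cmath

# Sanity check of (3.2): E_pq(zeta) = exp[i*zeta*(J_p - J_q)*x].
# The composition rule E_pq * E_qr = E_pr (telescoping exponents) is what
# lets products such as E_32 * E_21 collapse to E_31 further below.
Jd = [1.0, -0.3, 2.2]          # sample eigenvalues of J (illustrative only)
x, zeta = 0.7, 0.4 + 0.1j      # sample point and spectral parameter

def E(p, q):
    """E_pq(zeta) at fixed x, with 1-based indices as in the text."""
    return cmath.exp(1j * zeta * (Jd[p - 1] - Jd[q - 1]) * x)

print(abs(E(3, 2) * E(2, 1) - E(3, 1)) < 1e-12)   # True
```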

Along the real axis, from (2.25), we have
$$
\chi^+ = \chi^- \cdot P^- + \chi^- \cdot e^{i\zeta Jx} \cdot \Delta \cdot e^{-i\zeta Jx}, \tag{3.3}
$$
where we have used the fact that
$$
[J, P^\pm] \equiv J \cdot P^\pm - P^\pm \cdot J = 0. \tag{3.4}
$$

Since $S^-_{11}$ and $R^-_{33}$ are analytic in the LHP (see (2.13) and (2.14)), the first term in (3.3) can be extended into the LHP. That term will have poles wherever $S^-_{11}(\zeta)$ or $R^-_{33}(\zeta)$ has zeros in the LHP. Whence

$$
\begin{aligned}
\frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\,\chi^-(\zeta') \cdot P^-(\zeta')
= \frac{1}{2} I_3
&- \sum_{k=1}^{N^-_{11}} \frac{1}{\zeta^-_{11,k} - \zeta}\,\frac{1}{S^{-\prime}_{11,k}}\,\chi^-_1\left(\zeta^-_{11,k}\right) \cdot [1,\, 0,\, 0] \\
&- \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{1}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) \cdot \left[0,\; -R_{32,k} E_{32}\left(\zeta^-_{33,k}\right),\; 1\right],
\end{aligned} \tag{3.5}
$$

where $N^-_{11}$ is the number of zeros of $S^-_{11}(\zeta)$ in the LHP (assumed finite), $\zeta^-_{11,k}$ is the $k$th zero of $S^-_{11}(\zeta)$, $S^{-\prime}_{11,k}$ is $dS^-_{11}/d\zeta$ evaluated at $\zeta = \zeta^-_{11,k}$, $N^-_{33}$ is the number of zeros of $R^-_{33}(\zeta)$ in the LHP (assumed finite), $\zeta^-_{33,k}$ is the $k$th zero of $R^-_{33}(\zeta)$, $R^{-\prime}_{33,k}$ is $dR^-_{33}/d\zeta$ evaluated at $\zeta = \zeta^-_{33,k}$, and $R_{32,k}$ is a coefficient which, in the case of compact support, is equal to $R_{32}(\zeta^-_{33,k})$ and otherwise is just a constant. Note that the middle column of the last term is a consequence of any poles in $\mu^-_2$, as in (3.1). The first term in (3.5) comes from the integral along an infinite semi-circle in the LHP. Note that the above results are valid even if $S^-_{11}$ and $R^-_{33}$ have common zeros.

Putting (3.1), (3.3) and (3.5) together gives, for ζ in the UHP,

$$
\begin{aligned}
\chi^+(\zeta) &- \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) \cdot \left[0,\; E_{12}\left(\zeta^+_{11,k}\right),\; 0\right] \\
&= I_3 + \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\,\chi^-(\zeta') \cdot e^{i\zeta' Jx} \cdot \Delta(\zeta') \cdot e^{-i\zeta' Jx} \\
&\quad - \sum_{k=1}^{N^-_{11}} \frac{1}{\zeta^-_{11,k} - \zeta}\,\frac{1}{S^{-\prime}_{11,k}}\,\chi^-_1\left(\zeta^-_{11,k}\right) \cdot [1,\, 0,\, 0] \\
&\quad - \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{1}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) \cdot \left[0,\; -R_{32,k} E_{32}\left(\zeta^-_{33,k}\right),\; 1\right],
\end{aligned} \tag{3.6}
$$

where $R_{12,k}$ is a coefficient which, in the case of compact support, is equal to $R_{12}(\zeta^+_{11,k})$ and otherwise is just a constant. Thus, we have the analytic part of $\chi^+$ given in terms of $\chi^-$ on the real axis and at each zero of $S^-_{11}$ and $R^-_{33}$ in the LHP.

Similarly, starting from the second part of (2.24), we obtain, for ζ in the LHP,

$$
\begin{aligned}
\chi^-(\zeta) &- \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{R_{32,k}}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) \cdot \left[0,\; E_{32}\left(\zeta^-_{33,k}\right),\; 0\right] \\
&= I_3 - \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\,\chi^+(\zeta') \cdot e^{i\zeta' Jx} \cdot \Gamma(\zeta') \cdot e^{-i\zeta' Jx} \\
&\quad - \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{1}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) \cdot \left[1,\; -R_{12,k} E_{12}\left(\zeta^+_{11,k}\right),\; 0\right] \\
&\quad - \sum_{k=1}^{N^+_{33}} \frac{1}{\zeta^+_{33,k} - \zeta}\,\frac{1}{S^{+\prime}_{33,k}}\,\chi^+_3\left(\zeta^+_{33,k}\right) \cdot [0,\, 0,\, 1],
\end{aligned} \tag{3.7}
$$

where $N^+_{33}$ is the number of zeros of $S^+_{33}(\zeta)$ in the UHP (assumed finite), $\zeta^+_{33,k}$ is the $k$th zero of $S^+_{33}(\zeta)$ and $S^{+\prime}_{33,k}$ is $dS^+_{33}/d\zeta$ evaluated at $\zeta = \zeta^+_{33,k}$. Similarly, $N^+_{11}$ is the number of zeros of $R^+_{11}(\zeta)$ in the UHP (assumed finite), $\zeta^+_{11,k}$ is the $k$th zero of $R^+_{11}(\zeta)$ and $R^{+\prime}_{11,k}$ is $dR^+_{11}/d\zeta$ evaluated at $\zeta = \zeta^+_{11,k}$. Thus, we have the analytic part of $\chi^-$ given in terms of $\chi^+$ on the real axis and at each zero of $R^+_{11}$ and $S^+_{33}$ in the UHP. Again, this result is valid even if $R^+_{11}$ and $S^+_{33}$ have common zeros.

As remarked earlier, the separation of $T$ and $T^{-1}$ into the $\Delta$ and $P$ matrices is a natural separation of $T$ and its inverse into their contributions to the continuous spectra (the $\Delta$'s) and the discrete spectra (the $P^\pm$'s). In fact, this separation basically determines the continuous and the discrete spectra.

If we were to attempt to solve (3.6) and (3.7), we would be required to have the various 14 reflection coefficients found in $\Delta$ and $\Gamma$, as well as the discrete scattering data. However, there are only six independent components of the potential matrix, whence eight of these reflection coefficients must be redundant. Furthermore, there can only be three linearly independent column vector solutions of (1.1), while (3.6) and (3.7) contain six column vector quantities, so this system of equations is overdetermined. One therefore needs to determine which three of these six equations may be taken to be independent of the others.

As stated earlier, we chose to obtain the inverse scattering equations for inversion about $+\infty$, which implies that we should take the $\psi$'s as the basis. According to (2.21) and (2.22), we then should toss the third column in (3.6) and the first column in (3.7). We should also attempt to eliminate all $\phi$ terms, such as $\chi^-_1$ and $\chi^+_3$; these can be eliminated by the use of (2.4). For the moment, we will carry along both middle columns. Eliminating $\chi^-_1$ from the first column of (3.6), for $\zeta$ in the UHP, gives

$$
\chi^+_1(\zeta) = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{S_{21}}{S^-_{11}}\,\psi_2\, e^{-iJ_1\zeta' x} + \frac{S_{31}}{S^-_{11}}\,\chi^-_3\, E_{31}(\zeta')\right]
- \sum_{k=1}^{N^-_{11}} \frac{1}{\zeta^-_{11,k} - \zeta}\,\frac{1}{S^{-\prime}_{11,k}}\,\chi^-_1\left(\zeta^-_{11,k}\right), \tag{3.8}
$$

where, under the integral, we have eliminated $\chi^-_2$ in favor of $\psi_2$, since it is only required on the real axis. The second column in this equation also contains the term $\chi^-_1$, which upon elimination gives

$$
\begin{aligned}
\chi^+_2(\zeta) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
&- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{R_{12}}{R^+_{11}}\,\chi^+_1\, E_{12}(\zeta') - \frac{R_{32}}{R^-_{33}}\,\chi^-_3\, E_{32}(\zeta')\right] \\
&+ \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) E_{12}\left(\zeta^+_{11,k}\right) \\
&+ \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{R_{32,k}}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) E_{32}\left(\zeta^-_{33,k}\right).
\end{aligned} \tag{3.9}
$$

Now we similarly handle the second and last columns of (3.7), obtaining

$$
\begin{aligned}
\chi^-_2(\zeta) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
&- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{R_{12}}{R^+_{11}}\,\chi^+_1\, E_{12}(\zeta') - \frac{R_{32}}{R^-_{33}}\,\chi^-_3\, E_{32}(\zeta')\right] \\
&+ \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) E_{12}\left(\zeta^+_{11,k}\right) \\
&+ \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{R_{32,k}}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) E_{32}\left(\zeta^-_{33,k}\right),
\end{aligned} \tag{3.10}
$$


and

$$
\chi^-_3(\zeta) = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
+ \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{S_{13}}{S^+_{33}}\,\chi^+_1\, E_{13}(\zeta') + \frac{S_{23}}{S^+_{33}}\,\psi_2\, e^{-iJ_3\zeta' x}\right]
- \sum_{k=1}^{N^+_{33}} \frac{1}{\zeta^+_{33,k} - \zeta}\,\frac{1}{S^{+\prime}_{33,k}}\,\chi^+_3\left(\zeta^+_{33,k}\right), \tag{3.11}
$$

where the value of $\psi_2$ is again only required on the real axis. We observe that the integrals in (3.9) and (3.10) have identical integrands, and the discrete contributions in each are identical; the two equations differ only in the half-plane from which $\zeta$ approaches the real axis. Thus, these middle columns are actually linearly dependent. From their difference across the real axis, we have the relationship

$$
\chi^+_2 - \chi^-_2 = \frac{R_{32}}{R^-_{33}}\,\chi^-_3\, E_{32}(\zeta) - \frac{R_{12}}{R^+_{11}}\,\chi^+_1\, E_{12}(\zeta), \qquad \Im(\zeta) = 0, \tag{3.12}
$$

which, according to (2.19), is an identity. Whence (3.9) and (3.10), for $\Im(\zeta) = 0$, are linearly dependent, and either one may be used to obtain the value of $\psi_2$ on the real axis, by use of (2.19).
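The step from the singular integrals to the jump (3.12) is the Sokhotski–Plemelj relation: for $C(\zeta) = \frac{1}{2\pi i}\int_R \frac{f(\zeta')}{\zeta' - \zeta}\, d\zeta'$, the boundary values from the two half-planes differ by $C^+(x) - C^-(x) = f(x)$ on the real axis. A rough numerical sketch with an illustrative $f$ (the small $\epsilon$ stands in for the approach to the axis):

```python
import numpy as np

# Numerical illustration of the Plemelj jump behind (3.12): for
# C(zeta) = (1/2 pi i) * Int f(z')/(z' - zeta) dz', the boundary values
# from the UHP and LHP differ by f on the real axis: C+(x) - C-(x) = f(x).
# f is illustrative; eps > 0 stands in for the approach to the axis.
f = lambda z: 1.0 / (1.0 + z**2)
zp = np.linspace(-200.0, 200.0, 400001)      # quadrature grid
dz = zp[1] - zp[0]

def C(zeta):
    return (1.0 / (2j * np.pi)) * np.sum(f(zp) / (zp - zeta)) * dz

x, eps = 0.3, 0.02
jump = C(x + 1j * eps) - C(x - 1j * eps)     # C+ minus C-
print(abs(jump - f(x)))                      # small; tends to 0 as eps -> 0
```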

Let us now consider the bound-state contributions. First, note that (3.8) requires $\chi^-_1(\zeta^-_{11,k})$ while (3.11) requires $\chi^+_3(\zeta^+_{33,k})$, each of which is a $\phi$ state. So we need these expressed in terms of $\psi$ states. As discussed in [25], in general one may assume compact support in order to obtain the necessary relations concerning the linear dependence of bound states. (When one does not have compact support, see [4].) However, all that information is readily contained in (2.24), upon assuming compact support, setting the appropriate transmission coefficient to zero and making use of the fact that all Jost functions would then be entire functions (except for $\mu^\pm_2$, as noted before). To illustrate this, from (2.24), at a zero of $S^-_{11}$, from the first column of the first equation, we have that

$$
\chi^-_1\left(\zeta^-_{11,k}\right) = S_{21,k}\,\chi^-_2\left(\zeta^-_{11,k}\right) E_{21}\left(\zeta^-_{11,k}\right), \tag{3.13}
$$

where $S_{21,k}$ is a coefficient in general and, under compact support, is equal to $S_{21}(\zeta^-_{11,k})$. (The second and third columns provide exactly the same information, once one recognizes that, according to (2.11), $R_{23} = S_{21} S_{13}$ whenever $S^-_{11} = 0$.) We can obtain the value of $\chi^-_2(\zeta^-_{11,k})$ from (3.10) upon setting $\zeta = \zeta^-_{11,k}$. Similarly, again from (2.24), at a zero of $S^+_{33}$, the third column of the last equation gives

$$
\chi^+_3\left(\zeta^+_{33,k}\right) = S_{23,k}\,\chi^+_2\left(\zeta^+_{33,k}\right) E_{23}\left(\zeta^+_{33,k}\right), \tag{3.14}
$$

where $S_{23,k}$ is a coefficient in general and, under compact support, is equal to $S_{23}(\zeta^+_{33,k})$. We can obtain the value of $\chi^+_2(\zeta^+_{33,k})$ from (3.9) upon setting $\zeta = \zeta^+_{33,k}$. Now, putting all this together and using (2.19), equations (3.8) and (3.9) become, for $\Im(\zeta) \geqslant 0$,

$$
\begin{aligned}
\chi^+_1(\zeta) = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
&- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}
\left[\frac{S_{21}}{S^-_{11}}\left(\chi^-_2(\zeta')\,E_{21}(\zeta') + \frac{R_{32}}{R^-_{33}}\,\chi^-_3(\zeta')\,E_{31}(\zeta')\right) + \frac{S_{31}}{S^-_{11}}\,\chi^-_3(\zeta')\,E_{31}(\zeta')\right] \\
&- \sum_{k=1}^{N^-_{11}} \frac{1}{\zeta^-_{11,k} - \zeta}\,\frac{S_{21,k}}{S^{-\prime}_{11,k}}\,\chi^-_2\left(\zeta^-_{11,k}\right) E_{21}\left(\zeta^-_{11,k}\right),
\end{aligned} \tag{3.15}
$$


$$
\begin{aligned}
\chi^+_2(\zeta) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
&- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{R_{12}}{R^+_{11}}\,\chi^+_1(\zeta')\,E_{12}(\zeta') - \frac{R_{32}}{R^-_{33}}\,\chi^-_3(\zeta')\,E_{32}(\zeta')\right] \\
&+ \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) E_{12}\left(\zeta^+_{11,k}\right) \\
&+ \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{R_{32,k}}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) E_{32}\left(\zeta^-_{33,k}\right),
\end{aligned} \tag{3.16, 3.17}
$$

while (3.10) and (3.11) become, for $\Im(\zeta) \leqslant 0$,

$$
\begin{aligned}
\chi^-_2(\zeta) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
&- \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}\left[\frac{R_{12}}{R^+_{11}}\,\chi^+_1(\zeta')\,E_{12}(\zeta') - \frac{R_{32}}{R^-_{33}}\,\chi^-_3(\zeta')\,E_{32}(\zeta')\right] \\
&+ \sum_{k=1}^{N^+_{11}} \frac{1}{\zeta^+_{11,k} - \zeta}\,\frac{R_{12,k}}{R^{+\prime}_{11,k}}\,\chi^+_1\left(\zeta^+_{11,k}\right) E_{12}\left(\zeta^+_{11,k}\right) \\
&+ \sum_{k=1}^{N^-_{33}} \frac{1}{\zeta^-_{33,k} - \zeta}\,\frac{R_{32,k}}{R^{-\prime}_{33,k}}\,\chi^-_3\left(\zeta^-_{33,k}\right) E_{32}\left(\zeta^-_{33,k}\right),
\end{aligned} \tag{3.18}
$$

$$
\begin{aligned}
\chi^-_3(\zeta) = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
&+ \frac{1}{2\pi i}\int_R \frac{d\zeta'}{\zeta' - \zeta}
\left[\frac{S_{13}}{S^+_{33}}\,\chi^+_1(\zeta')\,E_{13}(\zeta') + \frac{S_{23}}{S^+_{33}}\left(\chi^+_2(\zeta')\,E_{23}(\zeta') + \frac{R_{12}}{R^+_{11}}\,\chi^+_1(\zeta')\,E_{13}(\zeta')\right)\right] \\
&- \sum_{k=1}^{N^+_{33}} \frac{1}{\zeta^+_{33,k} - \zeta}\,\frac{S_{23,k}}{S^{+\prime}_{33,k}}\,\chi^+_2\left(\zeta^+_{33,k}\right) E_{23}\left(\zeta^+_{33,k}\right).
\end{aligned} \tag{3.19}
$$

In addition to the above singular integral equations, one would also have to include those for the bound-state quantities, such as (3.15) evaluated at $\zeta = \zeta^+_{11,k}$, (3.17) evaluated at $\zeta = \zeta^+_{33,k}$, (3.18) evaluated at $\zeta = \zeta^-_{11,k}$ and (3.19) evaluated at $\zeta = \zeta^-_{33,k}$. This total set of nonhomogeneous, linear, algebro-singular integral equations is referred to as a minimal set of LDRs, with it being understood that either (3.17) or (3.18) may be tossed.

Once we have the LDRs in the above form, it becomes possible to define the scattering data for this problem. To solve these equations, we find that we must specify the following quantities:

• the reflection coefficients $\sigma_{j1} = S_{j1}/S^-_{11}$ ($j = 2, 3$), $\sigma_{j3} = S_{j3}/S^+_{33}$ ($j = 1, 2$), $\rho_{12} = R_{12}/R^+_{11}$ and $\rho_{32} = R_{32}/R^-_{33}$ on the real axis,
• the zeros of $R^+_{11}(\zeta)$ in the UHP ($\zeta^+_{11,k}$; $k = 1, 2, \ldots, N^+_{11}$) and the values of $C_{12,k} = R_{12,k}/R^{+\prime}_{11,k}$ at each such zero,
• the zeros of $S^+_{33}(\zeta)$ in the UHP ($\zeta^+_{33,k}$; $k = 1, 2, \ldots, N^+_{33}$) and the values of $C_{23,k} = S_{23,k}/S^{+\prime}_{33,k}$ at each such zero,
• the zeros of $R^-_{33}(\zeta)$ in the LHP ($\zeta^-_{33,k}$; $k = 1, 2, \ldots, N^-_{33}$) and the values of $C_{32,k} = R_{32,k}/R^{-\prime}_{33,k}$ at each such zero,
• the zeros of $S^-_{11}(\zeta)$ in the LHP ($\zeta^-_{11,k}$; $k = 1, 2, \ldots, N^-_{11}$) and the values of $C_{21,k} = S_{21,k}/S^{-\prime}_{11,k}$ at each such zero.

Note that we have six reflection coefficients but only four sets of eigenvalues and normalization coefficients. Thus, there are two reflection coefficients, $\sigma_{13}$ and $\sigma_{31}$, which, even with compact support, will not be associated with any bound states.

Provided that a solution exists, from these data one may reconstruct the potentials. This is accomplished by first solving the appropriate equations (3.15)–(3.19) in conjunction with the resulting algebraic equations for the bound-state FMS. Then, using the asymptotic relations (2.36) and (2.39), one can obtain the potentials from the asymptotics of those $\chi$'s, which completes the solution of the inverse scattering problem. For further details and proofs of the existence of solutions, see [4, 5, 15].
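For bookkeeping, the itemized scattering data can be collected in a small container. The sketch below is ours (none of these names appear in the paper); it simply mirrors the list above: six reflection coefficients, of which only four carry zero/normalization-constant pairs, matching the remark that $\sigma_{13}$ and $\sigma_{31}$ have no bound states attached.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical container mirroring the itemized scattering data above.
@dataclass
class ScatteringData:
    # reflection coefficients on the real axis, as functions of zeta
    sigma21: Callable
    sigma31: Callable
    sigma13: Callable
    sigma23: Callable
    rho12: Callable
    rho32: Callable
    # (zero, normalization constant) pairs: C12 at zeros of R+_11 and C23 at
    # zeros of S+_33 (UHP); C32 at zeros of R-_33 and C21 at zeros of S-_11
    # (LHP). sigma13 and sigma31 carry no such pairs.
    C12: List[Tuple[complex, complex]] = field(default_factory=list)
    C23: List[Tuple[complex, complex]] = field(default_factory=list)
    C32: List[Tuple[complex, complex]] = field(default_factory=list)
    C21: List[Tuple[complex, complex]] = field(default_factory=list)

# e.g. a single bound state attached to R+_11 (one entry in C12, none else)
sd = ScatteringData(*([lambda z: 0.0] * 6), C12=[(0.5 + 1.0j, 1.0)])
print(len(sd.C12), len(sd.C23))   # 1 0
```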

3.1. Soliton structure in the 3 × 3 case

Let us consider the soliton structure of this eigenvalue problem, where we have two types of solitons. Here we shall not discuss the continuous spectra and will only consider the bound-state spectrum. As pointed out in [18], there can be sl(2) solitons (ordinary AKNS solitons) and there can also be sl(3) solitons (resonant solitons). To understand these, let us restrict our considerations to the usual case for most physical systems, which is where one has some symmetry between the complex conjugate (cc) of $Q$ and the transpose of $Q$, which we take to be $Q^*_{jk} = -Q_{kj}$. In this case, one can readily show that the zeros of $R^+_{11}$ will be the cc of those of $S^-_{11}$, and the zeros of $R^-_{33}$ will be the cc of those of $S^+_{33}$. Thus, we only need to consider the zeros in the UHP, namely those of $R^+_{11}$ and $S^+_{33}$. As a point of reference, this reduction corresponds to the AKNS case of $r = -q^*$ for the three independent potentials.
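The conjugation pairing of the zeros can be traced to a symmetry of the scattering problem itself. Assuming (1.1) has the standard form $v_x = (i\zeta J + Q)v$ with $J$ real and diagonal (an assumption; (1.1) is not reproduced in this excerpt), the reduction $Q^*_{jk} = -Q_{kj}$ gives $A(\bar\zeta)^\dagger = -A(\zeta)$ for $A = i\zeta J + Q$, so $U(\bar\zeta)^\dagger U(\zeta)$ is $x$-independent for fundamental solutions $U$. A numerical sketch:

```python
import numpy as np

def Qmat(x):
    # sample potential obeying the reduction Q*_{jk} = -Q_{kj} (illustrative)
    q12 = (0.8 + 0.3j) * np.exp(-x**2)
    q13 = (0.2 - 0.5j) * np.exp(-(x - 0.3) ** 2)
    q23 = (0.4 + 0.1j) * np.exp(-(x + 0.2) ** 2)
    Q = np.zeros((3, 3), dtype=complex)
    Q[0, 1], Q[0, 2], Q[1, 2] = q12, q13, q23
    Q[1, 0], Q[2, 0], Q[2, 1] = -q12.conjugate(), -q13.conjugate(), -q23.conjugate()
    return Q

Jd = np.diag([1.0, -0.3, 2.2])

def propagator(zeta, x0=-3.0, x1=3.0, n=3000):
    """RK4 for dU/dx = (i*zeta*J + Q(x)) U with U(x0) = I."""
    A = lambda x: 1j * zeta * Jd + Qmat(x)
    h = (x1 - x0) / n
    U = np.eye(3, dtype=complex)
    for i in range(n):
        x = x0 + i * h
        k1 = A(x) @ U
        k2 = A(x + h / 2) @ (U + h / 2 * k1)
        k3 = A(x + h / 2) @ (U + h / 2 * k2)
        k4 = A(x + h) @ (U + h * k3)
        U = U + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return U

zeta = 0.3 + 0.1j
U = propagator(zeta)
Ubar = propagator(np.conjugate(zeta))
# A(conj(zeta))^dagger = -A(zeta) under the reduction, so this product is I
print(np.allclose(Ubar.conj().T @ U, np.eye(3), atol=1e-6))  # True
```

It is this $x$-independent pairing that forces the scattering coefficients at $\zeta$ and $\bar\zeta$, and hence their zeros, into complex-conjugate pairs.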

The first observation is that the eigenvalue problem (1.1) contains the AKNS eigenvalue problem as three different subcases. Consider the following matrices, where $X$ represents some nonzero quantity:
$$
\begin{bmatrix} 0 & X & 0 \\ X & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & X \\ 0 & X & 0 \end{bmatrix}, \quad
\begin{bmatrix} 0 & 0 & X \\ 0 & 0 & 0 \\ X & 0 & 0 \end{bmatrix}, \quad
\begin{bmatrix} 0 & X & X \\ X & 0 & X \\ X & X & 0 \end{bmatrix}. \tag{3.20}
$$

For the first subcase, let $Q(x)$ have the structure where the only nonzero components are $Q_{12}$ and $Q_{21}$, as in the first matrix in (3.20). (Note that in these first three cases, the off-diagonal components of $S$ and $R$ will be of the same form as for $Q(x)$.) It then follows that $S^+_{33} = 1$, so that any bound state will only occur when $R^+_{11}(\zeta) = 0$ in the UHP. This now becomes an AKNS problem for the first two components of (1.1), while the third component becomes simply a trivial component. Any zero in $R^+_{11}(\zeta)$ will give rise to an ordinary AKNS soliton (an sl(2) soliton), and these will be present only in $Q_{12}$ (and $Q_{21}$).

For the second subcase, let $Q$ have the structure where the only nonzero components are $Q_{23}$ and $Q_{32}$, as in the second matrix. We then have $R^+_{11} = 1$, so that any bound states will only occur when $S^+_{33}(\zeta) = 0$ in the UHP. This is another AKNS problem, now for the last two components, with the first component becoming a trivial component. Any zero in $S^+_{33}(\zeta)$ will again give rise to an ordinary AKNS soliton, and these will be present only in $Q_{23}$ (and $Q_{32}$).

Now consider the third subcase, wherein $Q(x)$ is of the form of the third matrix in (3.20). Here we have $S_{22} = 1 = R_{22}$. It then follows from (2.11) that $S^+_{33}(\zeta) = R^+_{11}(\zeta)$ for all $\zeta$ in the UHP, since the non-diagonal components of $S$ and $R$ must match the third matrix in (3.20). Consequently, this subcase demands that the corresponding zeros of this pair be exactly equal. When this is so, for each set of matched zeros, we will find a single AKNS soliton in $Q_{13}$ (and $Q_{31}$). But note that here it requires a common zero in two different components of $R$ and $S$ in order to have one soliton. For the moment, let us note that if we only had the zero in $R^+_{11}(\zeta)$, it would produce a soliton only in $Q_{12}$. Similarly, if we only had the zero in $S^+_{33}(\zeta)$, it would give rise to a soliton only in $Q_{23}$. So there is something else at work here. Note that $S^+_{33}(\zeta)$ must be exactly the same function of $\zeta$ as $R^+_{11}(\zeta)$. This could be mathematically required, but if one were to solve (1.1) for some arbitrary initial data, such would almost never occur. Another consequence of the equality $S^+_{33}(\zeta) = R^+_{11}(\zeta)$ is that the two solitons would also be found located at the same spatial position. Further, these two solitons, one in $Q_{12}$ and one in $Q_{23}$, would have exactly the same spatial structure. Thus, they are located at the same place and are 'spatially coherent' with respect to each other. Then, since this is a nonlinear system, when we attempt to place both of these solitons at the same place and with the same spatial structure, what effectively happens is that they totally cancel each other, as if there were some destructive optical interference at work. But as in destructive and constructive optical interference, the energy always shows up elsewhere, and in this case, it shows up as an AKNS soliton in $Q_{13}$. So, this third subcase is distinctly different from the other two.

Now let us look at this more generally, and in particular at the subcase of the last matrix in (3.20), which is the general case. This is the case where the sl(3) soliton occurs. The nonlinear interference effect depends critically on the two positions and the two paired zeros being equal. If the zeros were only close but not exactly equal, then the spatial structures of the two solitons in $Q_{12}$ and $Q_{23}$ would not be exactly the same, and consequently, total destructive interference would not occur. Some parts of the two AKNS solitons, the one in $Q_{12}$ and the one in $Q_{23}$, would remain, while the soliton in $Q_{13}$ would now be smaller than otherwise. The same is true if the positions of these two solitons were not the same: they would not fully overlap, and the amount of the destructive interference would be reduced by the amount of the non-overlap. The general solution for this situation is given in [22], equation (7.5). From this solution, one can see each of the above consequences occurring. (Note that the eigenvalue problem used in that reference, equations (2.1)–(2.3) there, has a different gauge from the eigenvalue problem (1.1) used here, as well as a slightly different notation. Due to this, one needs to pay attention to equation (7.1) in that reference. The case of equal zeros discussed here corresponds to the case where $\xi_1 = \xi_3$ and $\eta_1 = \eta_3$ in [22].)

To summarize, this latter soliton solution is the sl(3) soliton [18] mentioned above. Since it involves a resonance, we will also refer to it as a 'resonant soliton'. It generally has a part appearing in $Q_{12}$, another part in $Q_{23}$ and another part in $Q_{13}$. It has degenerate limits. By setting the appropriate normalization coefficient to zero, this solution reduces to an AKNS soliton in either $Q_{12}$ or $Q_{23}$, so each of those AKNS solitons could be viewed as a degenerate example of the sl(3) soliton. If we take $S^+_{33}(\zeta) = R^+_{11}(\zeta)$, then we obtain another degenerate limit, which is the AKNS soliton in $Q_{13}$. But this latter case is not really the same as the first two, since for it to exist in its purity, it is necessary to have two paired zeros and normalization coefficients in two different functions.

Various aspects of this nonlinear resonance phenomenon show up in time evolutions of the 3WRI. The creation of a resonant soliton is illustrated in figure 11 of [23]. There the two solitons collide, and a structure grows in the middle, eventually becoming the resonant soliton seen in the middle. The residual oscillations seen are due to the continuous spectra, which interact differently. Although one can create these resonant solitons, they are generally unstable, just as an upside-down pendulum is unstable. Figures 6 and 7 of [23] illustrate the instability of an isolated resonant soliton. As another example of its instability, if the two zeros were not exactly equal, one would have a configuration as in figure 12 of [23]. Here the initial pulses are such that the two zeros are not equal, but they are close. Although the resonant soliton does attempt to form in the interaction region, as it emerges, being unstable and not fully formed, it then decays back into the original two AKNS solitons in the other two waves. Other examples of this resonant behavior can also be seen in [8, 12, 13].

Now for some closing comments on the inverse scattering problem. First, we obtained the inverse scattering equations, (3.6) and (3.7), by the use of the FMS and the relationship shown in (2.24). Second, the separation of the matrix $T$ and its inverse into different analytic parts, as in (2.25)–(2.29), is essential if one is to close the contour in the opposite half-plane. Third, if one decomposes (3.6) and (3.7) into their various columns, one recognizes that these equations contain both inversion procedures (about $+\infty$ or $-\infty$), and therefore one-half of these equations must be linearly dependent on the others, but in a way that was not expected. Fourth, for $\zeta$ real, the two middle columns are linearly dependent, which means that there are only five independent column vector equations among the six. However, these columns could equally well be expressed in terms of either the $\phi$'s or the $\psi$'s, as in (2.15) and (2.17). Thus, the linearly independent part of the two middle columns can be included in the set for either direction of inversion. Meanwhile, the four outer columns naturally split into two sets of two: one set for the $\phi$'s and one for the $\psi$'s.

4. Variations in scattering data

In the previous sections, we have described the solution of the direct scattering problem and the inverse scattering problem. In the direct scattering problem, given potentials which are suitably restricted, the scattering coefficients exist and are unique. In the inverse scattering problem, given a set of scattering data (such as that itemized at the end of section 3), there usually is a unique potential which can be recovered from these scattering data. (Necessary conditions for the existence of solutions, even in the AKNS case, are not known [1], although some sufficient conditions are known.) We shall now assume that solutions exist in both cases, and that for a given potential and its associated scattering data, there will be neighborhoods surrounding such sets where the same will be found to occur. Whence for any linear perturbation of the potential, there will exist a unique linear variation in the scattering data, and vice versa. So the first task here will be to determine the relationships between the variations and the perturbations, such that the variation may be given in terms of the perturbation. It is from these two relationships that one can obtain what are known as the squared eigenfunctions (SE) and their adjoints (ASE), along with their inner products and closure relations.

The approach presented here will differ from the original approach [20, 21], which we will briefly outline. In that approach, one first found the variations in the scattering data in terms of perturbations of the potentials. The coefficients of these were squares of the Jost functions and were the adjoints of the SE. It was then shown that these adjoints were eigenfunctions of an integro-differential operator. Then one found the adjoint of that operator, which would be the eigenvalue operator for the SE. Now, by guess and trial, one found that its eigenfunctions were also products of Jost functions and their adjoints. These products are what are called the SE. Then inner products between the SE and ASE were defined and explicitly evaluated from the various asymptotics of the Jost functions. Once these inner products were known, one could construct what should be the closure relation, by simply expanding an arbitrary function in these SE. But to prove closure, a rather long process was used, which required the use of the Marchenko equations, which are obtained from the LDRs. A much neater proof was given later by Gerdjikov and Khristov [14], based on a Green's function approach.

Here we shall take a different approach (see [24, 37]). As a prelude to this, let us make the following remarks. Determining the scattering data is accomplished by solving the eigenvalue problem. Thus, applying perturbations to the potentials in the eigenvalue problem will allow one to compute the resulting variations in the scattering data. Similarly, one solves the inverse scattering problem by starting from (2.24). From that, one obtains the LDRs, (3.6) and (3.7), by which one takes the scattering data and reconstructs the potential. Whence, in going from perturbations of the scattering data to the resulting variations of the potential, we should likewise expect to start from (2.24), perturb it by applying arbitrary perturbations to the scattering data and then obtain the resulting variations in the potentials. This is the basis for the approach [41] which we shall now use.

In this section, we will calculate the variations in the scattering data due to perturbations of the potential. The solution procedure for this is well known and quite direct. One perturbs (2.1), and then uses the method of variation of parameters to solve the resulting differential equation. One obtains

$$
\partial_x\left(V^A \cdot \delta V\right) = -V^A \cdot \delta Q \cdot V, \tag{4.1}
$$

where $V$ is any solution and $\delta V$ is the variation in $V$ resulting from the perturbation $\delta Q$; $V^A$ is any adjoint solution. Integrating this and letting $V = \Phi$ and $V^A = \Psi^A$, since $\delta V(x \to -\infty) = 0$ and $\delta V(x \to +\infty) = e^{i\zeta Jx} \cdot \delta S$, we have
$$
\delta S = -\int_{-\infty}^{\infty} \Psi^A \cdot \delta Q \cdot \Phi \, dx. \tag{4.2}
$$
Similarly, if we take $V = \Psi$ and $V^A = \Phi^A$, we obtain
$$
\delta R = \int_{-\infty}^{\infty} \Phi^A \cdot \delta Q \cdot \Psi \, dx, \tag{4.3}
$$
from which one may proceed to calculate all variations in the scattering data. Let us define

$$
D^\pm_{i,j}(\zeta) = -\int_{-\infty}^{\infty} dx \left[\mu^{A\pm}_i(x,\zeta) \cdot \delta Q(x) \cdot \mu^\pm_j(x,\zeta)\right], \qquad i, j = 1, 2, 3. \tag{4.4}
$$
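The variation-of-parameters step behind (4.1)–(4.3) can be checked numerically in a toy, self-contained convention: for $V_x = A(x)V$ with $V(x_0) = I$ and the adjoint solution taken as $V^{-1}$, one has $\partial_x(V^{-1}\delta V) = V^{-1}\,\delta Q\, V$, so $\delta V(x_1) = V(x_1)\int V^{-1}\delta Q\, V\, dx$. (The paper's $V^A$ and the sign in (4.1) follow its own conventions, which this excerpt does not fully fix; only the structure of the identity is illustrated here.)

```python
import numpy as np

# Toy check: first-order response of the propagator of V_x = (i*zeta*J + Q)V
# to a perturbation of Q, via variation of parameters with V^{-1} as adjoint.
Jd = np.diag([1.0, -0.3, 2.2])
zeta = 0.4 + 0.05j
x0, x1, n = -3.0, 3.0, 1200
h = (x1 - x0) / n
xs = np.linspace(x0, x1, n + 1)

def Qmat(x, eps=0.0):
    # sample potential; eps controls a localized perturbation of Q12
    Q = np.zeros((3, 3), dtype=complex)
    Q[0, 1] = 0.7 * np.exp(-x**2) + eps * np.exp(-(x - 0.5) ** 2)
    Q[1, 0] = -np.conj(Q[0, 1])
    Q[0, 2], Q[2, 0] = 0.3 * np.exp(-x**2), -0.3 * np.exp(-x**2)
    return Q

def propagate(eps):
    """RK4 history of dV/dx = (i*zeta*J + Q) V, V(x0) = I."""
    A = lambda x: 1j * zeta * Jd + Qmat(x, eps)
    Vs = [np.eye(3, dtype=complex)]
    for x in xs[:-1]:
        V = Vs[-1]
        k1 = A(x) @ V
        k2 = A(x + h / 2) @ (V + h / 2 * k1)
        k3 = A(x + h / 2) @ (V + h / 2 * k2)
        k4 = A(x + h) @ (V + h * k3)
        Vs.append(V + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4))
    return Vs

Vs = propagate(0.0)
dQ = lambda x: Qmat(x, 1.0) - Qmat(x, 0.0)        # shape of the perturbation
integrand = [np.linalg.inv(V) @ dQ(x) @ V for x, V in zip(xs, Vs)]
integral = (h / 2) * (integrand[0] + integrand[-1]) + h * sum(integrand[1:-1])
dV_formula = Vs[-1] @ integral                     # V(x1) * Int V^{-1} dQ V dx

eps = 1e-6
dV_numeric = (propagate(eps)[-1] - Vs[-1]) / eps   # finite-difference check
print(np.max(np.abs(dV_formula - dV_numeric)) < 1e-3)  # True
```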

Then for the reflection coefficients, for real $\zeta$, we obtain
$$
\delta\sigma_{21} = \frac{1}{S^-_{11}}\, D^-_{2,1}(\zeta), \tag{4.5}
$$
$$
\delta\sigma_{31} + \rho_{32}\,\delta\sigma_{21} = \frac{1}{S^-_{11} R^-_{33}}\, D^-_{3,1}(\zeta), \tag{4.6}
$$
$$
\delta\sigma_{13} + \rho_{12}\,\delta\sigma_{23} = \frac{1}{S^+_{33} R^+_{11}}\, D^+_{1,3}(\zeta), \tag{4.7}
$$
$$
\delta\sigma_{23} = \frac{1}{S^+_{33}}\, D^+_{2,3}(\zeta), \tag{4.8}
$$
$$
\delta\rho_{12} = -\frac{1}{R^+_{11}}\, D^+_{1,2}(\zeta), \tag{4.9}
$$
$$
\delta\rho_{32} = -\frac{1}{R^-_{33}}\, D^-_{3,2}(\zeta). \tag{4.10}
$$

For the bound-state eigenvalues and normalization coefficients, one works with Taylor expansions about any given zero of $R^-_{33}$, $S^-_{11}$, $R^+_{11}$ or $S^+_{33}$. Consider a general analytic function of $\zeta$, say $g(\zeta)$. Expanding it in a Taylor series about some such zero, $\zeta_k$, we have
$$
g(\zeta) = g_k + g'_k(\zeta - \zeta_k) + \tfrac{1}{2} g''_k(\zeta - \zeta_k)^2 + \cdots, \tag{4.11}
$$


where the primes indicate differentiation with respect to $\zeta$ and the subscript $k$ indicates evaluation at $\zeta = \zeta_k$. Varying the quantities in the above expression gives us

$$
\delta g(\zeta) = \delta(g_k) - g'_k\,\delta\zeta_k + \left[\delta(g'_k) - g''_k\,\delta\zeta_k\right](\zeta - \zeta_k) + \cdots, \tag{4.12}
$$

from which we have

$$
\delta(g_k) = \left[\delta g(\zeta)\right]_k + g'_k\,\delta\zeta_k, \qquad \delta(g'_k) = \left\{\partial_\zeta\left[\delta g(\zeta)\right]\right\}_k + g''_k\,\delta\zeta_k, \quad \ldots. \tag{4.13}
$$

In other words, since there is also a shift in the eigenvalues, one also has to shift where the quantity is evaluated. To get the shift in the eigenvalues, one applies the above to the functions $S^-_{11}(\zeta)$, $R^-_{33}(\zeta)$, $S^+_{33}(\zeta)$ and $R^+_{11}(\zeta)$, requiring $\delta[S^-_{11}(\zeta_k)] = 0$, etc, which gives

$$
\delta\zeta^-_{11,k} = -\frac{\left(\delta S^-_{11}\right)_k}{\left(S^{-\prime}_{11}\right)_k} = \frac{-1}{\left(S^{-\prime}_{11}\right)_k}\, D^-_{1,1}\left(\zeta^-_{11,k}\right), \tag{4.14}
$$
$$
\delta\zeta^-_{33,k} = -\frac{\left(\delta R^-_{33}\right)_k}{\left(R^{-\prime}_{33}\right)_k} = \frac{1}{\left(R^{-\prime}_{33}\right)_k}\, D^-_{3,3}\left(\zeta^-_{33,k}\right), \tag{4.15}
$$
$$
\delta\zeta^+_{11,k} = -\frac{\left(\delta R^+_{11}\right)_k}{\left(R^{+\prime}_{11}\right)_k} = \frac{1}{\left(R^{+\prime}_{11}\right)_k}\, D^+_{1,1}\left(\zeta^+_{11,k}\right), \tag{4.16}
$$
$$
\delta\zeta^+_{33,k} = -\frac{\left(\delta S^+_{33}\right)_k}{\left(S^{+\prime}_{33}\right)_k} = \frac{-1}{\left(S^{+\prime}_{33}\right)_k}\, D^+_{3,3}\left(\zeta^+_{33,k}\right). \tag{4.17}
$$
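Equations (4.14)–(4.17) are instances of the first-order shift of a simple zero: if $g \to g + \epsilon\,\delta g$, the zero moves by $\delta\zeta_k = -(\delta g)_k/g'_k$ to leading order. A toy numerical check (functions are illustrative):

```python
import numpy as np

# First-order shift of a simple zero under g -> g + eps*dg:
#   zeta_k(eps) ~ zeta_k - eps * dg(zeta_k) / g'(zeta_k),
# the pattern behind (4.14)-(4.17). Toy g with a known zero.
g  = lambda z: z**2 - 2.0          # zero at sqrt(2)
gp = lambda z: 2.0 * z             # g'
dg = lambda z: z + 0.3             # an arbitrary perturbation shape (dg' = 1)

zk = np.sqrt(2.0)
eps = 1e-6

z = zk                             # locate the perturbed zero by Newton steps
for _ in range(30):
    z = z - (g(z) + eps * dg(z)) / (gp(z) + eps * 1.0)

shift_numeric = (z - zk) / eps
shift_formula = -dg(zk) / gp(zk)
print(abs(shift_numeric - shift_formula) < 1e-5)   # True
```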

Turning to the normalization coefficients, we first vary the denominators. From (4.12), for $g(\zeta) = S^-_{11}$, upon replacing the Jost functions with the appropriate $\mu$'s, we have

$$
\delta\left(S^{-\prime}_{11,k}\right) = S^{-\prime\prime}_{11,k}\,\delta\zeta^-_{11,k} + \left[\partial_\zeta D^-_{1,1}(\zeta)\right]_k. \tag{4.18}
$$

Similarly, we find that

$$
\delta\left(R^{-\prime}_{33,k}\right) = R^{-\prime\prime}_{33,k}\,\delta\zeta^-_{33,k} - \left[\partial_\zeta D^-_{3,3}(\zeta)\right]_k, \tag{4.19}
$$
$$
\delta\left(R^{+\prime}_{11,k}\right) = R^{+\prime\prime}_{11,k}\,\delta\zeta^+_{11,k} - \left[\partial_\zeta D^+_{1,1}(\zeta)\right]_k, \tag{4.20}
$$
$$
\delta\left(S^{+\prime}_{33,k}\right) = S^{+\prime\prime}_{33,k}\,\delta\zeta^+_{33,k} + \left[\partial_\zeta D^+_{3,3}(\zeta)\right]_k. \tag{4.21}
$$

The variations in the numerators of the normalization coefficients are a bit more difficult to obtain, since we have to move off the real $\zeta$-axis. A general procedure for determining the variations in these coefficients has been outlined in [25, 26], whereby one starts by assuming compact support. The basic idea is that if the normalization coefficient is associated with a zero in a half-plane, then its variations must be expressed in terms of Jost functions analytically extendible into that half-plane, in which case the result becomes extendable to the non-compact-support case. To illustrate this, consider the normalization coefficient $C_{12,k} = R_{12,k}/R^{+\prime}_{11,k}$. From (2.15), (4.3) and (4.12), the variation of its numerator is

11,k .From (2.15), (4.3) and (4.12), the variation of its numerator is

$$
\delta(R_{12,k}) = -\left[\frac{1}{R^+_{11}}\int_{-\infty}^{\infty} \mu^{A+}_1 \cdot \delta Q \cdot \left(\chi^+_2 + R_{12}\,\mu^+_1\right) dx\right]_k + R'_{12,k}\,\delta\zeta^+_{11,k}. \tag{4.22}
$$

Now, according to (2.15), at any zero of R+11 we have

$$
R_{12,k}\,\mu^+_{1,k} = -\chi^+_{2,k}, \tag{4.23}
$$

whence both the denominator and the numerator in (4.22) vanish at the zero. Thus, we must use L'Hôpital's rule in evaluating the limit. Doing so and expanding the integrand gives

$$
\delta(R_{12,k}) = \frac{-1}{R^{+\prime}_{11,k}}\left[\partial_\zeta \int_{-\infty}^{\infty} \mu^{A+}_1 \cdot \delta Q \cdot \chi^+_2\, dx\right]_k - C_{12,k}\left[\partial_\zeta D^+_{1,1}(\zeta)\right]_k. \tag{4.24}
$$


Then from (4.20), (4.24) and (4.23), we obtain

$$
\delta(C_{12,k}) = \frac{-1}{R^{+\prime}_{11,k}}\left\{\partial_\zeta\left[\frac{R^+_{11}\, D^+_{1,2}(\zeta)}{R^{+\prime}_{11}}\right]\right\}_k. \tag{4.25}
$$

Note that the product $R^+_{11} D^+_{1,2}$ in the numerator removes the singularity in $\mu^+_2$ at the zero of $R^+_{11}$. Similarly, for the other three cases, we find that

$$
\delta(S_{23,k}) = \frac{1}{S^{+\prime}_{33,k}}\left[\partial_\zeta\left(S^+_{33} D^+_{2,3}\right)\right]_k + C_{23,k}\left[\partial_\zeta\left(D^+_{3,3}\right)\right]_k, \quad\text{and}\quad S_{23,k}\,\mu^{A+}_{3,k} = -\chi^{A+}_{2,k}, \tag{4.26}
$$
$$
\delta(R_{32,k}) = \frac{-1}{R^{-\prime}_{33,k}}\left[\partial_\zeta\left(R^-_{33} D^-_{3,2}\right)\right]_k - C_{32,k}\left[\partial_\zeta\left(D^-_{3,3}\right)\right]_k, \quad\text{and}\quad R_{32,k}\,\mu^-_{3,k} = -\chi^-_{2,k}, \tag{4.27}
$$
$$
\delta(S_{21,k}) = \frac{1}{S^{-\prime}_{11,k}}\left[\partial_\zeta\left(S^-_{11} D^-_{2,1}\right)\right]_k + C_{21,k}\left[\partial_\zeta\left(D^-_{1,1}\right)\right]_k, \quad\text{and}\quad S_{21,k}\,\mu^{A-}_{1,k} = -\chi^{A-}_{2,k}, \tag{4.28}
$$

which then leads to

$$
\delta C_{23,k} = \frac{1}{S^{+\prime}_{33,k}}\left\{\partial_\zeta\left[\frac{S^+_{33}\, D^+_{2,3}(\zeta)}{S^{+\prime}_{33}}\right]\right\}_k, \tag{4.29}
$$
$$
\delta C_{21,k} = \frac{1}{S^{-\prime}_{11,k}}\left\{\partial_\zeta\left[\frac{S^-_{11}\, D^-_{2,1}(\zeta)}{S^{-\prime}_{11}}\right]\right\}_k, \tag{4.30}
$$
$$
\delta C_{32,k} = \frac{-1}{R^{-\prime}_{33,k}}\left\{\partial_\zeta\left[\frac{R^-_{33}\, D^-_{3,2}(\zeta)}{R^{-\prime}_{33}}\right]\right\}_k. \tag{4.31}
$$

With this, we have determined the variations in the scattering data for inversion about $+\infty$. For inversion about $-\infty$, one would proceed as above for the corresponding components. Next we shall derive the variations in the potentials when one perturbs the scattering data.

5. Variations of potentials

To obtain the inverse of the above relationships, we will follow the approach used in [24, 37], which is based on equation (2.24). Accordingly, we need to construct a matrix which would contain the six reflection coefficients contained in the scattering data (listed at the end of section 3). However, looking at (2.32) and (2.33), one notes that half of the reflection coefficients we require are found in one matrix and the other half in the other. On the other hand, as shown in the appendix, given these six reflection coefficients on the real $\zeta$-axis, and the zeros of $R^+_{11}$, $S^+_{33}$, $S^-_{11}$ and $R^-_{33}$ in the appropriate half-planes, one can construct all components of $R$ and $S$ on the real $\zeta$-axis. So the matter is simply one of constructing some matrix which contains these six reflection coefficients.

To do this, we start with (2.32), and replace all off-diagonal components of S and R withthe values found in (A.1), (A.3) and (A.9)–(A.14). One obtains

$$\mu^{A+}\cdot\mu^{-} = \begin{bmatrix} 1 & -\dfrac{\rho_{12}\sigma_{23}\rho_{32}+\sigma_{13}\rho_{32}+\rho_{12}}{\beta_3 S^{-}_{11}} & -\dfrac{\sigma_{13}+\rho_{12}\sigma_{23}}{\beta_3 S^{-}_{11}} \\[8pt] -S^{-}_{11}\left(\sigma_{21}-\sigma_{23}\sigma_{31}\right) & 1+\rho_{12}\sigma_{23} & -\sigma_{23} \\[4pt] S^{-}_{11}\sigma_{31} & -\rho_{32} & 1 \end{bmatrix}.\qquad(5.1)$$

For the purposes we require, we need a matrix which is the product of a matrix meromorphic in the UHP multiplied by a matrix meromorphic in the LHP, with the resulting


matrix being a function only of the six reflection coefficients. As one can see, (5.1) is almost there, and all we need to do is eliminate the factors of $S^{-}_{11}$ and $\beta_3$ in some manner. This can be achieved if we multiply (5.1) from the left by a diagonal matrix meromorphic in the UHP, and on the right by a diagonal matrix meromorphic in the LHP. Inspection reveals that the matrices

$$M^{+} = \begin{bmatrix} \dfrac{1}{R^{+}_{11}} & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix},\qquad M^{-} = \begin{bmatrix} \dfrac{1}{S^{-}_{11}} & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}\qquad(5.2)$$

will serve this purpose well. Due to (A.7) and (A.10), we obtain

$$G = M^{+}\cdot\mu^{A+}\cdot\mu^{-}\cdot M^{-} = \begin{bmatrix} 1-\sigma_{13}\sigma_{31}+\rho_{12}\left(\sigma_{21}-\sigma_{23}\sigma_{31}\right) & -\left(\rho_{12}\sigma_{23}\rho_{32}+\sigma_{13}\rho_{32}+\rho_{12}\right) & -\left(\sigma_{13}+\rho_{12}\sigma_{23}\right)\\ -\left(\sigma_{21}-\sigma_{23}\sigma_{31}\right) & 1+\rho_{12}\sigma_{23} & -\sigma_{23}\\ \sigma_{31} & -\rho_{32} & 1 \end{bmatrix}.\qquad(5.3)$$

Using (2.22), let us define

$$F^{+} = M^{+}\cdot\Psi^{A+},\qquad F^{-} = \Psi^{-}\cdot M^{-},\qquad(5.4)$$

each of which is meromorphic in the appropriate half-plane. Recall that there are possible poles contained in $\mu^{A+}_{2}$ and $\mu^{-}_{2}$. With this, from (5.3), we obtain

$$F^{+}\cdot F^{-} = e^{i\zeta Jx}\cdot G\cdot e^{-i\zeta Jx}.\qquad(5.5)$$

If we now vary the quantities in (5.5), we obtain

$$(F^{+})^{-1}\cdot\delta F^{+} + (\delta F^{-})\cdot(F^{-})^{-1} = \Lambda,\qquad(5.6)$$

where

$$\Lambda = (F^{+})^{-1}\cdot e^{i\zeta Jx}\cdot\delta(G)\cdot e^{-i\zeta Jx}\cdot(F^{-})^{-1}.\qquad(5.7)$$

From (2.31) and (5.4), we have

$$(F^{+})^{-1} = \Psi^{+}\cdot P^{+}\cdot(M^{+})^{-1},\qquad (F^{-})^{-1} = (M^{-})^{-1}\cdot P^{-}\cdot\Psi^{A-}.\qquad(5.8)$$

We now evaluate $\Lambda$ as given by (5.7), using (5.6) and varying $G$ as given by (5.3). For this calculation, we found it appropriate to use symbolic computation software (Macsyma). One thereby obtains

$$\Lambda = \mu^{+}_{1}\cdot\mu^{A+}_{2}\,\delta\rho_{12} - \mu^{-}_{3}\cdot\mu^{A-}_{2}\,\delta\rho_{32} + \psi_{2}\cdot\mu^{A-}_{1}\,\delta\sigma_{21} - \psi_{2}\cdot\mu^{A+}_{3}\,\delta\sigma_{23} + \mu^{-}_{3}\cdot\mu^{A-}_{1}\,\delta\sigma_{31} - \mu^{+}_{1}\cdot\mu^{A+}_{3}\,\delta\sigma_{13}.\qquad(5.9)$$

One observes that the middle two terms contain $\psi_2$ in the coefficients of the perturbation of the reflection coefficients. If we eliminate the $\psi_2$'s in favor of the $\mu$'s, we would have

$$\Lambda = \mu^{+}_{1}\cdot\mu^{A+}_{2}\,\delta\rho_{12} - \mu^{-}_{3}\cdot\mu^{A-}_{2}\,\delta\rho_{32} + \mu^{-}_{2}\cdot\mu^{A-}_{1}\,\delta\sigma_{21} - \mu^{+}_{2}\cdot\mu^{A+}_{3}\,\delta\sigma_{23} + \mu^{-}_{3}\cdot\mu^{A-}_{1}\left(\delta\sigma_{31}+\rho_{32}\,\delta\sigma_{21}\right) - \mu^{+}_{1}\cdot\mu^{A+}_{3}\left(\delta\sigma_{13}+\rho_{12}\,\delta\sigma_{23}\right).\qquad(5.10)$$

Perturbing the off-diagonal elements of $G$ as given in (5.3) gives us all possible perturbations of the continuous scattering data for inversion about $x=+\infty$. Then from (5.4), (5.6) and (5.7), we can relate those to variations of the Jost functions, from whose asymptotics we can obtain the variations in the six potentials.


Now (5.6) can be viewed as a Riemann–Hilbert problem. Whence, with the use of Cauchy's theorem, we can extend $(F^{+})^{-1}\cdot\delta F^{+}$ into the UHP, and so on. Note that the $F$'s are basically Jost functions or their adjoints, while the matrix $G$ basically consists of reflection coefficients. Once we have extended $(F^{+})^{-1}\cdot\delta F^{+}$ into the UHP, we may then address its asymptotics. From the asymptotics of $\delta F$ for large $|\zeta|$, we can then obtain $\delta Q$, which will then be related to the perturbations of the reflection coefficients contained in $\delta G$.

Consider the equation

$$f^{+}(\zeta) + f^{-}(\zeta) = \Lambda(\zeta),\qquad(5.11)$$

where

$$f^{+} = (F^{+})^{-1}\cdot\delta F^{+}\qquad\text{and}\qquad f^{-} = (\delta F^{-})\cdot(F^{-})^{-1}\qquad(5.12)$$

are analytic in the appropriate half-plane, except for a possible finite number of simple poles, with each vanishing like $O(1/\zeta)$ as $|\zeta|\to\infty$. Let us assume that the class of perturbations of the reflection coefficients is $L^{1}$, so that the integral of $|\Lambda(\zeta)|$ along the real $\zeta$-axis is finite. Then we have for $\zeta$ in the UHP:

$$f^{+}(\zeta) = \frac{1}{2\pi i}\int_{\mathbb R}\frac{\Lambda(\zeta')\,d\zeta'}{\zeta'-\zeta} - \sum_{k=1}^{N^{+}_{11}}\frac{f^{+}_{11,k}}{\zeta^{+}_{11,k}-\zeta} - \sum_{k=1}^{N^{+}_{33}}\frac{f^{+}_{33,k}}{\zeta^{+}_{33,k}-\zeta} + \sum_{k=1}^{N^{-}_{11}}\frac{f^{-}_{11,k}}{\zeta^{-}_{11,k}-\zeta} + \sum_{k=1}^{N^{-}_{33}}\frac{f^{-}_{33,k}}{\zeta^{-}_{33,k}-\zeta},\qquad(5.13)$$

where $f^{+}_{11,k}$ is the residue of $f^{+}(\zeta)$ at the $k$th zero of $R^{+}_{11}$, $\zeta^{+}_{11,k}$, in the UHP, and so on. Typically, $f^{\pm}$ has poles of order 1 and 2. In evaluating these, care needs to be taken since the Jost functions $\mu^{\pm}_{2}$ and $\mu^{A\pm}_{2}$ can also have poles in the appropriate half-plane.
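As a quick numerical illustration of the Cauchy-integral representation underlying (5.13), the sketch below (ours, not the paper's) uses a pole-free toy function $f(\zeta)=1/(\zeta+i)$, analytic in the UHP and decaying like $1/\zeta$, so the residue sums are absent; the projection should reproduce $f$ at a UHP point and annihilate it at a LHP point:

```python
import numpy as np

# Trapezoid-rule approximation of (1/2*pi*i) * int_R f(z')/(z' - zeta) dz',
# i.e. the continuous part of (5.13) when no bound-state poles are present.
def cauchy_projection(f_vals, grid, zeta):
    integrand = f_vals / (grid - zeta)
    dz = grid[1] - grid[0]
    total = dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return total / (2j * np.pi)

grid = np.linspace(-1000.0, 1000.0, 2_000_001)   # sampled real zeta'-axis
f = lambda z: 1.0 / (z + 1j)                     # analytic in the UHP, O(1/zeta)

zeta_up = 0.3 + 0.7j                             # a point in the UHP
zeta_dn = 0.3 - 0.7j                             # its mirror in the LHP
err_up = abs(cauchy_projection(f(grid), grid, zeta_up) - f(zeta_up))
err_dn = abs(cauchy_projection(f(grid), grid, zeta_dn))
print(err_up, err_dn)                            # both small (truncation error only)
```

Both residual errors come from truncating the line integral at $|\zeta'|=1000$; they shrink as the truncation radius grows, consistent with the $O(1/\zeta)$ decay assumed for $f^{\pm}$.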

Now consider (5.13) as |ζ | → ∞. Note that

$$F^{+} = I_{3} + O(1/\zeta),\qquad \delta F^{+} = \frac{i}{2\zeta}\,\delta B^{(1)} + O(1/\zeta^{2}),\ \ldots,\qquad(5.14)$$

as $|\zeta|\to\infty$, where $B^{(1)}$ is given by (2.39), from which we have

$$\delta B^{(1)} = \begin{bmatrix} X & \delta Q_{12}/(J_1-J_2) & \delta Q_{13}/(J_1-J_3)\\ \delta Q_{21}/(J_2-J_1) & X & \delta Q_{23}/(J_2-J_3)\\ \delta Q_{31}/(J_3-J_1) & \delta Q_{32}/(J_3-J_2) & X \end{bmatrix}.\qquad(5.15)$$

In the limit of |ζ | → ∞ in the UHP, from (5.13) and (5.14), we have

$$\delta B^{(1)} = \int_{\mathbb R}\frac{d\zeta'}{\pi}\,\Lambda(\zeta') - 2i\sum_{k=1}^{N^{+}_{11}} f^{+}_{11,k} - 2i\sum_{k=1}^{N^{+}_{33}} f^{+}_{33,k} + 2i\sum_{k=1}^{N^{-}_{11}} f^{-}_{11,k} + 2i\sum_{k=1}^{N^{-}_{33}} f^{-}_{33,k},\qquad(5.16)$$

which relates the variations in $Q$ to the variations in $\Lambda$, $f^{+}_{ii,k}$ and $f^{-}_{ii,k}$, the latter of which we can relate to the perturbations in the bound-state scattering data.

Directly from the definitions, it follows that

$$f^{+} = \mu^{+}\cdot P^{+}\cdot\delta\mu^{A+} - \frac{\delta R^{+}_{11}}{\left(R^{+}_{11}\right)^{2}}\,\mu^{+}_{1}\cdot\mu^{A+}_{1},\qquad f^{-} = \delta\mu^{-}\cdot P^{-}\cdot\mu^{A-} - \frac{\delta S^{-}_{11}}{\left(S^{-}_{11}\right)^{2}}\,\mu^{-}_{1}\cdot\mu^{A-}_{1}.\qquad(5.17)$$

Let us start by evaluating the residue of $f^{+}$ at the $k$th zero of $R^{+}_{11}$. First, according to (2.19), we have that $\mu^{+}_{2}$ has a simple pole at this zero. As a consequence, the first term in the first equation in (5.17) has only a first-order pole, while the second term has a second-order pole. Furthermore, the $\psi_2$ part of $\mu^{+}_{2}$ has a zero residue. Expanding the first equation in (5.17) and collecting the residues of it at the $k$th zero of $R^{+}_{11}$, one obtains

$$f^{+}_{11,k} = \left(\frac{1}{R^{+\prime}_{11}}\right)_{k}\left[\mu^{+}_{1}\left(\delta\mu^{A+}_{1} - R_{12}\,\delta\mu^{A+}_{2}\right)\right]_{k} + \left[\frac{R^{+\prime\prime}_{11}}{\left(R^{+\prime}_{11}\right)^{3}}\right]_{k}\left(\mu^{+}_{1}\mu^{A+}_{1}\,\delta R^{+}_{11}\right)_{k} - \left(\frac{1}{R^{+\prime}_{11}}\right)^{2}_{k}\left[\partial_\zeta\left(\mu^{+}_{1}\mu^{A+}_{1}\,\delta R^{+}_{11}\right)\right]_{k}.\qquad(5.18)$$

To evaluate this, we will need the relationship between $\mu^{A+}_{1}$ and $\mu^{A+}_{2}$ at the zero. This will most easily follow from applying the assumption of compact support to (2.34) and (2.35). The argument is that under compact support, $\mu^{A-}$ has no pole at the zeros of $R^{+}_{11}$. Whence the residues of the right-hand side of the second equation in (2.34), at these zeros, must vanish. Similar considerations can be applied at the zeros of $R^{-}_{33}$ in the LHP. For the zeros of $S^{-}_{11}$ and $S^{+}_{33}$ in their respective half-planes, one would apply the same analysis, but use relations (2.24) and (2.25) instead. The net result is that

$$\begin{aligned}
\mu^{A+}_{1,k} &= R_{12,k}\,\mu^{A+}_{2,k} &&\text{at the $k$th zero of } R^{+}_{11},\\
\mu^{+}_{3,k} &= S_{23,k}\,\mu^{+}_{2,k} &&\text{at the $k$th zero of } S^{+}_{33},\\
\mu^{-}_{1,k} &= S_{21,k}\,\mu^{-}_{2,k} &&\text{at the $k$th zero of } S^{-}_{11},\\
\mu^{A-}_{3,k} &= R_{32,k}\,\mu^{A-}_{2,k} &&\text{at the $k$th zero of } R^{-}_{33}.
\end{aligned}\qquad(5.19)$$

Applying (4.13) and the first relation in (5.19) to (5.18), we obtain

$$f^{+}_{11,k} = \left(\mu^{+}_{1}\mu^{A+}_{2}\right)_{k}\delta C_{12,k} + C_{12,k}\left[\partial_\zeta\left(\mu^{+}_{1}\mu^{A+}_{2}\right)\right]_{k}\delta\zeta^{+}_{11,k}.\qquad(5.20)$$

Similarly with the other zeros, we find

$$f^{-}_{33,k} = -\left(\mu^{-}_{3}\mu^{A-}_{2}\right)_{k}\delta C_{32,k} - C_{32,k}\left[\partial_\zeta\left(\mu^{-}_{3}\mu^{A-}_{2}\right)\right]_{k}\delta\zeta^{-}_{33,k},\qquad(5.21)$$

$$f^{-}_{11,k} = \left(\mu^{-}_{2}\mu^{A-}_{1}\right)_{k}\delta C_{21,k} + C_{21,k}\left[\partial_\zeta\left(\mu^{-}_{2}\mu^{A-}_{1}\right)\right]_{k}\delta\zeta^{-}_{11,k},\qquad(5.22)$$

$$f^{+}_{33,k} = -\left(\mu^{+}_{2}\mu^{A+}_{3}\right)_{k}\delta C_{23,k} - C_{23,k}\left[\partial_\zeta\left(\mu^{+}_{2}\mu^{A+}_{3}\right)\right]_{k}\delta\zeta^{+}_{33,k}.\qquad(5.23)$$

With this, we have completed the calculations necessary to obtain the variations in the potentials due to perturbations in the scattering data for inversion about $+\infty$. In the next section, we shall combine the results in this and the previous section to obtain the SE and ASE, and to obtain the inner products between the SE and ASE as well as the closure relation.

6. The square eigenfunctions (SE), adjoint square eigenfunctions (ASE), and their inner products and closure relations

For inversion about $+\infty$, the SEs are the coefficients of the variations presented in the expressions obtained in the preceding section. Looking over the various components of the SE in the above expressions, it becomes clear that a simple labeling according to the components of the Jost functions is impractical. So before proceeding further, let us devise a simpler labeling system for this set which is associated with inversion about $+\infty$. First we have six potentials. We require only six of the nine components in the 3 × 3 matrix (5.15), which are the 12, 13, 21, 23, 31 and 32 components. These we can just place into a column vector, in increasing order per their components. We will also need a square matrix to carry


the differences in the diagonal elements of J. So we take

$$\delta Q(x) = \begin{bmatrix}\delta Q_{12}(x)\\ \delta Q_{13}(x)\\ \delta Q_{21}(x)\\ \delta Q_{23}(x)\\ \delta Q_{31}(x)\\ \delta Q_{32}(x)\end{bmatrix},\qquad M = \begin{bmatrix} J_1-J_2 & 0 & 0 & 0 & 0 & 0\\ 0 & J_1-J_3 & 0 & 0 & 0 & 0\\ 0 & 0 & J_2-J_1 & 0 & 0 & 0\\ 0 & 0 & 0 & J_2-J_3 & 0 & 0\\ 0 & 0 & 0 & 0 & J_3-J_1 & 0\\ 0 & 0 & 0 & 0 & 0 & J_3-J_2 \end{bmatrix}.\qquad(6.1)$$
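As a minimal bookkeeping sketch (the numerical values of $J$ and $\delta Q$ below are illustrative placeholders of our own, not from the paper), the ordering convention in (6.1) and its use with (5.15) can be checked as follows:

```python
import numpy as np

# The six off-diagonal (i, j) pairs are stacked in the order
# 12, 13, 21, 23, 31, 32, and M is diagonal with entries J_i - J_j.
J = np.array([3.0, 1.0, -2.0])                    # hypothetical diag(J)
pairs = [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
M = np.diag([J[i - 1] - J[j - 1] for (i, j) in pairs])

# With this ordering, the off-diagonal entries of dB1 in (5.15) are
# dQ_ij / (J_i - J_j), i.e. the stacked vector M^{-1} @ dQ.
dQ = np.array([0.5, -1.0, 2.0, 0.25, 1.5, -0.75])  # stacked dQ_ij (hypothetical)
dB1 = np.linalg.solve(M, dQ)                       # entries dQ_ij / (J_i - J_j)
print(np.diag(M), dB1)
```

Note that $M$ is invertible precisely when the $J_i$ are distinct, which is the nondegeneracy assumption of the title.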

Once we have this order determined, we can set up each SE as a column matrix, like the one in (6.1). From (5.10), (5.15) and (5.16), we can now identify the individual SEs which belong to the continuous spectrum, and similarly from (5.15), (5.16) and (5.20)–(5.23), we can identify those which belong to the bound-state spectra.

For the continuous spectra, we require six states. These are

$$Z^{+}_{ij}(x,\zeta) = \begin{bmatrix}\mu^{+}_{1i}\,\mu^{A+}_{j2}\\ \mu^{+}_{1i}\,\mu^{A+}_{j3}\\ \mu^{+}_{2i}\,\mu^{A+}_{j1}\\ \mu^{+}_{2i}\,\mu^{A+}_{j3}\\ \mu^{+}_{3i}\,\mu^{A+}_{j1}\\ \mu^{+}_{3i}\,\mu^{A+}_{j2}\end{bmatrix},\qquad\text{for }(i,j)=(1,2),\ (1,3),\ (2,3),\qquad(6.2)$$

and

$$Z^{-}_{ij}(x,\zeta) = \begin{bmatrix}\mu^{-}_{1i}\,\mu^{A-}_{j2}\\ \mu^{-}_{1i}\,\mu^{A-}_{j3}\\ \mu^{-}_{2i}\,\mu^{A-}_{j1}\\ \mu^{-}_{2i}\,\mu^{A-}_{j3}\\ \mu^{-}_{3i}\,\mu^{A-}_{j1}\\ \mu^{-}_{3i}\,\mu^{A-}_{j2}\end{bmatrix},\qquad\text{for }(i,j)=(2,1),\ (3,1),\ (3,2),\qquad(6.3)$$

which are the SE for the continuous spectra.

For the bound states, we do not require the $(i,j)=(1,3),(3,1)$ states. For the zeros of $R^{+}_{11}$, we have

$$Z^{+}_{12,k}(x) = Z^{+}_{12}\!\left(x,\zeta^{+}_{11,k}\right),\qquad Z^{+}_{d,12,k}(x) = \left[\partial_\zeta\left(Z^{+}_{12}(x,\zeta)\right)\right]_{k}.\qquad(6.4)$$

Similarly for the zeros of S+33, we have

$$Z^{+}_{23,k}(x) = Z^{+}_{23}\!\left(x,\zeta^{+}_{33,k}\right),\qquad Z^{+}_{d,23,k}(x) = \left[\partial_\zeta\left(Z^{+}_{23}(x,\zeta)\right)\right]_{k},\qquad(6.5)$$

and for the zeros of S−11, we have

$$Z^{-}_{21,k}(x) = Z^{-}_{21}\!\left(x,\zeta^{-}_{11,k}\right),\qquad Z^{-}_{d,21,k}(x) = \left[\partial_\zeta\left(Z^{-}_{21}(x,\zeta)\right)\right]_{k},\qquad(6.6)$$


and lastly, for the zeros of R−33, we have

$$Z^{-}_{32,k}(x) = Z^{-}_{32}\!\left(x,\zeta^{-}_{33,k}\right),\qquad Z^{-}_{d,32,k}(x) = \left[\partial_\zeta\left(Z^{-}_{32}(x,\zeta)\right)\right]_{k}.\qquad(6.7)$$

From (5.10), (5.15) and (5.16), it follows that we have

$$M^{-1}\cdot\delta Q(x) = \int_{-\infty}^{\infty}\frac{d\zeta}{\pi}\,\Lambda(x,\zeta) + 2i\left[\Lambda^{+}_{11}(x)+\Lambda^{+}_{33}(x)+\Lambda^{-}_{11}(x)+\Lambda^{-}_{33}(x)\right],\qquad(6.8)$$

where we have defined

$$\Lambda(x,\zeta) \equiv Z^{+}_{12}(x,\zeta)\,\delta\rho_{12} - Z^{-}_{32}(x,\zeta)\,\delta\rho_{32} + Z^{-}_{21}(x,\zeta)\,\delta\sigma_{21} - Z^{+}_{23}(x,\zeta)\,\delta\sigma_{23} + Z^{-}_{31}(x,\zeta)\left(\delta\sigma_{31}+\rho_{32}\,\delta\sigma_{21}\right) - Z^{+}_{13}(x,\zeta)\left(\delta\sigma_{13}+\rho_{12}\,\delta\sigma_{23}\right),\qquad(6.9)$$

$$\Lambda^{+}_{11}(x) \equiv -\sum_{k=1}^{N^{+}_{11}}\left[C_{12,k}\,Z^{+}_{d,12,k}(x)\,\delta\zeta^{+}_{11,k} + Z^{+}_{12,k}(x)\,\delta C_{12,k}\right],\qquad(6.10)$$

$$\Lambda^{+}_{33}(x) \equiv \sum_{k=1}^{N^{+}_{33}}\left[C_{23,k}\,Z^{+}_{d,23,k}(x)\,\delta\zeta^{+}_{33,k} + Z^{+}_{23,k}(x)\,\delta C_{23,k}\right],\qquad(6.11)$$

$$\Lambda^{-}_{11}(x) \equiv \sum_{k=1}^{N^{-}_{11}}\left[C_{21,k}\,Z^{-}_{d,21,k}(x)\,\delta\zeta^{-}_{11,k} + Z^{-}_{21,k}(x)\,\delta C_{21,k}\right],\qquad(6.12)$$

$$\Lambda^{-}_{33}(x) \equiv -\sum_{k=1}^{N^{-}_{33}}\left[C_{32,k}\,Z^{-}_{d,32,k}(x)\,\delta\zeta^{-}_{33,k} + Z^{-}_{32,k}(x)\,\delta C_{32,k}\right].\qquad(6.13)$$

Looking at the above, these results generally agree and correlate with what we know from the AKNS [20, 21, 25] and Sasa–Satsuma [26, 31, 37] cases. However, there are some differences. As already noted, we have six reflection coefficients and only four classes of bound states, with the $\sigma_{13}$ and $\sigma_{31}$ coefficients being somehow different. For example, under compact support, extending $\sigma_{13}$ into the UHP, it has a pole at the zeros of $S^{+}_{33}$. However, the value of $S_{13}$ at that pole has no influence at all on the bound-state scattering data. Similarly for $\sigma_{31}$ in the LHP.

Let us take up the adjoint squared eigenfunctions and take them to be row vectors. For the continuous spectra, three of the ASE are given by $(i,j)=(1,2),(1,3),(2,3)$, where

$$Z^{A+}_{ij}(x,\zeta) = \left[\mu^{A+}_{i1}\mu^{+}_{2j},\ \mu^{A+}_{i1}\mu^{+}_{3j},\ \mu^{A+}_{i2}\mu^{+}_{1j},\ \mu^{A+}_{i2}\mu^{+}_{3j},\ \mu^{A+}_{i3}\mu^{+}_{1j},\ \mu^{A+}_{i3}\mu^{+}_{2j}\right](x,\zeta).\qquad(6.14)$$

The other three for the continuous spectra are given by (i, j) = (2, 1), (3, 1), (3, 2) where

$$Z^{A-}_{ij}(x,\zeta) = \left[\mu^{A-}_{i1}\mu^{-}_{2j},\ \mu^{A-}_{i1}\mu^{-}_{3j},\ \mu^{A-}_{i2}\mu^{-}_{1j},\ \mu^{A-}_{i2}\mu^{-}_{3j},\ \mu^{A-}_{i3}\mu^{-}_{1j},\ \mu^{A-}_{i3}\mu^{-}_{2j}\right](x,\zeta).\qquad(6.15)$$

For the bound states of the ASE, we again do not need the (13) or the (31) states. Consider the variations of the eigenvalues in (4.14)–(4.17). They are given by diagonal elements of $D^{\pm}_{ij}$ evaluated at the eigenvalue. While we could define a new state such as $Z^{A\pm}_{11}(x,\zeta_k)$, etc, such would be awkward, particularly when we consider contour integral formulations of the results to come. Also, such a state would only occur for the variations in the eigenvalues. So it would be best to convert this diagonal element into something resembling the $Z^{A\pm}$ states, but extended off the real axis and into the appropriate half-plane. One can do this by using (4.23) and the second equations found in each of (4.26)–(4.28). Toward that end, at the zeros of $R^{+}_{11}$, for $(i,j)=(1,2)$, we find that if we define the ASE states

$$Z^{A+}_{12,k}(x) = \left\{\frac{R^{+}_{11}(\zeta)}{R^{+\prime}_{11}(\zeta)}\,Z^{A+}_{12}(x,\zeta)\right\}_{k},\qquad(6.16)$$


and at the zeros of S+33, for (i, j) = (2, 3), if we define

$$Z^{A+}_{23,k}(x) = \left\{\frac{S^{+}_{33}(\zeta)}{S^{+\prime}_{33}(\zeta)}\,Z^{A+}_{23}(x,\zeta)\right\}_{k},\qquad(6.17)$$

then these states will become equivalent to the $Z^{A\pm}_{11}$, etc, states when evaluated at the appropriate zero. In the LHP, we similarly define

$$Z^{A-}_{21,k}(x) = \left\{\frac{S^{-}_{11}(\zeta)}{S^{-\prime}_{11}(\zeta)}\,Z^{A-}_{21}(x,\zeta)\right\}_{k}\qquad\text{and}\qquad Z^{A-}_{32,k}(x) = \left\{\frac{R^{-}_{33}(\zeta)}{R^{-\prime}_{33}(\zeta)}\,Z^{A-}_{32}(x,\zeta)\right\}_{k}.\qquad(6.18)$$

Note that $\mu_2$ or $\mu^{A}_{2}$ inside the above $Z^{A\pm}_{ij,k}$ states could have poles at the appropriate zero. However, the effect of including the appropriate diagonal element of $S$ or $R$ in these definitions effectively causes $\mu_2$ or $\mu^{A}_{2}$ to be replaced with the corresponding $\chi_2$ or $\chi^{A}_{2}$, which have no poles when evaluated at that zero.

The next ASE states are the derivatives, with respect to $\zeta$, of the above bound states, evaluated at those zeros. These states are

$$Z^{A+}_{d,12,k}(x) = \left\{\partial_\zeta\left(\frac{R^{+}_{11}(\zeta)}{R^{+\prime}_{11}(\zeta)}\,Z^{A+}_{12}(x,\zeta)\right)\right\}_{k}\quad\text{at the zeros of } R^{+}_{11},\qquad(6.19)$$

$$Z^{A+}_{d,23,k}(x) = \left\{\partial_\zeta\left(\frac{S^{+}_{33}(\zeta)}{S^{+\prime}_{33}(\zeta)}\,Z^{A+}_{23}(x,\zeta)\right)\right\}_{k}\quad\text{at the zeros of } S^{+}_{33},\qquad(6.20)$$

$$Z^{A-}_{d,21,k}(x) = \left\{\partial_\zeta\left(\frac{S^{-}_{11}(\zeta)}{S^{-\prime}_{11}(\zeta)}\,Z^{A-}_{21}(x,\zeta)\right)\right\}_{k}\quad\text{at the zeros of } S^{-}_{11},\qquad(6.21)$$

$$Z^{A-}_{d,32,k}(x) = \left\{\partial_\zeta\left(\frac{R^{-}_{33}(\zeta)}{R^{-\prime}_{33}(\zeta)}\,Z^{A-}_{32}(x,\zeta)\right)\right\}_{k}\quad\text{at the zeros of } R^{-}_{33}.\qquad(6.22)$$

We note that in this notation, (4.4) becomes equivalent to

$$D^{\pm}_{i,j}(\zeta) = -\int_{-\infty}^{\infty} Z^{A\pm}_{ij}(x,\zeta)\cdot\delta Q(x)\,dx.\qquad(6.23)$$

Let us now give the previous results on the variations of the scattering data in terms of these new states. We have that the variations in the reflection coefficients, (4.5)–(4.10), become

$$\delta\sigma_{21}(\zeta) = \frac{-1}{S^{-}_{11}(\zeta)}\int_{-\infty}^{\infty} Z^{A-}_{21}(x,\zeta)\cdot\delta Q(x)\,dx,\qquad(6.24)$$

$$\delta\sigma_{31}(\zeta)+\rho_{32}(\zeta)\,\delta\sigma_{21}(\zeta) = \frac{-1}{S^{-}_{11}(\zeta)R^{-}_{33}(\zeta)}\int_{-\infty}^{\infty} Z^{A-}_{31}(x,\zeta)\cdot\delta Q(x)\,dx,\qquad(6.25)$$

$$\delta\sigma_{13}(\zeta)+\rho_{12}(\zeta)\,\delta\sigma_{23}(\zeta) = \frac{-1}{S^{+}_{33}(\zeta)R^{+}_{11}(\zeta)}\int_{-\infty}^{\infty} Z^{A+}_{13}(x,\zeta)\cdot\delta Q(x)\,dx,\qquad(6.26)$$

$$\delta\sigma_{23}(\zeta) = \frac{-1}{S^{+}_{33}(\zeta)}\int_{-\infty}^{\infty} Z^{A+}_{23}(x,\zeta)\cdot\delta Q(x)\,dx,\qquad(6.27)$$

$$\delta\rho_{12}(\zeta) = \frac{1}{R^{+}_{11}(\zeta)}\int_{-\infty}^{\infty} Z^{A+}_{12}(x,\zeta)\cdot\delta Q(x)\,dx,\qquad(6.28)$$

$$\delta\rho_{32}(\zeta) = \frac{1}{R^{-}_{33}(\zeta)}\int_{-\infty}^{\infty} Z^{A-}_{32}(x,\zeta)\cdot\delta Q(x)\,dx.\qquad(6.29)$$


For the variations in the bound-state eigenvalues, using (4.23) and the second equations found in each of (4.26)–(4.28), we can express (4.14)–(4.17) in terms of (6.16)–(6.18). The result is

$$\delta\zeta^{+}_{11,k} = \frac{1}{\left(R^{+\prime}_{11}\right)_{k}C_{12,k}}\int_{-\infty}^{\infty} Z^{A+}_{12,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.30)$$

$$\delta\zeta^{+}_{33,k} = \frac{-1}{\left(S^{+\prime}_{33}\right)_{k}C_{23,k}}\int_{-\infty}^{\infty} Z^{A+}_{23,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.31)$$

$$\delta\zeta^{-}_{11,k} = \frac{-1}{\left(S^{-\prime}_{11}\right)_{k}C_{21,k}}\int_{-\infty}^{\infty} Z^{A-}_{21,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.32)$$

$$\delta\zeta^{-}_{33,k} = \frac{1}{\left(R^{-\prime}_{33}\right)_{k}C_{32,k}}\int_{-\infty}^{\infty} Z^{A-}_{32,k}(x)\cdot\delta Q(x)\,dx.\qquad(6.33)$$

For the normalization coefficients, (4.25)–(4.31) become

$$\delta C_{12,k} = \frac{1}{\left(R^{+\prime}_{11}\right)_{k}}\int_{-\infty}^{\infty} Z^{A+}_{d,12,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.34)$$

$$\delta C_{23,k} = \frac{-1}{\left(S^{+\prime}_{33}\right)_{k}}\int_{-\infty}^{\infty} Z^{A+}_{d,23,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.35)$$

$$\delta C_{21,k} = \frac{-1}{\left(S^{-\prime}_{11}\right)_{k}}\int_{-\infty}^{\infty} Z^{A-}_{d,21,k}(x)\cdot\delta Q(x)\,dx,\qquad(6.36)$$

$$\delta C_{32,k} = \frac{1}{\left(R^{-\prime}_{33}\right)_{k}}\int_{-\infty}^{\infty} Z^{A-}_{d,32,k}(x)\cdot\delta Q(x)\,dx.\qquad(6.37)$$

Given the above results, we can obtain the inner products between the SE and the ASE. Note that by (6.8), given any linear perturbations in the scattering data, one can obtain the variations in the potential. On the opposite side, per (6.24)–(6.37), given any linear perturbation in the potential, one can obtain the resulting variations in the scattering data. So if we take (6.8) and insert it into (6.24)–(6.37), then we must come back to identically the same thing. Requiring the result of this substitution to be of this form gives us

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{12}(x,\zeta)\cdot M\cdot Z^{+}_{12}(x,\zeta') = -\pi R^{+}_{11}\,\delta(\zeta-\zeta'),\qquad(6.38)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{13}(x,\zeta)\cdot M\cdot Z^{+}_{13}(x,\zeta') = -\pi S^{+}_{33}R^{+}_{11}\,\delta(\zeta-\zeta'),\qquad(6.39)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{23}(x,\zeta)\cdot M\cdot Z^{+}_{23}(x,\zeta') = -\pi S^{+}_{33}\,\delta(\zeta-\zeta'),\qquad(6.40)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{21}(x,\zeta)\cdot M\cdot Z^{-}_{21}(x,\zeta') = +\pi S^{-}_{11}\,\delta(\zeta-\zeta'),\qquad(6.41)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{31}(x,\zeta)\cdot M\cdot Z^{-}_{31}(x,\zeta') = +\pi S^{-}_{11}R^{-}_{33}\,\delta(\zeta-\zeta'),\qquad(6.42)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{32}(x,\zeta)\cdot M\cdot Z^{-}_{32}(x,\zeta') = +\pi R^{-}_{33}\,\delta(\zeta-\zeta'),\qquad(6.43)$$


and

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{12}\!\left(x,\zeta^{+}_{11,k}\right)\cdot M\cdot Z^{+}_{d,12,k'}(x) = -\frac{i}{2}\left(R^{+\prime}_{11}\right)_{k}C_{12,k}\,\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{+}_{11}),\qquad(6.44)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{23}\!\left(x,\zeta^{+}_{33,k}\right)\cdot M\cdot Z^{+}_{d,23,k'}(x) = -\frac{i}{2}\left(S^{+\prime}_{33}\right)_{k}C_{23,k}\,\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{+}_{33}),\qquad(6.45)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{21}\!\left(x,\zeta^{-}_{11,k}\right)\cdot M\cdot Z^{-}_{d,21,k'}(x) = +\frac{i}{2}\left(S^{-\prime}_{11}\right)_{k}C_{21,k}\,\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{-}_{11}),\qquad(6.46)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{32}\!\left(x,\zeta^{-}_{33,k}\right)\cdot M\cdot Z^{-}_{d,32,k'}(x) = +\frac{i}{2}\left(R^{-\prime}_{33}\right)_{k}C_{32,k}\,\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{-}_{33}),\qquad(6.47)$$

and

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{d,12,k}(x)\cdot M\cdot Z^{+}_{12,k'}(x) = -\frac{i}{2}\left(R^{+\prime}_{11}\right)_{k}\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{+}_{11}),\qquad(6.48)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A+}_{d,23,k}(x)\cdot M\cdot Z^{+}_{23,k'}(x) = +\frac{i}{2}\left(S^{+\prime}_{33}\right)_{k}\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{+}_{33}),\qquad(6.49)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{d,21,k}(x)\cdot M\cdot Z^{-}_{21,k'}(x) = -\frac{i}{2}\left(S^{-\prime}_{11}\right)_{k}\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{-}_{11}),\qquad(6.50)$$

$$\int_{-\infty}^{\infty} dx\; Z^{A-}_{d,32,k}(x)\cdot M\cdot Z^{-}_{32,k'}(x) = +\frac{i}{2}\left(R^{-\prime}_{33}\right)_{k}\delta_{k'k}\qquad (k,k'=1,2,\ldots,N^{-}_{33}),\qquad(6.51)$$

where $\delta_{k'k}$ is the Kronecker delta and $\delta(x)$ is the Dirac delta function. All other possible inner products vanish. As a comment, since $\delta\sigma_{13}$ and $\delta\sigma_{31}$ always and only occur in the combinations $\delta\sigma_{13}(\zeta)+\rho_{12}(\zeta)\delta\sigma_{23}(\zeta)$ and $\delta\sigma_{31}(\zeta)+\rho_{32}(\zeta)\delta\sigma_{21}(\zeta)$, the inner products of the ASE and SE for the continuous spectrum are diagonal, as above.

Now, let us do the opposite: take (6.24)–(6.37) and insert these equations into (6.8). Then since all the $\delta Q$'s are linearly independent and arbitrary for all $x$, it follows that

$$\int_{-\infty}^{\infty}\frac{d\zeta}{\pi}\left\{\frac{1}{S^{-}_{11}(\zeta)}Z^{-}_{21}(x,\zeta)\cdot Z^{A-}_{21}(y,\zeta) + \frac{1}{S^{-}_{11}(\zeta)R^{-}_{33}(\zeta)}Z^{-}_{31}(x,\zeta)\cdot Z^{A-}_{31}(y,\zeta) + \frac{1}{R^{-}_{33}(\zeta)}Z^{-}_{32}(x,\zeta)\cdot Z^{A-}_{32}(y,\zeta)\right\}$$
$$-\int_{-\infty}^{\infty}\frac{d\zeta}{\pi}\left\{\frac{1}{R^{+}_{11}(\zeta)}Z^{+}_{12}(x,\zeta)\cdot Z^{A+}_{12}(y,\zeta) + \frac{1}{S^{+}_{33}(\zeta)R^{+}_{11}(\zeta)}Z^{+}_{13}(x,\zeta)\cdot Z^{A+}_{13}(y,\zeta) + \frac{1}{S^{+}_{33}(\zeta)}Z^{+}_{23}(x,\zeta)\cdot Z^{A+}_{23}(y,\zeta)\right\}$$
$$+\,2i\sum_{k=1}^{N^{+}_{11}}\frac{1}{\left(R^{+\prime}_{11}\right)_{k}}\left[Z^{+}_{d,12,k}(x)\cdot Z^{A+}_{12,k}(y) + Z^{+}_{12,k}(x)\cdot Z^{A+}_{d,12,k}(y)\right]$$
$$+\,2i\sum_{k=1}^{N^{+}_{33}}\frac{1}{\left(S^{+\prime}_{33}\right)_{k}}\left[Z^{+}_{d,23,k}(x)\cdot Z^{A+}_{23,k}(y) + Z^{+}_{23,k}(x)\cdot Z^{A+}_{d,23,k}(y)\right]$$
$$+\,2i\sum_{k=1}^{N^{-}_{11}}\frac{1}{\left(S^{-\prime}_{11}\right)_{k}}\left[Z^{-}_{d,21,k}(x)\cdot Z^{A-}_{21,k}(y) + Z^{-}_{21,k}(x)\cdot Z^{A-}_{d,21,k}(y)\right]$$
$$+\,2i\sum_{k=1}^{N^{-}_{33}}\frac{1}{\left(R^{-\prime}_{33}\right)_{k}}\left[Z^{-}_{d,32,k}(x)\cdot Z^{A-}_{32,k}(y) + Z^{-}_{32,k}(x)\cdot Z^{A-}_{d,32,k}(y)\right] = \delta(x-y)\,M^{-1},\qquad(6.52)$$

which is the closure relation.

In the case of compact support, one can readily show that, upon using (4.23), the second equations found in each of (4.26)–(4.28) and (5.19), this closure relation has the expected representation

$$\int_{\bar C}\frac{d\zeta}{\pi}\frac{1}{S^{-}_{11}(\zeta)}Z^{-}_{21}(x,\zeta)\cdot Z^{A-}_{21}(y,\zeta) + \int_{\bar C}\frac{d\zeta}{\pi}\frac{1}{S^{-}_{11}(\zeta)R^{-}_{33}(\zeta)}Z^{-}_{31}(x,\zeta)\cdot Z^{A-}_{31}(y,\zeta) + \int_{\bar C}\frac{d\zeta}{\pi}\frac{1}{R^{-}_{33}(\zeta)}Z^{-}_{32}(x,\zeta)\cdot Z^{A-}_{32}(y,\zeta)$$
$$-\int_{C}\frac{d\zeta}{\pi}\frac{1}{R^{+}_{11}(\zeta)}Z^{+}_{12}(x,\zeta)\cdot Z^{A+}_{12}(y,\zeta) - \int_{C}\frac{d\zeta}{\pi}\frac{1}{S^{+}_{33}(\zeta)R^{+}_{11}(\zeta)}Z^{+}_{13}(x,\zeta)\cdot Z^{A+}_{13}(y,\zeta) - \int_{C}\frac{d\zeta}{\pi}\frac{1}{S^{+}_{33}(\zeta)}Z^{+}_{23}(x,\zeta)\cdot Z^{A+}_{23}(y,\zeta) = \delta(x-y)\,M^{-1},\qquad(6.53)$$

where $\bar C$ is the standard contour in the LHP which goes from $\zeta=-\infty$ on the real $\zeta$-axis to $\zeta=+\infty$ on the real $\zeta$-axis, while going under all zeros of $S^{-}_{11}(\zeta)$ and $R^{-}_{33}(\zeta)$, while $C$ is the standard contour in the UHP going from $\zeta=-\infty$ on the real $\zeta$-axis to $\zeta=+\infty$ on the real $\zeta$-axis, while going above all zeros of $S^{+}_{33}(\zeta)$ and $R^{+}_{11}(\zeta)$. See, for instance, [22, 23]. In order to verify the above relation, one has to keep in mind that some of the $\mu_2$'s and $\mu^{A}_{2}$'s will have poles in the respective half-planes.

7. Conclusion

What we have done is to take the procedure outlined in [25] and obtain the covering set of squared eigenfunctions and adjoint squared eigenfunctions for the 3 × 3 eigenvalue problem given in (1.1). We see that this covering set is a set of products of the Jost solutions and the adjoint Jost solutions. From this covering set, upon applying the proper reductions to the potential matrix, $Q$, and the resulting symmetries of the scattering matrix, $S$, and its inverse, $R$, one can then obtain the squared eigenfunctions and adjoint squared eigenfunctions for the 3WRI [22, 23].

The form of the LDRs in section 3 was designed to follow the results in [22, 23]. We initially attempted to obtain the LDRs by using only the $\chi$'s, which are analytic in the appropriate half-planes. However, when that was done, we found no simple and obvious way to reduce the corresponding six LDRs to a set of only three independent equations containing three independent $\chi$'s. This is not to say that such could not be done. If it should turn out that this could be done, then we would have another set of LDRs and another set of scattering data, all presumably equivalent to what is given here. On the other hand, the method used here was one which ensured that one always had, for all $\zeta$ in the complex plane and for all real $x$,


a basis of Jost functions which spanned the rank-three vector space. In contrast, the $\chi$'s have the distinct disadvantage that wherever $R^{+}_{11}$, $S^{+}_{33}$, $S^{-}_{11}$ and $R^{-}_{33}$ had zeros, one would not have a complete basis, since the $\psi_2$ or the $\psi^{A}_{2}$ component would be missing. However, the manner in which that consideration affects the ability to construct a workable set of LDRs for the $\chi$'s remains to be understood.

There is also another argument which could be made in favor of the LDRs used here. What we have done can be seen to be following the idea of developing LDRs for the Jost functions $\Psi$ or $\Phi$. From relationships (2.15) and (2.17), one can construct linear dispersion relations for the extension of $\psi_2$ into the UHP and the LHP, as was done in [22, 40]. Doing this for all the columns in $\Psi$ will give one exactly the required number of independent LDRs, since there will be one for each independent $\psi_j$. Then expressing these in terms of the $\chi$'s (or the $\mu$'s) would give one a set of LDRs equivalent to the set given here.

We have also discussed the two types of solitons which occur in this 3 × 3 eigenvalue problem. Features of these have been described at the end of section 3. What is to be noted is that the $Q_{13}$ component of the sl(3) (resonant) soliton is generally unstable. One can create initial data which will generate the $Q_{13}$ component of the sl(3) soliton. However, this component is generally transient and, unless carefully nurtured, will eventually decay back into the two sl(2) (AKNS) solitons from which it was created. Obviously, in those IST models where one has a 3 × 3 eigenvalue problem, one should not expect a strict conservation of the total number or type of solitons.

It is also interesting that the results here are quite similar to the revisited AKNS results [25]. We particularly note that the expressions for the bound-state ASE are functionally the same. All that really differs for the bound-state part is the indexing. We remark on the non-correspondence between the number of reflection coefficients and the number of sets of bound states, these being six and four, respectively, for the general 3 × 3 system. The number of reflection coefficients must match the number of independent potential components, while the number of sets of bound states should equal the number of independent 'transmission coefficients', in this case $R^{+}_{11}$, $S^{+}_{33}$, $S^{-}_{11}$ and $R^{-}_{33}$. There are also the higher-order solitons, which can arise from equal zeros of any two of these transmission coefficients in the appropriate half-plane. Lastly, we mention one method for the classification of the various types of solitons which will arise in these systems [18].

Acknowledgments

The authors would like to thank two anonymous referees for their comments, which contributed significantly to the final version of this paper. This research has been supported in part by NSF grant number DMS-0505566.

Appendix

Given the scattering data stated at the end of section 3, we wish to reconstruct the matrices $R$ and $S$. First we shall detail how the off-diagonal elements may be calculated, given the diagonal elements. Then we shall determine the diagonal elements to the degree that we are not required to know the bound-state eigenvalues. Lastly we shall use the bound-state spectra and the required analytical properties to construct the final solution. The procedure used here is equivalent to that of the appendix in [22], but is given in the notation used herein.


We start with the reflection coefficients on the real $\zeta$-axis for inversion about $+\infty$. We have the following matrix components known in terms of the diagonal elements:

$$S_{j1} = \sigma_{j1} S^{-}_{11}\ (j=2,3),\qquad S_{j3} = \sigma_{j3} S^{+}_{33}\ (j=1,2),\qquad R_{12} = \rho_{12} R^{+}_{11},\qquad R_{32} = \rho_{32} R^{-}_{33}.\qquad(\mathrm{A.1})$$

Next we express the product $\det(S)\,S^{-1}$ in terms of the components of $S$. Since $\det(S)=1$, it follows that this expression must also equal $R$. Thus, we may construct the matrix equation

$$R - \det(S)\,S^{-1} = 0,\qquad(\mathrm{A.2})$$

and use (A.1) to eliminate six off-diagonal components of $R$ and $S$ from this matrix equation, in favor of the reflection coefficients and diagonal components of the same. After doing that, take the $[1,2]$, $[2,2]$ and $[3,2]$ components of this matrix equation and solve these three conditions for $S_{12}$, $S_{32}$ and $R_{22}$, obtaining

$$R_{22} = S^{-}_{11} S^{+}_{33}\left(1-\sigma_{13}\sigma_{31}\right),\qquad(\mathrm{A.3})$$

$$S_{12} = -S_{22}\left(\alpha_3\sigma_{13}\rho_{32} + \alpha_1\rho_{12}\right),\qquad S_{32} = -S_{22}\left(\alpha_3\rho_{32} + \alpha_1\rho_{12}\sigma_{31}\right),\qquad(\mathrm{A.4})$$

where

$$\alpha_1 = \frac{R^{+}_{11} S^{-}_{11}}{R_{22} S_{22}},\qquad \alpha_3 = \frac{R^{-}_{33} S^{+}_{33}}{R_{22} S_{22}}.\qquad(\mathrm{A.5})$$
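Since $\det(S)\,S^{-1}$ is the adjugate of $S$, each component of (A.2) equates an entry of $R$ to a $2\times2$ cofactor of $S$; for the $[2,2]$ component this reads $R_{22}=S_{11}S_{33}-S_{13}S_{31}$, which reduces to (A.3) upon substituting (A.1). A quick numerical sanity check of this cofactor structure, with a random complex matrix of our own standing in for $S$, is:

```python
import numpy as np

# For any invertible S, det(S) * inv(S) is the adjugate of S, so its
# [2,2] entry (0-indexed [1,1]) is the cofactor S11*S33 - S13*S31.
rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

R = np.linalg.det(S) * np.linalg.inv(S)          # the adjugate of S
r22_cofactor = S[0, 0] * S[2, 2] - S[0, 2] * S[2, 0]
print(abs(R[1, 1] - r22_cofactor))               # ~ 0 up to roundoff
```

The same adjugate identity underlies the remaining off-diagonal components of $R$ listed in (A.12)–(A.14).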

This has now given us expressions for all off-diagonal components of $S$. Taking next the $[1,1]$ and $[3,3]$ components of (A.2), we find that we can solve for $\alpha_1$ and $\alpha_3$, obtaining

$$\alpha_1 = \frac{\beta_1}{D},\qquad \alpha_3 = \frac{\beta_3}{D},\qquad(\mathrm{A.6})$$

where

$$\beta_1 = 1-\sigma_{13}\sigma_{31} + \rho_{32}\left(\sigma_{23}-\sigma_{21}\sigma_{13}\right),\qquad \beta_3 = 1-\sigma_{13}\sigma_{31} + \rho_{12}\left(\sigma_{21}-\sigma_{23}\sigma_{31}\right),\qquad(\mathrm{A.7})$$

$$D = \left(1-\sigma_{13}\sigma_{31}\right)\left(1-\sigma_{13}\sigma_{31} - \sigma_{13}\rho_{32}\sigma_{21} - \sigma_{31}\rho_{12}\sigma_{23} - \rho_{12}\sigma_{21}\sigma_{23}\rho_{32}\right).\qquad(\mathrm{A.8})$$

Taking now the $[2,2]$ component of the new matrix equation $S\cdot R = I_{3}$, one can obtain $S_{22}$. This condition can be seen to be equivalent to requiring $\det(S)=1$. One has

$$S_{22} = \frac{D}{\beta_1 R_{22}\beta_3}.\qquad(\mathrm{A.9})$$

Given this result, all the components of $S$ and $R$ can now be given in terms of the six reflection coefficients and $S^{-}_{11}$ and $S^{+}_{33}$. We have for the diagonal components

$$R^{+}_{11} S^{-}_{11} = \frac{1}{\beta_3},\qquad R_{22} S_{22} = \frac{D}{\beta_1\beta_3},\qquad R^{-}_{33} S^{+}_{33} = \frac{1}{\beta_1},\qquad(\mathrm{A.10})$$

where $R_{22}$ is given by (A.3). $S_{12}$ and $S_{32}$ in (A.4) now become

$$S_{12} = -\frac{\rho_{12}\sigma_{23}\rho_{32} + \sigma_{13}\rho_{32} + \rho_{12}}{\beta_1\beta_3 S^{-}_{11} S^{+}_{33}},\qquad S_{32} = -\frac{\rho_{12}\sigma_{21}\rho_{32} + \rho_{32} + \rho_{12}\sigma_{31}}{\beta_1\beta_3 S^{-}_{11} S^{+}_{33}}.\qquad(\mathrm{A.11})$$

The components of $R$ can now be calculated from the above and the remaining components of the matrix equation (A.2). The diagonal components are given in (A.3) and (A.10). Giving now all of the non-diagonal components of $R$, we have

$$R_{12} = \frac{\rho_{12}}{S^{-}_{11}\beta_3},\qquad R_{13} = -\frac{\sigma_{13}+\rho_{12}\sigma_{23}}{S^{-}_{11}\beta_3},\qquad(\mathrm{A.12})$$

$$R_{21} = -S^{-}_{11} S^{+}_{33}\left(\sigma_{21}-\sigma_{23}\sigma_{31}\right),\qquad R_{23} = -S^{-}_{11} S^{+}_{33}\left(\sigma_{23}-\sigma_{21}\sigma_{13}\right),\qquad(\mathrm{A.13})$$

$$R_{31} = -\frac{\sigma_{31}+\rho_{32}\sigma_{21}}{S^{+}_{33}\beta_1},\qquad R_{32} = \frac{\rho_{32}}{S^{+}_{33}\beta_1}.\qquad(\mathrm{A.14})$$

The last thing to do is to construct $R^{+}_{11}$, $R^{-}_{33}$, $S^{-}_{11}$ and $S^{+}_{33}$. In analogy to [22], let us define the functions $h^{\pm}_{i}(\zeta)$ (for $i=1,3$) by

$$h^{+}_{1}(\zeta) \equiv R^{+}_{11}(\zeta)\prod_{k=1}^{N^{+}_{11}}\left(\frac{\zeta-\zeta^{+*}_{11,k}}{\zeta-\zeta^{+}_{11,k}}\right),\qquad h^{+}_{3}(\zeta) \equiv S^{+}_{33}(\zeta)\prod_{k=1}^{N^{+}_{33}}\left(\frac{\zeta-\zeta^{+*}_{33,k}}{\zeta-\zeta^{+}_{33,k}}\right),\qquad(\mathrm{A.15})$$

$$h^{-}_{1}(\zeta) \equiv S^{-}_{11}(\zeta)\prod_{k=1}^{N^{-}_{11}}\left(\frac{\zeta-\zeta^{-*}_{11,k}}{\zeta-\zeta^{-}_{11,k}}\right),\qquad h^{-}_{3}(\zeta) \equiv R^{-}_{33}(\zeta)\prod_{k=1}^{N^{-}_{33}}\left(\frac{\zeta-\zeta^{-*}_{33,k}}{\zeta-\zeta^{-}_{33,k}}\right),\qquad(\mathrm{A.16})$$

where $*$ denotes complex conjugation. Then $\ln h^{+}_{i}(\zeta)$ ($\ln h^{-}_{i}(\zeta)$) are analytic in the UHP (LHP) and must vanish at least as fast as $1/\zeta$, as $|\zeta|\to\infty$, in each respective half-plane. Thus, for $\zeta$ in the UHP,

$$\ln h^{+}_{1}(\zeta) = -\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{d\zeta'}{\zeta'-\zeta}\,\ln\beta_3(\zeta') - \int_{-\infty}^{\infty}\frac{d\zeta'}{2\pi i\,(\zeta'-\zeta)}\left\{\sum_{k=1}^{N^{+}_{11}}\ln\left(\frac{\zeta'-\zeta^{+}_{11,k}}{\zeta'-\zeta^{+*}_{11,k}}\right) + \sum_{k=1}^{N^{-}_{11}}\ln\left(\frac{\zeta'-\zeta^{-}_{11,k}}{\zeta'-\zeta^{-*}_{11,k}}\right)\right\},\qquad(\mathrm{A.17})$$

$$\ln h^{+}_{3}(\zeta) = -\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{d\zeta'}{\zeta'-\zeta}\,\ln\beta_1(\zeta') - \int_{-\infty}^{\infty}\frac{d\zeta'}{2\pi i\,(\zeta'-\zeta)}\left\{\sum_{k=1}^{N^{+}_{33}}\ln\left(\frac{\zeta'-\zeta^{+}_{33,k}}{\zeta'-\zeta^{+*}_{33,k}}\right) + \sum_{k=1}^{N^{-}_{33}}\ln\left(\frac{\zeta'-\zeta^{-}_{33,k}}{\zeta'-\zeta^{-*}_{33,k}}\right)\right\},\qquad(\mathrm{A.18})$$

while for ζ in the LHP

$$\ln h^{-}_{1}(\zeta) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{d\zeta'}{\zeta'-\zeta}\,\ln\beta_3(\zeta') + \int_{-\infty}^{\infty}\frac{d\zeta'}{2\pi i\,(\zeta'-\zeta)}\left\{\sum_{k=1}^{N^{+}_{11}}\ln\left(\frac{\zeta'-\zeta^{+}_{11,k}}{\zeta'-\zeta^{+*}_{11,k}}\right) + \sum_{k=1}^{N^{-}_{11}}\ln\left(\frac{\zeta'-\zeta^{-}_{11,k}}{\zeta'-\zeta^{-*}_{11,k}}\right)\right\},\qquad(\mathrm{A.19})$$

$$\ln h^{-}_{3}(\zeta) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{d\zeta'}{\zeta'-\zeta}\,\ln\beta_1(\zeta') + \int_{-\infty}^{\infty}\frac{d\zeta'}{2\pi i\,(\zeta'-\zeta)}\left\{\sum_{k=1}^{N^{+}_{33}}\ln\left(\frac{\zeta'-\zeta^{+}_{33,k}}{\zeta'-\zeta^{+*}_{33,k}}\right) + \sum_{k=1}^{N^{-}_{33}}\ln\left(\frac{\zeta'-\zeta^{-}_{33,k}}{\zeta'-\zeta^{-*}_{33,k}}\right)\right\}.\qquad(\mathrm{A.20})$$

References

[1] Ablowitz M J, Kaup D J, Newell A C and Segur H 1974 The inverse scattering transform—Fourier analysis for nonlinear problems Stud. Appl. Math. 53 249–315
[2] Alber M S, Luther G G, Marsden J E and Robbins J M 1998 Geometric phases, reduction and Lie–Poisson structure for the resonant three-wave interaction Physica D 123 271–90
[3] Baronio F et al 2009 Frequency generation and solitonic decay in three-wave interactions Opt. Express 17 13889
[4] Beals R and Coifman R R 1984 Scattering and inverse scattering for first order systems Commun. Pure Appl. Math. 37 39–90
[5] Beals R and Coifman R R 1985 Inverse scattering and evolution equations Commun. Pure Appl. Math. 38 29–42
[6] Buryak A V and Kivshar Y S 1996 Stability of three-wave parametric solitons in diffractive quadratic media Phys. Rev. Lett. 77 5210–3
[7] Buryak A V, Di Trapani P, Skryabin D V and Trillo S 2002 Optical solitons due to quadratic nonlinearities: from basic physics to futuristic applications Phys. Rep. 370 63–235
[8] Calogero F and Degasperis A 2005 Novel solution of the system describing the resonant interaction of three waves Physica D 200 242–56
[9] Champneys A R and Malomed B A 2000 Embedded solitons in a three-wave system Phys. Rev. E 61 886–90
[10] Chirkin A S, Volkov V V, Laptev G D and Morozov E Y 2000 Consecutive three-wave interactions in nonlinear optics of periodically inhomogeneous media Quantum Electron. 30 847–58
[11] Chow C C, Bers A and Ram A K 1992 Spatiotemporal chaos in the nonlinear three-wave interaction Phys. Rev. Lett. 68 3379–82
[12] Degasperis A, Conforti M, Baronio F and Wabnitz S 2006 Stable control of pulse speed in parametric three-wave solitons Phys. Rev. Lett. 97 093901
[13] Degasperis A, Conforti M, Baronio F and Wabnitz S 2007 Effects of nonlinear wave coupling: accelerated solitons Eur. Phys. J. Spec. Top. 147 233–52
[14] Gerdjikov V S and Khristov E Kh 1980 On the evolution equations, solvable through the inverse scattering method: I. Spectral theory Bulg. J. Phys. 7 28
[15] Gerdjikov V S and Kulish P P 1981 The generating operator for the N × N linear system Physica D 3 549–64
[16] Gerdjikov V S 1986 Generalized Fourier transforms for the soliton equations, gauge covariant formulation Inverse Problems 2 51–74
[17] Gerdjikov V S 2005 Basic aspects of soliton theory Geometry, Integrability and Quantization VI ed I M Mladenov and A C Hirshfeld (Sofia, Bulgaria: Softex) pp 78–122
[18] Gerdjikov V S and Kaup D J 2006 How many types of soliton solutions do we know? Geometry, Integrability and Quantization ed I M Mladenov and M De Leon (Sofia, Bulgaria: Softex) pp 11–34
[19] Ibragimov E, Struthers A A, Kaup D J, Khaydarov J D and Singer K D 1999 Three-wave interaction solitons in

optical parametric amplification Phys. Rev. E 59 6122–37[20] Kaup D J 1976 A perturbation expansion for the Zakharov–Shabat inverse scattering transform SIAM J. Appl.

Math. 31 121[21] Kaup D J 1976 Closure of the squared Zakharov–Shabat eigenstates J. Math. Anal. Appl. 54 849–64[22] Kaup D J 1976 The three-wave interaction a non-dispersive phenomenon Stud. Appl. Math. 55 9–44[23] Kaup D J, Reiman A and Bers A 1979 Space-time evolution of nonlinear three-wave interactions: I. Interaction

in a homogeneous medium Rev. Mod. Phys. 51 275–310[24] Kaup D J and Lakoba T I 1996 Squared eigenfunctions of the massive Thirring model in laboratory coordinates

J. Math. Phys. 37 308[25] Kaup D J 2009 Integrable systems and squared eigenfunctions Theor. Math. Phys. 159 806–18 (Proceedings of

the workshop ‘Nonlinear Physics: Theory and Experiment. V’)[26] Kaup D J and Yang J 2009 The inverse scattering transform and squared eigenfunctions for a degenerate 3 × 3

operator Inverse Problems 25 105010[27] Mak W C K, Malomed B A and Chu P L 1998 Three-wave gap solitons in waveguides with quadratic nonlinearity

Phys. Rev. E 58 6708–22[28] Newell A C 1985 Solitons in Mathematics and Physics (Philadelphia: Society for Industrial Mathematics)

(ISBN 0-89871-196-7)[29] Reiman A 1979 Space-time evolution of nonlinear three-wave interactions: II. Interaction in an inhomogeneous

medium Rev. Mod. Phys. 51 311–30[30] Robinson P A and Drysdale P M 1996 Phase transition between coherent and incoherent three-wave interactions

Phys. Rev. Lett. 77 2698–701[31] Sasa N and Satsuma J 1991 New type of soliton solutions for a higher-order nonlinear Schrodinger equation

J. Phys. Soc. Japan 60 409–17[32] Shabat A B 1975 The inverse scattering problem for a system of differential equations Funct. Ann. Appl. 9 75–8

(In Russian)Shabat A B 1979 The inverse scattering problem Diff. Equ. 15 1824–34 (In Russian)

[33] Shchesnovich V S and Yang J 2003 General soliton matrices in the Riemann–Hilbert problem for integrablenonlinear equations J. Math. Phys. 44 4604

33

Page 35: The inverse scattering transform and squared eigenfunctions › ... › 142 › 2016 › 11 › Pub249.pdf · Inverse Problems 26 (2010) 055005 D J Kaup and R A Van Gorder V(x)is

Inverse Problems 26 (2010) 055005 D J Kaup and R A Van Gorder

[34] Stenflo L 1994 Resonant three-wave interactions in plasmas Phys. Scr. T 50 15–9[35] Sun C, Xu Y, Cui W, Huang G, Szeftel J and Hu B 2005 Three-wave soliton excitations in a disk-shaped

Bose–Einstein condensate Int. J. Mod. Phys. B 19 3563–74[36] Yaakobia O and Friedland L 2009 Equal energy phase space trajectories in resonant wave interactions Phys.

Plasmas 16 052306[37] Yang J and Kaup D J 2009 Squared eigenfunctions for the Sasa–Satsuma equation J. Math. Phys. 50 023504[38] Zakharov V E and Shabat A B 1971 Exact theory of two-dimensional self-focusing and one dimensional

self-modulation of waves in nonlinear media Zh. Eksp. Teor. Fiz. 61 118Zakharov V E and Shabat A B 1972 Sov. Phys.—JETP 34 62

[39] Zakharov V E and Manakov S V 1973 Resonant interaction of wave packets in nonlinear media Pis’ma Zh.Eksp. Teor. Fiz. 18 413

Zakharov V E and Manakov S V 1973 Sov. Phys.—JETP Lett. 18 243[40] Zakharov V E and Manakov S V 1975 The theory of resonant interaction of wave packets in nonlinear media

Zh. Eksp. Teor. Fiz. 69 1654Zakharov V E and Manakov S V 1976 Sov. Phys.—JETP 42 842

[41] Zakharov V E 1980 Solitons, Topics in Current Physics vol 17 ed R Bullough and P Caudrey (Berlin: Springer)Kuznetsov E A, Spector M D and Fal’kovick G E 1984 Physica D 10 379

[42] Zakharov V E, Manakov S V, Novikov S P and Pitaevskii L I 1984 Theory of Solitons: The Inverse ScatteringMethod (New York: Plenum, Consultants Bureau)

34