The Hermitian two matrix model with an even quartic potential

We consider the two-matrix model with an even quartic potential W(y) = y^4/4 + α y^2/2 and an even polynomial potential V(x). The main result of the paper is the formulation of a vector equilibrium problem for the limiting mean density for the eigenvalues of one of the matrices M_1. The vector equilibrium problem is defined for three measures, with external fields on the first and third measures and an upper constraint on the second measure. The proof is based on a steepest descent analysis of a 4 x 4 matrix valued Riemann-Hilbert problem that characterizes the correlation kernel for the eigenvalues of M_1. Our results generalize earlier results for the case α = 0, where the external field on the third measure was not present.

defined on the space of pairs (M1, M2) of n × n Hermitian matrices. The constant Zn in (1.1) is a normalization constant, τ ∈ R \ {0} is the coupling constant, and dM1 dM2 is the flat Lebesgue measure on the space of pairs of Hermitian matrices. In (1.1), V and W are the potentials of the matrix model. In this paper we assume V to be a general even polynomial, and we take W to be the even quartic polynomial

W(y) = (1/4) y^4 + (α/2) y^2, α ∈ R. (1.2)

Without loss of generality we may (and do) assume that

τ > 0. (1.3)

We are interested in describing the eigenvalues of M1 in the large n limit.
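For orientation, the model under consideration is of the following form (a sketch; the precise normalization is the one fixed in (1.1)-(1.3)):

```latex
% Two-matrix model, cf. (1.1)-(1.3):
\frac{1}{Z_n}\exp\Bigl(-n\,\mathrm{Tr}\bigl(V(M_1)+W(M_2)-\tau M_1M_2\bigr)\Bigr)\,dM_1\,dM_2,
\qquad
W(y)=\tfrac{1}{4}y^{4}+\tfrac{\alpha}{2}y^{2},\quad \tau>0 .
```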
In [45] the case α = 0 was studied in detail. An important ingredient in the analysis of [45] was a vector equilibrium problem that describes the limiting mean eigenvalue distribution of M1. In this paper we extend the vector equilibrium problem to the case α ≠ 0.

Background
The two-matrix model (1.1) with polynomial potentials V and W was introduced in [59,70] as a model for quantum gravity and string theory. The interest is in the double scaling limit for critical potentials. It is generally believed that the two-matrix model is able to describe all (p, q) conformal minimal models, whereas the one-matrix model is limited to (p, 2) minimal models [30,41,48]. In [61] the two-matrix model was proposed for the study of the Ising model on a random surface, where the logarithm of the partition function (i.e., of the normalizing constant Zn in (1.1)) is expected to be the generating function in the enumeration of graphs on surfaces. For more information and background on the physical interest we refer to the surveys [39,40], and to the more recent physics papers [9,49,51,52]. The two-matrix model has a very rich integrable structure that is connected to biorthogonal polynomials, isomonodromy deformations, Riemann-Hilbert problems and integrable equations, see e.g. [2,10,12,13,14,46,50,60,66]. This is the basis of the mathematical treatment of the two-matrix model, see also the survey [11].
The eigenvalues of the matrices M1 and M2 in the two-matrix model are a determinantal point process with correlation kernels that are expressed in terms of biorthogonal polynomials. These are two families of monic polynomials {p_{k,n}(x)}, k = 0, 1, 2, . . ., and {q_{l,n}(y)}, l = 0, 1, 2, . . ., where p_{k,n} has degree k and q_{l,n} has degree l, satisfying the biorthogonality condition (1.4). The polynomials are well-defined by (1.4) and have simple and real zeros [46]. Moreover, the zeros of p_{k,n} and p_{k+1,n}, and those of q_{l,n} and q_{l+1,n}, are interlacing [43]. The kernels (1.5)-(1.8) are expressed in terms of these biorthogonal polynomials and their transformed functions; in particular,

K^(n)_22(y1, y2) = Σ_{k=0}^{n−1} (1/h²_{k,n}) P_{k,n}(y1) q_{k,n}(y2). (1.8)

Eynard and Mehta [50,72], see also [24,37,71], showed that the joint probability density function P(x1, . . . , xn, y1, . . . , yn) for the eigenvalues x1, . . . , xn of M1 and y1, . . . , yn of M2 has an explicit determinantal form, and that the marginal densities take the form

(1.9)   ∫ · · · ∫ P(x1, . . . , xn, y1, . . . , yn) dx_{k+1} · · · dx_n dy_{l+1} · · · dy_n = ((n − k)! (n − l)! / (n!)²) det [· · ·],

where the integral is over (n − k) + (n − l) variables. In particular, by taking l = 0, so that we average over the eigenvalues y1, . . . , yn of M2, we find that the eigenvalues of M1 are a determinantal point process with kernel K^(n)_11, see (1.5). This kernel is constructed out of the biorthogonal families {p_{k,n}} and {Q_{l,n}}, and the associated determinantal point process is an example of a biorthogonal ensemble [23]. It is also an example of a multiple orthogonal polynomial ensemble in the sense of [62].
In order to describe the behavior of the eigenvalues in the large n limit, one needs to control the kernels (1.5)-(1.8) as n → ∞. Due to special recurrence relations satisfied by the biorthogonal polynomials, there exist Christoffel-Darboux-type formulas that express the n-term sums (1.5) and (1.8) in terms of a finite number (independent of n) of biorthogonal polynomials and transformed functions, see [13]. That paper also gives differential equations and a remarkable duality between spectral curves, see also [12,14].
A Riemann-Hilbert problem for biorthogonal polynomials was first formulated in [46]. The Riemann-Hilbert problem in [46] is of size 2 × 2 but it is non-local and one has not been able to apply an asymptotic analysis to it. Local Riemann-Hilbert problems were formulated in [14,60,66], but these Riemann-Hilbert problems are of larger size, depending on the degrees of the potentials V and W . The formulation of a local Riemann-Hilbert problem for biorthogonal polynomials, however, opens up the way for the application of the Deift-Zhou [35] steepest descent method, which was applied very successfully to the Riemann-Hilbert problem for orthogonal polynomials, see [18,31,33,34] and many later papers.
In [45] the Deift-Zhou steepest descent method was indeed applied to the Riemann-Hilbert problem from [66] for the case where W is given by (1.2) with α = 0. It gave a precise asymptotic analysis of the kernel K^(n)_11 as n → ∞, leading in particular to the local universality results that are well-known in one-matrix models [33]. The analysis in [45] was restricted to the genus zero case. The extension to higher genus was done in [75].

Vector equilibrium problem
As already stated, it is the purpose of the present paper to extend the results of [45,75] to the case of general α.
An important role in the analysis in [45] is played by a vector equilibrium problem that characterizes the limiting mean density for the eigenvalues of M 1 (and also gives the limiting zero distribution of the biorthogonal polynomials p n,n ). One of the main contributions of the present paper is the formulation of the appropriate generalization to general α ∈ R. We refer to the standard reference [78] for notions of logarithmic potential theory and equilibrium problems with external fields.

Case α = 0
Let us first recall the vector equilibrium problem from [45], which involves the minimization of an energy functional over three measures. For a measure µ on C we define the logarithmic energy

I(µ) = ∫∫ log (1/|x − y|) dµ(x) dµ(y),

and for two measures µ and ν we define the mutual energy

I(µ, ν) = ∫∫ log (1/|x − y|) dµ(x) dν(y).
A main feature of the vector equilibrium problem is that it involves an external field acting on the first measure as well as an upper constraint (1.11) acting on the second measure. Note that an upper constraint arises typically in the asymptotic analysis of discrete orthogonal polynomials, see e.g. [7,22,42,68,77]. The interaction between the measures in (1.10) is of the Nikishin type where consecutive measures attract each other, but there is no direct interaction between measures ν i and ν j if |i − j| ≥ 2. The notion of a Nikishin system originated in works on Hermite-Padé rational approximation, see [5,56,62,76]. Vector equilibrium problems also played a role in the recent papers [8,15,17,65] that are related to random matrix theory and [6,43,44,69,84] that are related to recurrence relations.
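In this notation, the vector equilibrium problem of [45] for the case α = 0 can be summarized as follows (a sketch; the masses 1, 2/3, 1/3 and the upper constraint ν2 ≤ σ2 are those referred to in (1.10)-(1.11)):

```latex
% Energy functional of the vector equilibrium problem, cf. (1.10):
E(\nu_1,\nu_2,\nu_3)=\sum_{j=1}^{3}I(\nu_j)-I(\nu_1,\nu_2)-I(\nu_2,\nu_3)+\int V_1\,d\nu_1,
% minimized over \nu_1 on \mathbb{R}, \nu_2 on i\mathbb{R}, \nu_3 on \mathbb{R},
% with total masses 1, 2/3, 1/3, subject to the constraint \nu_2\le\sigma_2.
```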
Comparing with (1.10) we see that there is an external field V 3 acting on the third measure as well. The vector equilibrium problem depends on the input data V 1 , V 3 , and σ 2 that will be described next. Recall that V is an even polynomial and that W is the quartic polynomial given by (1.2).
The minimum is attained at a value s = s1(x) ∈ R for which W′(s) = τx, that is,

s³ + αs = τx. (1.14)
For α ≥ 0, this value of s is uniquely defined by (1.14). For α < 0 there can be more than one real solution of (1.14). The relevant value is the one that has the same sign as x (since τ > 0, see (1.3)). It is uniquely defined, except for x = 0.
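The selection rule for s1 can be illustrated numerically; the following is a small sketch (the function name s1 and the root-finding via numpy are ours, not the paper's):

```python
import numpy as np

def s1(x, alpha, tau):
    """Real saddle s_1(x): a real solution of s^3 + alpha*s = tau*x.

    For alpha >= 0 the real solution is unique; for alpha < 0 and
    |x| < x*(alpha) there are three real solutions, and we select the one
    with the same sign as x, as described in the text.
    """
    roots = np.roots([1.0, 0.0, alpha, -tau * x])
    real = [r.real for r in roots if abs(r.imag) < 1e-8]
    if len(real) == 1:
        return real[0]
    # several real roots: take the one with the sign of x
    return max(real, key=lambda s: np.sign(x) * s)

# inside (-x*, x*) for alpha = -1.5, tau = 1 there are three real saddles
s = s1(0.5, -1.5, 1.0)
assert abs(s**3 - 1.5 * s - 0.5) < 1e-8   # solves the saddle point equation
assert s > 0                              # same sign as x = 0.5
```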
External field V3: The external field V3 that acts on ν3 is not present if α ≥ 0. Thus V3 ≡ 0 for α ≥ 0. For α < 0, the external field V3(x) is non-zero only for x ∈ (−x*(α), x*(α)), where

x*(α) = (2/τ) (−α/3)^{3/2}. (1.16)

For those x, the equation (1.14) has three real solutions s1 = s1(x), s2 = s2(x), s3 = s3(x), which we take to be ordered such that

W(s1(x)) − τxs1(x) ≤ W(s2(x)) − τxs2(x) ≤ W(s3(x)) − τxs3(x).

Thus the global minimum of s ∈ R ↦ W(s) − τxs is attained at s1, and this global minimum played a role in the definition (1.13) of V1. The function s ↦ W(s) − τxs has another local minimum at s2 and a local maximum at s3, and these are used in the definition of V3. We define V3 : R → R by

V3(x) = (W(s3(x)) − τxs3(x)) − (W(s2(x)) − τxs2(x)), for x ∈ (−x*(α), x*(α)), (1.17)

and V3(x) = 0 elsewhere.
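A numerical sketch of V3 may be helpful. Here x*(α) = (2/τ)(−α/3)^{3/2} is the value of x at which the cubic (1.14) acquires a double root (obtained from its discriminant), and the helper v3 and the sorting of the saddle values are our own devices:

```python
import numpy as np

def v3(x, alpha, tau):
    """Numerically evaluate the external field V_3 of (1.17) for alpha < 0.

    The three real saddles are sorted by the value of W(s) - tau*x*s: the
    smallest value is the global minimum, the middle one the other local
    minimum, and the largest the local maximum.
    """
    xstar = (2.0 / tau) * (-alpha / 3.0) ** 1.5
    W = lambda s: s**4 / 4 + alpha * s**2 / 2
    roots = [r.real for r in np.roots([1.0, 0.0, alpha, -tau * x])
             if abs(r.imag) < 1e-8]
    if abs(x) >= xstar or len(roots) < 3:
        return 0.0
    vals = sorted(W(s) - tau * x * s for s in roots)
    return vals[2] - vals[1]   # (local max) - (other local min) >= 0

alpha, tau = -1.0, 1.0
ys = [v3(x, alpha, tau) for x in np.linspace(-0.5, 0.5, 101)]
assert all(y >= 0 for y in ys)                  # V_3 is non-negative
assert max(ys) <= v3(0.0, alpha, tau) + 1e-12   # maximum at x = 0
```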
The constraint σ2: To describe the measure σ2 that acts as a constraint on ν2, we consider the equation

s³ + αs = τz, z ∈ iR. (1.18)

There is always a solution s on the imaginary axis. The other two solutions are either on the imaginary axis as well, or they are off the imaginary axis and lie symmetrically with respect to the imaginary axis. We define σ2 by (1.19), where s(z) is the solution of (1.18) with largest real part. We then obtain an explicit description of the support S(σ2) of σ2. This completes the description of the vector equilibrium problem for general α. It is easy to check that for α = 0 it reduces to the vector equilibrium problem described before.
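The statements about the solution set of (1.18) for z on the imaginary axis are easy to verify numerically; this sketch (the helper check_symmetry is ours) confirms the root pattern, with one root on iR and any off-axis roots paired as mirror images across iR:

```python
import numpy as np

def check_symmetry(alpha, tau, t):
    """For z = i*t, verify the root pattern of s^3 + alpha*s = tau*z:
    one root on i*R, and any off-axis roots paired as s and -conj(s)."""
    roots = np.roots([1.0, 0.0, alpha, -tau * 1j * t])
    on_axis = [r for r in roots if abs(r.real) < 1e-8]
    off_axis = [r for r in roots if abs(r.real) >= 1e-8]
    assert len(on_axis) >= 1
    for r in off_axis:
        assert any(abs(r + np.conj(q)) < 1e-8 for q in off_axis)
    return True

assert check_symmetry(-1.0, 1.0, 2.0)   # two roots off the imaginary axis
assert check_symmetry(2.0, 1.0, 0.1)    # all three roots on the imaginary axis
```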

Solution of vector equilibrium problem
Our first main theorem deals with the solution of the vector equilibrium problem. We use S(µ) to denote the support of a measure µ. The logarithmic potential of µ is the function

U^µ(z) = ∫ log (1/|z − x|) dµ(x), z ∈ C,

which is harmonic on C \ S(µ) and superharmonic on C.
(a) There is a constant ℓ1 ∈ R such that the variational conditions (1.23) hold. Moreover,

S(µ1) = ⋃_{j=1}^{N} [aj, bj]

for some N ∈ N and a1 < b1 < a2 < · · · < aN < bN, and on each of the intervals [aj, bj] in S(µ1) the measure µ1 has a density given in terms of a function hj that is non-negative and real analytic on [aj, bj].
(c) We have the variational conditions (1.29), and there is a constant c3 ≥ 0 such that S(µ3) = R \ (−c3, c3). Moreover, µ3 has a density ρ3 that is positive and real analytic on the interior of S(µ3); if c3 > 0, then ρ3 vanishes as a square root at x = ±c3.
(d) All three measures are symmetric with respect to 0, so that for j = 1, 2, 3 we have µ j (A) = µ j (−A) for every Borel set A.
In part (a) of the theorem it is stated that S(µ 1 ) is a finite union of intervals under the condition that S(µ 1 ) and S(σ 2 − µ 2 ) are disjoint. If this condition is not satisfied then we are in one of the singular cases that will be discussed in Section 1.5 below. However, the condition is not necessary as will be explained in Remark 4.9 below. We chose to include the condition in Theorem 1.1 since the focus of the present paper is on the regular cases.
The conditions (1.23), (1.26), and (1.29) are the Euler-Lagrange variational conditions associated with the vector equilibrium problem. We note the strict inequalities in (1.26) and (1.29). These are consequences of special properties of the constraint σ 2 and the external field V 3 that are listed in parts (b) and (c) of the following lemma.
Lemma 1.2. The following hold.
(c) Let α < 0 and let ν2 be a measure on iR of finite logarithmic energy such that ν2 ≤ σ2. Then one obtains a function that is decreasing and convex on (0, (x*(α))²).

Lemma 1.2 is proved in Section 2, and the proof of Theorem 1.1 is given in Section 3.
A major role in what follows will be played by functions defined on a compact four-sheeted Riemann surface that we will introduce in Section 4. The sheets are connected along the supports S(µ1), S(σ2 − µ2) and S(µ3) of the minimizing measures for the vector equilibrium problem. The main result of Section 4 is Proposition 4.8, which says that a certain function defined on the first sheet has an extension to a globally meromorphic function on the full Riemann surface. This very special property is due to the special forms of the external fields V1 and V3 and the constraint σ2, which interact in a very precise way.

Classification into cases
According to Theorem 1.1 the structure of the supports is the same for α > 0 as it was for α = 0 in [45], that is, S(µ3) = R and S(σ2 − µ2) = iR \ (−ic2, ic2) for some c2 > 0. The supports determine the underlying Riemann surface, and so the case α > 0 is very similar to the case α = 0. There are no phase transitions in case α > 0, except for the possible closing or opening of gaps in the support of µ1. These types of transitions already occur in the one-matrix model. For α < 0, however, certain new phenomena occur, which come from the fact that the external field V3 on µ3 (defined in (1.17)) has its maximum at 0 and therefore tends to move µ3 away from 0. As a result there are cases where S(µ3) is no longer the full real axis, but a strict subset

S(µ3) = R \ (−c3, c3) (1.30)

with c3 > 0.
In addition, it is also possible that c2 = 0 in (1.27), so that S(σ2 − µ2) is the full imaginary axis and the constraint σ2 is not active. These new phenomena already occur in the simplest case V(x) = (1/2) x², for which explicit calculations were done in [43], based on the coefficients in the recurrence relations satisfied by the biorthogonal polynomials. These calculations lead to the phase diagram shown in Figure 1.1, which is taken from [43]. There are four phases, corresponding to four cases that are determined by whether or not 0 is in the supports of the measures µ1, σ2 − µ2, and µ3. Besides a singular cut structure for the Riemann surface, we can also have a singular behavior of the first measure µ1. These singular cases also appear in the usual equilibrium problem for the one-matrix model, see [33], and they are as follows.
Singular interior point for µ 1 : The density of µ 1 vanishes at an interior point of S(µ 1 ).
Singular endpoint for µ 1 : The density of µ 1 vanishes to higher order than square root at an endpoint of S(µ 1 ).
Singular exterior point for µ 1 : Equality holds in the variational inequality in (1.23) at a point x ∈ R \ S(µ 1 ).
The measures σ 2 − µ 2 and µ 3 cannot have singular endpoints, singular exterior points, or singular interior points, except at 0. Singular interior points of these measures at 0 are as follows.
While there is great interest in the singular cases, we restrict the analysis in this paper to the regular cases, for which we give the following precise definition.

Definition 1.3. The triplet (V, W, τ) is regular if the supports of the minimizers of the vector equilibrium problem satisfy (1.32), and if, in addition, the measure µ1 has no singular interior points, singular endpoints, or singular exterior points, and the measures σ2 − µ2 and µ3 do not have a singular interior point at 0.
The condition (1.32) may be reformulated as

Limiting mean eigenvalue distribution
The measure µ 1 is the limiting mean eigenvalue distribution of the matrix M 1 in the two-matrix model as n → ∞. In this paper we prove this only for regular cases. To prove it for singular cases, one would have to analyze the nature of the singular behavior which is beyond the scope of what we want to do in this paper.
Theorem 1.4. Suppose (V, W, τ) is regular. Let µ1 be the first component of the minimizer (µ1, µ2, µ3) of the vector equilibrium problem. Then µ1 is the limiting mean distribution of the eigenvalues of M1 as n → ∞ with n ≡ 0 (mod 3).
We recall that the eigenvalues of M1, after averaging over M2, are a determinantal point process on R with the kernel K^(n)_11 given in (1.5). The statement of Theorem 1.4 comes down to the statement that

lim_{n→∞} (1/n) K^(n)_11(x, x) = ρ1(x), x ∈ R, (1.33)

where ρ1 is the density of the measure µ1. The restriction to n ≡ 0 (mod 3) is for convenience only, since it simplifies the expressions in the steepest descent analysis of the Riemann-Hilbert problem that we are going to do.
The existence of the limiting mean eigenvalue distribution was proved by Guionnet [57] in a much more general context. She characterized the minimizer by a completely different variational problem and also connected it with a large deviation principle. It would be very interesting to see the connection with our vector equilibrium problem. A related question is whether it is possible to establish a large deviation principle with the energy functional (1.12) as a good rate function.
We are going to prove (1.33) by applying the Deift-Zhou steepest descent analysis to the Riemann-Hilbert problem (1.36) below. Without too much extra effort we can also obtain the usual universal local scaling limits that are typical for unitary random matrix ensembles. Namely, if x* is in the bulk, where ρ1(x*) > 0, then the local scaling limit of the kernel at x* is the sine kernel, while at an endpoint x* of S(µ1) the scaling limit is the Airy kernel, i.e., for some c > 0, the rescaled kernel converges to the Airy kernel. Recall that we are in the regular case, so that the density ρ1 vanishes as a square root at x*. The proofs of these local scaling limits are omitted here, as they are very similar to the proofs in [45].
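The two universal kernels referred to here are the standard sine and Airy kernels:

```latex
K_{\sin}(x,y)=\frac{\sin\pi(x-y)}{\pi(x-y)},
\qquad
K_{\mathrm{Ai}}(x,y)=\frac{\mathrm{Ai}(x)\,\mathrm{Ai}'(y)-\mathrm{Ai}'(x)\,\mathrm{Ai}(y)}{x-y}.
```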
About the proof of Theorem 1.4

The first step in the proof of Theorem 1.4 is the setup of the Riemann-Hilbert (RH) problem for the biorthogonal polynomial p_{n,n} and its connection with the correlation kernel K^(n)_11. We use the RH problem of [66], which we briefly recall.
The RH problem of [66] is based on the observation that the polynomial p_{n,n} that is characterized by the biorthogonality conditions (1.4) can alternatively be characterized by the conditions (1.34) (we assume W is quartic and n is a multiple of three), which involve three varying (i.e., n-dependent) weight functions. The conditions (1.34) are known as multiple orthogonality conditions of type II, see e.g. [4,63,76,82]. A RH problem for multiple orthogonal polynomials was given by Van Assche, Geronimo and Kuijlaars in [83] as an extension of the well-known RH problem for orthogonal polynomials of Fokas, Its, and Kitaev [55]. For the multiple orthogonality (1.34) the RH problem is of size 4 × 4 and it asks for a matrix valued function Y satisfying (1.36). The RH problem has a unique solution. The first row of Y is given in terms of the biorthogonal polynomial p_{n,n}, and the other rows are built out of certain polynomials of degree n − 1 in a similar way, see [66,83] for details. Multiple orthogonal polynomials have a Christoffel-Darboux formula [29], which implies that the correlation kernel (1.5) can be rewritten in the integrable form

K^(n)_11(x, y) = (Σ_{j=1}^{4} f_j(x) g_j(y)) / (x − y)

for certain functions f_j, g_j, j = 1, . . . , 4, and in fact it has a representation for x, y ∈ R in terms of the solution Y of the RH problem (1.36), see [29].

The proof of Theorem 1.4 is an involved and lengthy steepest descent analysis of the RH problem (1.36), in which the vector equilibrium problem is used in an essential way. This is similar to [45], which deals with the case α = 0. Certain complications arise because the formulas for the external field and the constraint in the vector equilibrium problem are less explicit than in the case α = 0. This not only complicates the analysis of the vector equilibrium problem in Sections 2 and 3, but continues to play a role throughout the paper via the functions θj defined in Section 2.2 and λj defined in Section 4.4.
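For orientation, we sketch the shape of the RH problem (1.36) as it appears in [66,83]; the normalization and the precise form of the weight functions (denoted w_{j,n} here and written in an assumed form) should be checked against those references.

```latex
% Sketch of the 4x4 RH problem (1.36); normalization as in [66,45] (assumed):
\begin{cases}
Y \text{ is analytic in } \mathbb{C}\setminus\mathbb{R},\\[4pt]
Y_{+}(x)=Y_{-}(x)\begin{pmatrix}
1 & w_{0,n}(x) & w_{1,n}(x) & w_{2,n}(x)\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\end{pmatrix}, & x\in\mathbb{R},\\[4pt]
Y(z)=\bigl(I+O(1/z)\bigr)\operatorname{diag}\bigl(z^{n},z^{-n/3},z^{-n/3},z^{-n/3}\bigr), & z\to\infty,
\end{cases}
\qquad
w_{j,n}(x)=\int_{\mathbb{R}} y^{j}e^{-n\left(V(x)+W(y)-\tau xy\right)}\,dy .
```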
We also note that the analysis in [45] was restricted to the one-cut case, which leads to an underlying Riemann surface of genus 0. This restriction was removed in [75]. The problem in the higher genus case is in the construction of the global parametrix. In Section 8 we give a self-contained account that is based on the ideas developed in [75] and [67], which we think is of independent interest.
We also wish to stress that in Case IV the Riemann surface always has genus ≥ 1, even if S(µ 1 ) consists of one interval, see (4.16) below. This phenomenon did not appear for α = 0.

Singular cases
Although we do not treat the singular cases in this paper, we wish to make some comments about the possible critical behaviors that we see in the two-matrix model with the quartic potential W(y) = (1/4) y^4 + (α/2) y^2. As already discussed in Section 1.5, the singular behavior is associated either with a singular behavior in one of the measures µ1, σ2 − µ2, µ3, or with a singular behavior in the supports. The singular behavior in the measure µ1 also appears in the one-matrix model that is described by orthogonal polynomials. It is known that the critical behavior at a singular interior point where the density vanishes quadratically is described by the Hastings-McLeod solution of the Painlevé II equation, see [19,27,79]. This Painlevé II transition is the canonical mechanism by which a gap opens up in the support in the one-matrix model.
The critical behavior at a singular endpoint where the density vanishes with exponent 5/2 is described by a special solution of the Painlevé I_2 equation (the second member of the Painlevé I hierarchy), see [28]. The critical behavior at a singular exterior point is described by Hermite functions [16,26,74], and corresponds to the opening of a new band of eigenvalues (birth of a cut).
We see these critical behaviors also in the two-matrix model with an even quartic W. In particular, the opening of a gap at 0 in the support of µ1 is a Painlevé II transition. In our classification of regular cases, this is a transition from Case I to Case II, or a transition from Case IV to Case V. In the phase diagram of Figure 1.1 for V(x) = (1/2) x², this transition is on the part of the parabola τ = √(α + 2) with α > −1. A Painlevé II transition also appears when either σ2 − µ2 or µ3 has a density that vanishes quadratically at 0. Then 0 is a singular interior point and again a gap can open, but now in the supports of the measures "that are on the other sheets", which have no direct probabilistic meaning. If the density of σ2 − µ2 vanishes at 0, then the transition is from Case III to Case V. If the density of µ3 vanishes at 0, then the transition is from Case I to Case IV or from Case II to Case V. In the phase diagram of Figure 1.1 the transition from Case I to Case IV takes place on the part of the parabola τ = √(α + 2) with −2 < α < −1.
The cases of singular supports represent critical phenomena that do not appear in the one-matrix model. What we called Singular Supports I in Section 1.5 corresponds to a transition from Case III to Case IV. This is a transition when the gap around 0 in the support of S(µ 1 ) closes and simultaneously the gap in the support of S(σ 2 −µ 2 ) opens up (or vice versa). On the level of the Riemann surface it means that the two branch points on the real line that are the endpoints of the gap in S(µ 1 ) come together at 0, and then split again to become a pair of complex conjugate branch points. These branch points are then on the imaginary axis and are the endpoints ±ic 2 of S(σ 2 − µ 2 ). A transition of this type does not change the genus of the Riemann surface.
This type of transition was observed first in the context of random matrices with external source and non-intersecting Brownian motions, see [3,21,25,81], where it was described in terms of Pearcey integrals. The Pearcey transition is a second mechanism by which a gap in the support may open up (or close). As it involves three sheets of the Riemann surface, it cannot take place in the one-matrix model, which is connected to a two-sheeted Riemann surface.
The case of Singular Supports II gives a transition from Case II to Case III. This is a situation where the gap in the support of σ2 − µ2 closes and simultaneously the gap in S(µ3) opens. This also typically corresponds to a Pearcey transition, but it does not involve the first sheet of the Riemann surface, which means that this transition is not visible in the eigenvalue distribution of the random matrix. In the phase diagram of Figure 1.1 this transition occurs on the boundary between the Case II and Case III regions.

The case of Singular Supports III represents a new critical phenomenon.
Here the supports of all three measures µ1, σ2 − µ2 and µ3 close at 0. In Figure 1.1 this is the case at the multi-critical point α = −1, τ = 1, where the Painlevé transitions and Pearcey transitions come together. One may approach the multi-critical point from the Case III region, where there is a gap around 0 in the supports of both µ1 and µ3, while the support of σ2 − µ2 is the full imaginary axis. At the multi-critical point the supports of µ1 and µ3 close simultaneously, while also a gap opens up in the support of σ2 − µ2, which results in a transition from Case III to Case I.
We conjecture that the case of Singular Supports III is of a similar nature to the critical case of non-intersecting Brownian motions (or random walks) with two starting and two ending points that was studied very recently in [1,38]. By fine-tuning the starting and ending points one may create a situation where two groups of non-intersecting Brownian motions fill out two ellipses that are tangent to each other at one point. Our conjecture is that the local eigenvalue behavior around 0 in the multi-critical case is the same as that for the non-intersecting Brownian motions at the point of tangency. The conjecture is supported by preliminary calculations which suggest that the local parametrix of [38] can also be used if one tries to extend the RH analysis of the present paper to the multi-critical situation.
Preliminaries and the proof of Lemma 1.2

Before coming to the proof of Theorem 1.1 we study the equation (1.14) in more detail. This equation will also play a role in the proof of Theorem 1.4, where, in the first step of the steepest descent analysis, we will use functions defined by contour integrals of the form (2.1), where Γ is an unbounded contour in the complex z-plane.

Saddle point equation and functions s j
The large n asymptotics of the integrals (2.1) is determined by the solutions of the saddle point equation W′(s) − τz = 0, that is,

s³ + αs = τz. (2.2)

In (1.14) we considered this equation for z = x ∈ R. We defined a solution s1(x) for every x ∈ R, and for α < 0 and |x| < x*(α) we also defined s2(x) and s3(x). We now define solutions s1(z), s2(z) and s3(z) of (2.2) for complex z as follows, distinguishing between the two cases α > 0 and α < 0.
Case α > 0. In case α > 0 the saddle point equation (2.2) has branch points ±i y*(α) ∈ iR, where y*(α) is given by (1.21). The Riemann surface S for the equation (2.2) then has three sheets, chosen as follows. We already defined s1(x) for x ∈ R as the unique real saddle point. This function has an analytic continuation to S1 that we also denote by s1. Then s2 and s3 are defined by analytic continuation onto S2 and S3, respectively.
Case α < 0. In case α < 0 the saddle point equation (2.2) has two real branch points ±x*(α), with x*(α) given by (1.16). The three sheets of the Riemann surface S for the equation (2.2) are now chosen as follows. In case α < 0, s1(x) is defined for x ∈ R \ {0}: it is the real saddle point for which W(s) − τxs is minimal. The function s1 has an analytic continuation to S1 that we also denote by s1. Then s2 and s3 are defined by analytic continuation onto S2 and S3, respectively. It is a straightforward check that for x ∈ (−x*(α), x*(α)) this definition of s2(x) and s3(x) coincides with the one given earlier.
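Whatever the labelling of the sheets, the three solutions of (2.2) satisfy the Vieta relations of the cubic, which give quick numerical sanity checks:

```python
import numpy as np

# Vieta relations for s^3 + alpha*s - tau*z = 0:
#   s1 + s2 + s3 = 0,  s1 s2 + s1 s3 + s2 s3 = alpha,  s1 s2 s3 = tau*z.
alpha, tau, z = -1.0, 1.0, 0.3 + 0.7j
r = np.roots([1.0, 0.0, alpha, -tau * z])
assert abs(r.sum()) < 1e-10
assert abs(r[0] * r[1] + r[0] * r[2] + r[1] * r[2] - alpha) < 1e-10
assert abs(r.prod() - tau * z) < 1e-10
```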
In addition we have the symmetries (2.5) and the inequality (2.6).

Proof. The symmetries (2.5) are clear. For z = x ∈ R with x > 0 we have s1(x) > 0. Therefore, by continuity, Re s1(z) > 0 for z in a neighborhood of the positive real axis. If Re s1(z) = 0 for some z, so that s1(z) is purely imaginary, then z = (s1(z)³ + α s1(z))/τ is purely imaginary as well. The inequality Re s1(z) > 0 therefore extends into the full right half-plane, as claimed in (2.6).
From the lemma it follows that, in both cases, the constraint σ2, see (1.19), is given by (2.7). The imaginary axis is oriented upwards, so that s1,−(z) (respectively s1,+(z)) for z ∈ iR denotes the limiting value of s1 as we approach z ∈ iR from the right (respectively left) half-plane.
The jumps of the θj functions from (2.10) are collected in terms of the jumps of the diagonal matrix as follows.

Large z asymptotics
In what follows we will need the behavior of s j (z) and θ j (z) as z → ∞. Throughout the paper we define fractional exponents with a branch cut along the negative real axis. We use I, II, III and IV to denote the four quadrants of the complex z-plane. We also put ω = e 2πi/3 .
We state the following lemma without proof. It follows easily from the saddle point equation (2.2).
We have a similar result for the asymptotics of θj. Note that the following asymptotic behaviors are consistent with the property that θ′j = τ sj, see (2.9).

Two special integrals
As a final preparation for the proof of Lemma 1.2 we need the evaluation of the following two definite integrals.
Proof. Because of the formula (2.7) for σ2, the integral on the left-hand side of (2.16) can be expressed in terms of s1. Since s1 is analytic in C \ iR and s1(z) = O(z^{1/3}) as z → ∞, see Lemma 2.3, we can evaluate the integral using contour integration and residue calculus. It follows that the first integral in (2.16) holds. The second integral follows by a similar calculation, where we also use the fact that s1 is an odd function.

Proof of Lemma 1.2
Now we come to the proof of Lemma 1.2.

Proof of part (a)
Proof. Integrating the first formula in (2.16) twice with respect to x, and using the fact that θ′1 = τs1, we find that, for some constants A and B,

∫_{iR} (log |z − x| − log |z|) dσ2(z) = θ1(x) + Ax + B, x > 0.

Thus (2.17) follows. Since 0 ∉ S(σ2 − ν2), there exists a c > 0 such that S(σ2 − ν2) ⊂ iR \ (−ic, ic), and so the integral on the right-hand side of (2.17) defines a real analytic function of x. Then (2.17) proves that V1 − U^ν2 is real analytic on R, since V is a polynomial.

Proof of part (c)
Proof. Let α < 0 and let ν2 be as in Lemma 1.2 (c). Since ν2 is a measure on iR, we have a representation of U^ν2(x) for x ∈ R, x > 0, and hence a corresponding identity for x > 0.

Since ν2 ≤ σ2 and the integrand is positive for every z ∈ iR, we obtain an estimate in which we use the second integral in (2.16).

Since θ′1 = τs1, we see that (2.19) holds. Since θ1 + θ2 + θ3 = (1/2) α², it now also follows that (2.20) holds. Recall that s3(z) is the solution of s³ + αs = τz with s3(0) = 0. Since α < 0, we have that s3 is an odd function which is analytic in a neighborhood of 0. Inserting the Taylor series of s3, we obtain the required sign, since all ck > 0. The two inequalities (2.19) and (2.20) give the convexity, which completes the proof of part (c) of Lemma 1.2.
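The sign of the Taylor coefficients of s3 can be checked numerically by fixed-point iteration on truncated power series (a sketch; the helper s3_series is ours, and with τ > 0 and α < 0 the coefficients of s3 itself come out negative, so that the coefficients ck of −s3 are positive):

```python
import numpy as np

def s3_series(alpha, tau, N=9):
    """Taylor coefficients c[j] of z^j for the solution s3(z) of
    s^3 + alpha*s = tau*z with s3(0) = 0, alpha < 0, computed by iterating
    the fixed-point relation s = (s^3 - tau*z)/(-alpha) on truncated series."""
    c = np.zeros(N + 1)
    for _ in range(N + 2):
        cube = np.convolve(np.convolve(c, c), c)[: N + 1]
        cube[1] -= tau
        c = cube / (-alpha)
    return c

c = s3_series(-2.0, 1.0)
assert np.allclose(c[::2], 0.0)   # s3 is an odd function
assert np.all(c[1::2] < 0.0)      # so -s3 has positive odd coefficients
```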

Proof of Theorem 1.1
We basically follow Section 4 of [45], where Theorem 1.1 was proved for the case α = 0. However, we need some additional results from potential theory.

Results from potential theory
We use a number of results and notions from logarithmic potential theory.
The main reference is [78]. Most results in [78] are stated for measures with compact support, while we are also dealing with measures with unbounded support, namely the real line or the imaginary axis. Therefore we need a number of results from [78] in a slightly stronger form that allows for measures with unbounded supports.
The following theorem is known as the principle of domination; it is stated in [78, Theorem II.3.2] for the case where µ and ν have compact supports.
Theorem 3.1. Suppose µ and ν are finite Borel measures with ∫ dν ≤ ∫ dµ. Suppose that µ has finite logarithmic energy and that S(µ) ≠ C. If for some constant c the inequality

U^µ(z) ≤ U^ν(z) + c (3.1)

holds µ-almost everywhere, then it holds for all z ∈ C.
Proof. Let us first establish Theorem 3.1 under the assumption that µ has compact support, say S(µ) ⊂ D_R = {z | |z| ≤ R}. Let ν̂ be the balayage of ν onto D_R. By the properties of balayage we then have ∫ dν̂ = ∫ dν and, for a certain constant ℓ ≥ 0,

U^ν̂ = U^ν + ℓ on D_R, U^ν̂ ≤ U^ν + ℓ on C. (3.2)

If (3.1) holds µ-a.e., we find from the equality in (3.2) that U^µ ≤ U^ν̂ + c − ℓ holds µ-a.e. Thus, by the principle of domination for measures with compact supports, see [78, Theorem II.3.2], we find U^µ ≤ U^ν̂ + c − ℓ on C, which in view of the inequality in (3.2) leads to U^µ ≤ U^ν + c on C, as required.

We next assume that µ is as in the statement of the theorem. Since S(µ) ≠ C, there is an open disk disjoint from S(µ). By translation and dilation invariance of the statement in Theorem 3.1 we may then assume that D(0, 1) is disjoint from S(µ). We may also assume that U^ν(0) < ∞.
The following result is stated for compactly supported measures in [78,Theorem IV.4.5], see also [80] where the result is attributed to de la Vallée Poussin [36].
Theorem 3.2. Let µ and ν be measures on C with ∫ dν ≤ ∫ dµ, S(µ) ≠ C, and finite logarithmic potentials U µ and U ν . Suppose that for some c ∈ R we have U µ (z) ≤ U ν (z) + c for all z ∈ C, with equality on a Borel set A. Then ν(B) ≤ µ(B) for every Borel set B ⊂ A.
Proof. By Theorem 3.1 we obtain from (3.5) that the inequality holds on all of C. It is enough to consider bounded Borel sets B ⊂ A. Given such a B we choose R > 0 such that |z| < R/2 for every z ∈ B. Let µ̂ and ν̂ be the balayages of µ and ν onto the closed disk D R := {z ∈ C | |z| ≤ R}. By the properties of balayage we have, for certain constants ℓ 1 and ℓ 2 , the relations (3.6). It then follows from (3.6) that (3.7) holds µ̂-a.e., and again by Theorem 3.1 the inequality extends to all of C, since S(µ̂) ⊂ D R . Equality holds in (3.7) for z ∈ D R ∩ A, and so in particular for z ∈ B.
Then by [78, Theorem IV.4.5], we have that ν̂(B) ≤ µ̂(B). Then also ν(B) ≤ µ(B), since B is contained in the interior of D R and on D R the balayage measures µ̂ and ν̂ differ from µ and ν only on the boundary ∂D R = {z ∈ C | |z| = R}.
We do not know if the condition S(µ) ≠ C is necessary in Theorems 3.1 and 3.2. The condition is more than sufficient for the purposes of this paper, since we will only be dealing with measures that are supported on either the real line or the imaginary axis.

Equilibrium problem for ν 3
Given a measure ν 2 ≤ σ 2 on iR with finite logarithmic energy and ∫ dν 2 = 2/3, the equilibrium problem for ν 3 is to minimize the functional (3.8) among all measures ν on R with ∫ dν = 1/3. In case α > 0 we have V 3 ≡ 0, and then the minimizer ν 3 of (3.8) is equal to ν 3 = (1/2) ν̂ 2 , where ν̂ 2 denotes the balayage of ν 2 onto R. Then ν 3 has the density dν 3 /dx = (1/(2π)) ∫ |t|/(t^2 + x^2) dν 2 (it), and the support of ν 3 is the full real line. This is similar to what happens in the case α = 0 in [45, Section 4.2].
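The balayage onto R used here can be made concrete: the balayage of a point mass at it ∈ iR onto the real line has the Poisson-kernel density |t|/(π(t^2 + x^2)), which preserves total mass and reproduces the logarithmic potential on R. A numerical sanity check of both properties (with illustrative values, not tied to ν 2 ):

```python
import numpy as np

# Balayage of the point mass at i*y (y > 0) onto R: Poisson kernel density.
y, x = 1.0, 2.0
h = 4e-4
t = np.arange(-1000.0, 1000.0, h) + h / 2  # midpoint grid, avoids t == x

density = y / (np.pi * (t**2 + y**2))

mass = np.sum(density) * h                       # total mass is preserved (~1)
pot_balayage = np.sum(-np.log(np.abs(x - t)) * density) * h
pot_point = -0.5 * np.log(x**2 + y**2)           # potential of the point mass at x

print(mass, pot_balayage, pot_point)
```

The two potentials agree on R up to truncation error, which is the defining property of balayage used for the formula for ν 3 above.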
In case α < 0, the external field V 3 is positive on the interval (−x * (α), x * (α)) and zero outside. We use this fact to prove the following inequalities for ν 3 .
Lemma 3.3. Under the above assumptions we have the inequalities (3.10) and (3.11).
Proof. The variational conditions associated with the minimization problem for (3.8) are (3.12) and (3.13), for a certain constant. Let ν̂ 2 = Bal(ν 2 , R) be the balayage of ν 2 onto R.
By the principle of domination, see Theorem 3.1 (note that the total masses of 2ν 3 and ν̂ 2 are equal), we have that the inequality holds for every x ∈ C, because of the inequality in (3.12). Then by inequality (3.13) we find that equality holds. Thus the inequality (3.10) follows because of (3.13) and Theorem 3.2.
The second inequality (3.11) follows in a similar (even simpler) way. Now we redefine ν̂ 2 as the balayage of ν 2 onto A c . Let x ∈ A c . Then x ∈ S(ν 3 ) because of (3.10), which has already been proved. Then we have, by the properties of balayage and (3.12), the required equality, and by another application of Theorem 3.2 we find (3.11).
We note that it follows from (3.10) with c = x * (α) that the corresponding inclusion holds. For every c ≥ x * (α), we find from (3.11) and the explicit expression for the balayage onto A c that (3.15) holds. In the case α < 0 we make use of Lemma 1.2 (c) and Lemma 3.3 to conclude that the support of the minimizer ν 3 is of the desired form. This is done in the next lemma.
Proposition 3.4. Let ν 2 be a measure on iR with ∫ dν 2 = 2/3 and ν 2 ≤ σ 2 . Assume that ν 2 has finite logarithmic energy. Let ν 3 be the minimizer of (3.8) among all measures on R with total mass 1/3. Then the support of ν 3 is of the form S(ν 3 ) = R \ (−c 3 , c 3 ) for some c 3 ≥ 0.
If α < 0 then c 3 < x * (α), and if c 3 > 0 then the density of ν 3 vanishes as a square root at ±c 3 .
Then the density of ν 3 has a square root singularity at ±c 3 , from which the claim easily follows. Finally, if c 3 > 0 then the density of ν 3 vanishes as a square root at ±c 3 , as a result of the convexity established in Lemma 1.2 (c). The proposition is proved.

Equilibrium problem for ν 1
Given ν 2 and ν 3 , the equilibrium problem for ν 1 is to minimize the functional (3.19) among all probability measures ν on R. The minimizer exists and its support is contained in an interval [−X, X] that is independent of ν 2 . This can be proved as in [45], where weighted polynomials were used. Here we give a proof using potential theory.
Lemma 3.5. Let ν̃ 1 be the minimizer of (3.20) among probability measures on R, and suppose that S(ν̃ 1 ) ⊂ [−X 1 , X 1 ] for some X 1 > 0. Let ν 2 be a measure on iR with finite potential U ν 2 , and suppose that ν 1 is the minimizer for (3.19). Then also S(ν 1 ) ⊂ [−X 1 , X 1 ].
Proof. The variational conditions associated with the equilibrium problems for (3.20) and (3.19) involve certain constants ℓ̃ 1 and ℓ 1 . Combining the relations (3.23) and (3.24), we find (3.25). The function x → −U ν 2 (x) is strictly increasing as |x| increases, and since S(ν̃ 1 ) ⊂ [−X 1 , X 1 ], it follows from the first inequality in (3.25) that the corresponding bound holds for x ∈ S(ν 1 ).
By the principle of domination, Theorem 3.1, the inequality holds everywhere. Then for x ∈ S(ν 1 ) we find, by combining this with the second inequality in (3.25), that |x| ≤ X 1 , since x → −U ν 2 (x) is even and strictly increasing as |x| increases.

Equilibrium problem for ν 2
Given ν 1 and ν 3 on R with total masses ∫ dν 1 = 1, ∫ dν 3 = 1/3, the equilibrium problem for ν 2 is to minimize the functional (3.26) among all measures ν on iR with ν ≤ σ 2 and ∫ dν = 2/3. Recall that by Lemma 1.2 (b) the density dσ 2 (iy)/|dz| increases as y > 0 increases. Then we can use exactly the same arguments as in Lemma 4.5 of [45] to conclude that (3.27) and (3.28) hold for some c 2 ≥ 0. The proof is based on the iterated balayage introduced in [64]. This proof also shows the following analogue of Lemma 3.3.
Lemma 3.6. Let ν 1 and ν 3 be measures on R with ∫ dν 1 = 1 and ∫ dν 3 = 1/3, having finite logarithmic energy. Let ν 2 be the minimizer of (3.26) among all measures ν on iR with total mass 2/3 and ν ≤ σ 2 . Let c ≥ c 2 and let ν̂ 2 be defined accordingly. Then the analogues of the inequalities of Lemma 3.3 hold.
Proof. This follows from the iterated balayage. Alternatively, it could be proved from the variational conditions associated with the minimization problems, as we did for Lemma 3.3. We omit the details.

Uniqueness of the minimizer
We write the energy functional (1.12) in the form (3.32), where 2ν 1 − 3ν 2 and ν 2 − 2ν 3 are signed measures with vanishing total integral. From this the uniqueness follows as in Section 4.5 of [45]. Note that there is a mistake in formula (4.21) of [45], since the coefficients of I(ν 1 ) and I(2ν 1 − 3ν 2 ) are incorrect. However, the only issue important for establishing uniqueness is that the coefficients are positive.

Existence of the minimizer
After these preparations we are able to show that the minimizer exists. The proof follows along the lines of Section 4.6 of [45]. We fix p ∈ (1, 5/3). We are going to minimize the energy functional (3.32) among all measures ν 1 , ν 2 , ν 3 as before, but with the additional restrictions (3.33), (3.34), (3.35), for certain given X > 0 and K > 0. The constants X and K are at our disposal, and later we will choose them large enough. For any choice of X and K there is a unique vector of measures that minimizes the energy functional subject to the usual constraints as well as the additional restrictions (3.33), (3.34), (3.35). Indeed, the additional restrictions yield that the measures are restricted to a tight set of measures. The energy functional (3.32) is strictly convex, and so there is indeed a unique minimizer.
Our strategy of proof is now to show that for large enough X and K the additional restrictions are not effective.
Restriction (3.33). This restriction is easy to handle. Indeed, because of Lemma 3.5 it suffices to choose X > X 1 , where X 1 is as in that lemma. Then (3.33) provides no extra restriction.
Restriction (3.35). Here we use the assumption p ∈ (1, 5/3). We take X > X 1 such that (3.38) holds. Then X > c and also (3.39) holds. Assume that ν 2 is a measure on iR, symmetric around the origin, with ∫ dν 2 = 2/3, satisfying the restriction (3.34) as well as ν 2 ≤ σ 2 . Let ν 3 be the minimizer of (3.8) among measures ν on R with ∫ dν = 1/3. Note that we do not impose the restriction (3.35).
Since c ≥ x * (α) we have the inequality (3.15), which due to the symmetry of ν 2 may be written as (3.40). Let x ≥ X > c. We split the integral in (3.40) into an integral from 0 to iX and an integral from iX to i∞, and we estimate the first using (3.38), and the second using (3.38) and (3.39), where the last inequality holds since ν 2 satisfies (3.34). Since (1 + ε p ) 2 C p < 1 and p < 5/3 < 2, it is now possible to take K sufficiently large, say K ≥ K 3 , such that the additional restriction (3.35) is satisfied.
Restriction (3.34). A similar argument shows the following. Choose ν 1 and ν 3 on R with ∫ dν 1 = 1, ∫ dν 3 = 1/3, satisfying the additional restrictions (3.33) and (3.35). Let ν 2 be the minimizer of (3.26) for measures on iR with total mass 2/3 and ν 2 ≤ σ 2 ; we do not impose (3.34). Then with the same choice of X > X 1 as above (so that (3.38) holds), we find that for K large enough, say K ≥ K 2 , the restriction (3.34) is satisfied. This completes the proof of existence of the minimizer for the vector equilibrium problem.

Proof of Theorem 1.1
After all this work the proof of Theorem 1.1 is short.
The measure µ 1 is the equilibrium measure in external field V 1 − U µ 2 , which is real analytic on R \ {0}. If 0 ∉ S(µ 1 ), then this implies by a result of Deift, Kriecherbauer and McLaughlin [32] that S(µ 1 ) is a finite union of intervals with a density that has the form (1.25). If 0 ∉ S(σ 2 − µ 2 ) then V 1 − U µ 2 is real analytic on R (also at 0) by Lemma 1.2 (b), and again by [32] we find that S(µ 1 ) is a finite union of intervals with a density (1.25). The conditions (1.23) are the Euler–Lagrange conditions associated with the minimization in external field, and they are valid in all cases. This proves part (a).
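As a one-measure illustration of Euler–Lagrange conditions of the type invoked here (not the actual field V 1 − U µ 2 of the paper): for the external field V (x) = x^2/2 the equilibrium measure is the semicircle law on [−2, 2], and 2U µ (x) + V (x) is constant on the support and larger outside it. A numerical check:

```python
import numpy as np

# Semicircle density: equilibrium measure for the external field V(x) = x^2/2.
def U(x, m=2_000_000):
    """Logarithmic potential of the semicircle law at x (midpoint rule)."""
    h = 4.0 / m
    s = -2.0 + (np.arange(m) + 0.5) * h
    rho = np.sqrt(4.0 - s**2) / (2.0 * np.pi)
    return np.sum(-np.log(np.abs(x - s)) * rho) * h

V = lambda x: x**2 / 2.0

# Euler-Lagrange conditions: 2U + V equals a constant on the support ...
inside = [2.0 * U(x) + V(x) for x in (0.0, 0.5, 1.5)]
# ... and exceeds that constant outside the support.
outside = 2.0 * U(3.0) + V(3.0)
print(inside, outside)  # inside values all close to 1, outside larger
```

The same structure, with the field V 1 − U µ 2 in place of V , is what conditions (1.23) express for µ 1 .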
The statements (1.27) about the supports of µ 2 and σ 2 − µ 2 were already proved in Section 3.4, see (3.27) and (3.28). The Euler-Lagrange conditions (1.26) also follow from this. The fact that ρ 2 vanishes as a square root at ±ic 2 in case c 2 > 0 follows as in the proof of Lemma 4.5 of [45]. The other statements in part (b) are obvious.
Part (c) follows from Proposition 3.4, see also (3.18). This completes the proof of Theorem 1.1.

A Riemann surface
The rest of the paper is aimed at the proof of Theorem 1.4. We assume from now on that (V, W, τ ) is regular, which means in particular that S(µ 1 ) and S(σ 2 − µ 2 ) are disjoint. Then by part (a) of Theorem 1.1 we have that S(µ 1 ) consists of a finite union of disjoint intervals. We use this structure as well as that of the supports of the other measures to build a Riemann surface in this section.
We start by collecting consequences of the vector equilibrium problem in the form of properties of the g-functions. These will be used in the construction of a meromorphic function ξ on the Riemann surface.

The g-functions
The Euler-Lagrange variational conditions (1.23), (1.26), (1.29), can be rewritten in terms of the g-functions that are defined as follows.
Definition 4.1. For j = 1, 2, 3 we define g j by the formula (4.1) with the following choice of branch for log(z − s) with s ∈ S(µ j ).
(a) For j = 1, 3, we define log(z − s) for s ∈ R with a branch cut along (−∞, s] on the real line.
(b) For j = 2, we define log(z − s) for s ∈ S(µ 2 ) ⊂ iR with a branch cut which is partly on the real line and partly on the imaginary axis.
In all cases the definition of log(z − s) is such that the functions g j in (4.1) are well defined and analytic away from the branch cuts. In what follows we will frequently use the numbers α k given in the following definition.
The above definitions are such that the following hold.
In particular, the stated formula holds with α k as in (4.2). Here we have put b 0 = −∞ and a N +1 = +∞.
(b) We have the stated expressions.
Proof. Parts (a) and (d) are immediate from (4.1).
Part (b) follows from the definition of g 2 , the symmetry of the measure µ 2 on iR, and the fact that ∫ dµ 2 = 2/3. For part (c), we note the corresponding identity for z ∈ iR + , from which the first equation in (4.6) follows. The second equation follows in a similar way.
We have the following jump properties.
For part (b) we note the relevant identity. Also, because of the symmetry of g 1 and g 3 around 0, a corresponding relation holds. Using this in (1.26) we obtain (4.11). Part (c) follows from (1.29) in the same way that we obtained part (a) from (1.23).
In what follows we also use the derivatives of the g-function, which we denote by F 1 , F 2 , F 3 .
Definition 4.5. For j = 1, 2, 3 we define F j = g ′ j , which is defined and analytic for z ∈ C \ S(µ j ).
The jump of F j gives us the density of µ j , since we have (4.14)
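Formula (4.14) is an instance of Stieltjes inversion: the density is recovered from the jump of the Cauchy transform across the support. A sketch with the semicircle law (illustrative only; this is not one of the F j of the paper):

```python
import numpy as np

def G(z):
    """Cauchy transform of the semicircle law; G(z) ~ 1/z at infinity."""
    return (z - z * np.sqrt(1.0 - 4.0 / z**2)) / 2.0

x, eps = 1.0, 1e-7
# density = (G_- - G_+) / (2*pi*i): the jump across the support [-2, 2]
density = (G(x - 1j * eps) - G(x + 1j * eps)) / (2j * np.pi)
exact = np.sqrt(4.0 - x**2) / (2.0 * np.pi)
print(density.real, exact)  # both close to sqrt(3)/(2*pi)
```

The jump relation (4.14) for F j has exactly this form, with µ j in place of the semicircle law.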

Riemann surface R and ξ-functions
We construct a four sheeted Riemann surface R in the following way. The four sheets R j are glued along the cuts S(µ 1 ) (connecting R 1 and R 2 ), S(σ 2 − µ 2 ) (connecting R 2 and R 3 ), and S(µ 3 ) (connecting R 3 and R 4 ). Every connection is in the usual crosswise manner. We compactify the Riemann surface by adding a point at infinity to the first sheet R 1 , and a second point at infinity which is common to the other three sheets. The genus of the Riemann surface is N − 1 in Cases I, II, and III, and N in Cases IV and V (recall that S(µ 1 ) consists of N intervals, and recall the classification of the cases in Section 1.5).
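The genus count can be checked with the Riemann–Hurwitz formula. The following sketch assumes the gluing just described: simple branch points at the 2N endpoints of S(µ 1 ), at ±ic 2 and ±c 3 when c 2 , c 3 > 0, and a triple branch point at the common point at infinity of the last three sheets.

```latex
% Riemann-Hurwitz for the 4-sheeted cover R -> \overline{\mathbb{C}}:
%   2 - 2g = 4 \cdot 2 - B,  i.e.  g = B/2 - 3,
% where B is the total branching number.
%
% Cases IV and V (c_2 > 0 and c_3 > 0):
%   B = 2N + 2 + 2 + 2 = 2N + 6  =>  g = N.
% Cases I, II, III (exactly one of c_2, c_3 vanishes, so one pair of
% finite branch points is absent):
%   B = 2N + 2 + 2 = 2N + 4      =>  g = N - 1.
\[
  2 - 2g \;=\; 4\cdot 2 - B, \qquad g \;=\; \tfrac{B}{2} - 3 .
\]
```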
Using the functions F j defined in Definition 4.5 we define the ξ functions.
Proof. On (−ic 2 , ic 2 ) we have, by (4.6) and (4.13), the required identities. Since τ s 1 = θ 1 , and since F 1 is analytic on the imaginary axis, this proves part (a). The above argument also applies to the analogous identities, since F 1 and F 3 are analytic on the imaginary axis. If c 3 > 0 (which can only happen if α < 0), then F 3 is analytic across (−c 3 , c 3 ). Both ξ 2 and ξ 3 are analytic across (−x * (α), x * (α)), and so a fortiori across (−c 3 , c 3 ), since c 3 < x * (α). This proves part (c) and the remaining statement of part (b).
We continue to denote the analytic extensions by ξ j , j = 2, 3, 4. The function defined on the sheets by ξ(z) = ξ j (z) for z ∈ R j extends to a meromorphic function (also denoted by ξ) on R. The meromorphic function has a pole of order deg V − 1 at infinity on the first sheet, and a simple pole at the other point at infinity.
As z → ∞, it follows from (4.17) and (4.13) that ξ has a pole of order deg V − 1 at the point at infinity on the first sheet. From (4.17), (4.13) and Lemma 2.4 it also follows that the corresponding asymptotics hold for j = 2, 3, 4, where the value of k depends on j and on the quadrant in which z → ∞.
Since the other point at infinity (which is common to the second, third and fourth sheets) is a double branch point, we have that z −1/3 is a local coordinate, so that ξ indeed has a simple pole.
Remark 4.9. It follows from Proposition 4.8 that ξ j , j = 1, 2, 3, 4 are solutions of a quartic equation with coefficients that are polynomial in z. This algebraic equation is known as the spectral curve [13]. The degrees of the polynomial coefficients are determined by the degree of V .
Using this fact, we indicate how to remove the condition that S(µ 1 ) and S(σ 2 − µ 2 ) are disjoint in part (a) of Theorem 1.1. This condition was included in order to be able to conclude that S(µ 1 ) is a finite union of intervals. In case the condition does not hold, we can now argue as follows.
Given ε > 0 we modify the vector equilibrium problem by an additional ε-dependent requirement. Then the proof of existence and uniqueness of the minimizer follows in the same way as in Section 3. Let us denote the minimizer by (µ ε 1 , µ ε 2 , µ ε 3 ). Because the external field V 1 − U µ ε 2 is real analytic on R \ (−ε, ε), we have that the support of µ ε 1 is a finite union of intervals. The further structure of the minimizers is the same, which means that we can construct a Riemann surface with a globally meromorphic function on it, in the same way as we did in this section. The only difference is that the meromorphic function may have a pole at ±ε. The algebraic equation (4.25) has coefficients that are rational in z, with ±ε as the only (simple) poles, and so we may write the spectral curve as (4.26), with polynomial coefficients q j (z) whose degrees are determined by the degree of V . In particular, the degrees do not depend on ε.
As ε → 0 we have that µ ε j converges weakly to µ j for j = 1, 2, 3, and it is not difficult to show that the spectral curve (4.26) has a limit as well.
In particular, the Stieltjes transform ∫ dµ 1 (s)/(z − s) is the solution of an algebraic equation, and therefore the support of µ 1 is a finite union of intervals.

Properties of the ξ functions
From (4.14) and the definition (4.17) we find (4.27). Since ξ is an odd function with a simple pole at infinity on sheets 2, 3 and 4, there is an expansion in the local coordinate z −1/3 with certain real constants c j . Keeping track of the principal branches, we obtain the following asymptotic behavior of ξ 2 , ξ 3 and ξ 4 as z → ∞.
Note that the structure of the formulas in terms of the factors ω and ω 2 is the same as in Lemma 2.3 and Lemma 2.4.
Lemma 4.10. We have, as z → ∞, the asymptotic behavior with the constants c −1 and c 1 given in (4.28).
Proof. The form of the asymptotics follows from the definition (4.17) and Lemmas 2.3 and 2.4. Since µ 1 is a probability measure with compact support, symmetric with respect to the origin, we see that F 2 has a series representation around z = ∞ in powers of z 1/3 . It should start with F 2 (z) = (2/3)z −1 + · · · , since µ 2 has total mass 2/3, see (4.13). Then the coefficients of z 1/3 and z −1/3 should vanish, which by Lemma 2.3 and (4.30) leads to the expressions in (4.28) for c −1 and c 1 . The coefficient of z −1 should be 2/3, which by (4.29) and (4.30) leads to c 3 = 1/3. Combining (4.29) and (4.30) with Lemmas 2.3 and 4.10, we also find that there is a real constant C such that (4.31) holds as z → ∞. Then by (4.17) and Lemmas 2.4 and 4.10, we also get (4.32) and (4.33) as z → ∞, with the same constant C. Using this and Lemma 2.4 in (4.27), we find (4.34), which gives the precise rate of decay of the densities of µ 2 and µ 3 along the imaginary and real axes, respectively. It also follows from (4.34) that the constant is positive, C > 0. Furthermore, after integration, we find from (4.29), (4.32), and (4.33) the integrated asymptotics as z → ∞. It may be verified from (4.1) that the constant of integration indeed vanishes.

The λ functions
The λ functions are primitive functions of the ξ functions (that is, Abelian integrals). They are defined as follows.
Definition 4.11. We define the λ j -functions by (4.35), as primitives of the ξ j -functions. It is convenient to consider each λ j function as defined on R j with an extra cut to ensure single-valuedness. Thus, λ 1 is defined and analytic off the cuts.
The λ-functions satisfy the following jump conditions, which follow by combining the jumps for the g-functions given in Lemmas 4.3 and 4.4. (a) On the real line the stated jumps hold for k = 0, . . . , N , where b 0 = −∞, a N +1 = +∞, and α k is given by (4.2).
(b) On the imaginary axis we have (4.40) and the accompanying relations. (c) Finally, we have (4.42).
Proof. The equations follow by combining the jumps for the g-functions given in Lemmas 4.3 and 4.4 with the properties (2.10) of the functions θ j . Only (4.40) requires more explanation. From (4.6) and (4.35) we get the required identity for z ∈ (0, ic 2 ), where we used the fact that g 1 and g 3 are analytic on the imaginary axis.
Since σ 2 has density (2.7) and θ 1 = τ s 1 , we then obtain (4.40). The λ functions also satisfy a number of inequalities.
Lemma 4.13. We have the inequalities (4.43)–(4.45).
Proof. The inequalities follow from the inequalities in Lemma 4.4.
For example, to obtain (4.43) we note that, by (4.35) and (2.14), the expression in question indeed has a negative real part on R \ S(µ 1 ), because of the inequality in (4.10). The inequality is strict because of the regularity assumption. The other inequalities (4.44) and (4.45) follow in a similar way.
The asymptotic behavior of the λ-functions follows by combining the definition (4.35) with the asymptotics of the ξ-functions. We state it for ease of future reference.
Lemma 4.14. We have, as z → ∞, the asymptotics (4.47)–(4.49), where C is the constant from (4.31).

Pearcey integrals and the first transformation of the RH problem
Now we come to the first transformation of the RH problem (1.36), which as in [45] will be done with the help of Pearcey integrals. The present setup is, however, slightly different from the one in [45], since we will construct an n-dependent RH problem for the Pearcey integrals.

Definitions
We note that the weights from (1.35) can be written in terms of a Pearcey integral, which satisfies the third order linear ODE (5.3). Other solutions to the same ODE are given by similar integrals.
Definition 5.1. For j = 0, . . . , 5 and n ∈ N, we define the Pearcey integrals p j,n by (5.4), where the contours Γ j are as shown in Figure 5.1 (or homotopic deformations thereof), with the orientation as also shown in Figure 5.1.
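The fact that integrals of the form p(z) = ∫ exp(n(−W (s) + τ zs)) ds satisfy a third order ODE in z comes from integration by parts: the total derivative of the exponent integrates to zero along an admissible contour, and z-derivatives bring down factors nτ s. A numerical check of this moment identity for the contour along R (illustrative values n = 1, α = 1, τ = 1; the paper's contours Γ j differ):

```python
import numpy as np

n, alpha, tau, z = 1.0, 1.0, 1.0, 0.7
s = np.linspace(-8.0, 8.0, 200_001)
h = s[1] - s[0]
w = np.exp(n * (-(s**4) / 4 - alpha * s**2 / 2 + tau * z * s))

# d/ds of the exponent integrates to zero over R, hence
#   integral (s^3 + alpha*s) w ds = tau*z * integral w ds.
# Since z-derivatives of p bring down powers of s, this identity is what
# turns p into a solution of a third order linear ODE in z.
lhs = np.sum((s**3 + alpha * s) * w) * h
rhs = tau * z * np.sum(w) * h
print(lhs, rhs)  # equal up to quadrature error
```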
All Pearcey integrals (5.4) are entire functions in the complex plane. Certain combinations of these functions are used to build a 3 × 3 matrix valued function P n : C \ (R ∪ iR) → C 3×3 as follows.
Definition 5.2. In each of the four quadrants (denoted I, II, III and IV ) we define P n as in (5.6). A linear relation among the Pearcey integrals then implies the jump relation (5.7) for z ∈ R + , in view of the definition of P n in the first and fourth quadrants given in (5.6).
The other jumps are proved in a similar way.

Large z asymptotics
The large z behavior of P n (z) is obtained from a classical saddle point analysis of the integrals (5.4) that define its entries. Recall that the saddle point equation (2.2) has the three solutions s 1 (z), s 2 (z) and s 3 (z), see Section 2.1. The value of −W (s) + τ zs at the saddle s j (z) is denoted by θ j (z), as in Section 2.2.
Lemma 5.4. We have as z → ∞, where Θ is defined as in (2.11).
Proof. The proof is a tedious saddle point analysis for all integrals that define P n in the respective quadrants, see also [21]. The ODE (5.3) can be used to find the form of the asymptotic expansion. Indeed, putting p = e nF in (5.3), we obtain a differential equation for f = F ′ that is valid as z → ∞ in one of the quadrants. Here ω = e 2πi/3 , as before. Thus, after integration and by Lemma 2.4, we find that for a certain j = 1, 2, 3 and a constant C, there are solutions of the Pearcey ODE (5.3) that behave like (5.10) as z → ∞ in one of the quadrants. Each of the functions p 0,n , . . . , p 5,n that appears in the definition of P n in a certain quadrant has an asymptotic expansion as in (5.10). We have to associate with each such function a value of j and the corresponding value of k. To do this, we have to perform a saddle point analysis on the integral representation (5.4).
The relevant saddles are the solutions s j (z) of the saddle point equation (2.2). It turns out that in each quadrant and for each j = 1, 2, 3, we have that s j (z) is the relevant saddle for the Pearcey integral that defines the function in the jth column. Thus the initial contour can be deformed into the steepest descent path through s j (z), or into a union of steepest descent paths with s j (z) as the determining saddle. Then by a classical saddle point analysis, see e.g. [73], we obtain the asymptotic behavior (5.12) as z → ∞. Since W ′′ (s) = 3s 2 + α, and s j (z) = O(z 1/3 ) by Lemma 2.3, we find from (5.12) a behavior which then, by (5.10), takes the stated form for a certain value of k, depending on j and on the particular quadrant. In the first quadrant, for example, we have k = j − 1, as follows from Lemma 2.3. We may differentiate the expansions, and we obtain analogous expansions for the first and for the second derivative. In each quadrant we have to take the correct sign (determined by the orientation of the contours) and the correct value of k. The formulas can then be written in the form (5.9).
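The classical saddle point approximation used in this proof can be checked numerically in a simple real-saddle situation (illustrative values α = 1, τ = 1, z = 1, n = 50; the genuine Pearcey contours require complex steepest descent paths):

```python
import numpy as np

n, alpha, tau, z = 50.0, 1.0, 1.0, 1.0

f = lambda s: -(s**4) / 4 - alpha * s**2 / 2 + tau * z * s

# Saddle point: f'(s) = 0, i.e. s^3 + alpha*s = tau*z; take the real root.
roots = np.roots([1.0, 0.0, alpha, -tau * z])
s0 = roots[np.isclose(roots.imag, 0.0)].real.max()
f2 = -(3.0 * s0**2 + alpha)               # f''(s0) < 0

# Leading-order saddle point (Laplace) approximation vs. direct quadrature
# of the integral of exp(n f(s)) over R.
laplace = np.exp(n * f(s0)) * np.sqrt(2.0 * np.pi / (n * -f2))
s = np.linspace(-4.0, 5.0, 900_001)
exact = np.sum(np.exp(n * f(s))) * (s[1] - s[0])
print(exact / laplace)  # close to 1, up to O(1/n) corrections
```

The 1/n correction visible here corresponds to the error terms carried along in the expansions (5.9) and (5.10).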
Let us define the constant matrices A j as follows.
Definition 5.5. We define the matrices A j , j = 1, 2, 3, 4, by (5.13) and (5.14). The prefactor i √ 3 is chosen such that det A j = 1 for j = 1, 2, 3, 4. Then we can reformulate Lemma 5.4 as follows.
Corollary 5.6. We have (5.15) as z → ∞ in the jth quadrant, where P n,0 is the invertible matrix given in (5.16). The asymptotic formula (5.15) will be the most convenient to work with in what follows. Note that we still have an error term I + O(z −2/3 ), which is somewhat remarkable.

First transformation: Y → X
With the 3×3 matrix-valued P n we can now perform the first transformation of the Riemann-Hilbert problem. Recall that Y is the solution of the RH problem (1.36).
Definition 5.7. We define X by (5.17) for z in the jth quadrant. Here C n and D n are the constant matrices from (5.18), with P n,0 given by (5.16), P n given by (5.6), and Θ given by (2.11).
The matrices in the right-hand side of (5.17) are 4×4 matrices written in block form, where the right lower block has size 3 × 3. The transformation (5.17) does not affect the (1, 1) entry. The factor e nΘ (z) is included in the definition of X(z) in order to simplify the asymptotic behavior of X. However, it will complicate the jump matrices, as we will see.
The matrix valued function X is defined and analytic in each of the four quadrants.
The asymptotic behavior for X is as follows.
Lemma 5.8. We have (5.19) as z → ∞ in the jth quadrant, where the matrices A j are given by (5.13) and (5.14).
As in [45] we have that the asymptotic formula (5.19) for X is not uniform up to the axis.
Proof. By Corollary 5.6 and the definitions (5.18) of C n and D n we have as z → ∞ in the jth quadrant. Using this in (5.17), together with the asymptotics of Y as given in (1.36), we obtain the lemma.
The jumps for X on the real and imaginary axis are as follows.
Lemma 5.9. We have X + = X − J X , where the jump matrices J X are given as follows.
• On the real line we have (5.20) and (5.21), and on the imaginary axis (5.22) and (5.23).

RH problem for X
To summarize, we have found the following RH problem for X, where J X is given by (5.20)–(5.23).
Each of the jump matrices J X is nontrivial only in certain 2 × 2 blocks. The nontrivial blocks are triangular and assume one of the forms [ 1 ∗ ; 0 1 ] or [ 1 0 ; ∗ 1 ] with a real off-diagonal entry ∗, or [ ∗ 1 ; 0 ∗ ] or [ ∗ 0 ; 1 ∗ ] with oscillatory diagonal entries of absolute value 1. The first form indicates that an external field is acting, and the second form indicates the presence of an upper constraint. In this way we can already see the connection with the vector equilibrium problem. Let us examine this in more detail.
Jump J X on the real line. The jump matrix J X on the real line, see (5.20) and (5.21), takes a block form in which (J X ) 1 and (J X ) 3 are 2 × 2 matrices. The real off-diagonal entry of (J X ) 1 contains the external field V 1 that acts on the first measure in the vector equilibrium problem, see (2.14). Furthermore, by (5.21) and (2.15), the off-diagonal entry of (J X ) 3 contains the non-zero part of the external field V 3 that acts on the third measure in the vector equilibrium problem. The external field V 3 plays a role only in case x * (α) > 0, that is, in case α < 0. The right lower block in (5.20) has oscillatory diagonal entries. We define ψ 3 by (5.33). Then ψ 3,± is purely imaginary for |x| > x * (α), with ψ 3,+ (x) − ψ 3,− (x) = τ (s 2,+ (x) − s 2,− (x)) = 2iτ Im s 2,+ (x), which is purely imaginary with positive imaginary part. Thus we can associate with ψ 3 a measure σ 3 on R. Because of the upper triangular form of (5.32), it will turn out that σ 3 acts as a lower constraint on the third measure, in the sense that (5.34) µ 3 + σ 3 ≥ 0, and then we could allow signed measures µ 3 in the vector equilibrium problem. However, in this more general vector equilibrium problem we would still find µ 3 ≥ 0, so that the constraint (5.34) does not play a role after all. We will not use the measure σ 3 any more.
6 Second transformation X → U

Definition of second transformation
The second transformation of the RH problem uses the functions that come from the vector equilibrium problem. It is possible to state the transformation in terms of either the g-functions or the λ-functions.
Definition 6.1. We define the 4 × 4 matrix valued function U by (6.1), where C is the constant from (4.31), G is given by (6.2), and L is a constant diagonal matrix. Note that the equality of the two diagonal matrices in (6.2) follows from the definition (4.35) of the λ-functions.
Then U is defined and analytic in C \ (R ∪ iR).
6.2 Asymptotic behavior of U
Lemma 6.2. We have the following asymptotic behavior.
Proof. Because of the asymptotic behavior of the λ functions, we have e −nG(z) = diag(z −n e −nℓ 1 , z n/3 , z n/3 , z n/3 ) up to lower order terms. Then by the asymptotic behavior of X and the definition of U , we obtain the asymptotics as z → ∞ in the jth quadrant. We can move the O(z −1 ) term to the front, but then the O(z −2/3 ) reduces to O(z −1/3 ). We also want to move the z −2/3 term to the left. Then we pick up an O(1) contribution in the (2, 4) entry. Indeed, the corresponding identity holds for every j. The lemma follows.
with jump matrix J U , which is indeed what we expect.

Jump matrices for U
The jump U + = U − J U with jump matrix J U takes a different form on the various parts of the real and imaginary axis. Since G and L are diagonal matrices, the jump matrix J U has the same block structure as J X . In terms of the λ functions the jumps take on a very nice form.
The expressions in (6.9)-(6.11) are valid over the full (real or imaginary) axis. Observe in particular that the two expressions (5.36) and (5.37) for (J X ) 2 both lead to (6.10), and the two expressions (5.30) and (5.32) for (J X ) 3 both lead to (6.11). Hence the special roles that ±x * (α) (in case α < 0) and ±iy * (α) (in case α > 0) played in the jump matrix J X for X have disappeared in the jump matrix J U for U .
The parts (J U ) k of the jump matrix J U have different expressions on the various parts of the real and imaginary axis. This follows from (6.9)–(6.11) and the jump properties of the λ-functions as given in Lemma 4.12. Also recall that n is a multiple of three. On the real line, the relevant intervals are the intervals of S(µ 1 ) and the gaps (b k , a k+1 ), for k = 0, 1, . . . , N.
In particular, note that on (−c 3 , c 3 ) (which can only be non-empty if α < 0), the corresponding expression follows by (4.42).

Opening of lenses
The next step in the steepest descent analysis is the opening of lenses around S(µ 1 ), S(σ 2 − µ 2 ) and S(µ 3 ). This will be done in the third and fourth transformations U → T and T → S.

Third transformation U → T
In the third transformation we open the lenses around S(σ 2 − µ 2 ) and S(µ 3 ), which will be denoted by L 2 and L 3 , respectively. The lenses will be closed unbounded sets that do not intersect. There are three situations, depending on whether c 2 and c 3 are positive or zero. We recall that by regularity we cannot have c 2 = c 3 = 0. The three cases differ in the shapes of the lenses, which is due to the different forms of S(σ 2 − µ 2 ) and S(µ 3 ), as illustrated in Figures 7.1, 7.2 and 7.3.
• In the Cases IV and V we have that both c 2 and c 3 are positive and we choose the lips of the lenses such that they have ±ic 2 and ±c 3 as endpoints, as shown in Figure 7.1.
• In Case III we have c 2 = 0 and c 3 > 0, and now we open the lens such that the lips around S(σ 2 − µ 2 ) = iR stay away from the imaginary axis and intersect the real line at two points ±γ 3 , as in Figure 7.2. We recall that in Case III we have 0 ∉ S(µ 1 ), and hence the number N of intervals in S(µ 1 ) is even. We then choose γ 3 such that (7.1) holds in Case III. The lens around S(µ 3 ) is as in Cases IV and V.
• In Cases I and II we have c 2 > 0 and c 3 = 0. We then take the lens around S(σ 2 − µ 2 ) as in Cases IV and V above, but we choose the lips of the lens around S(µ 3 ) = R such that they stay away from the real axis and intersect the imaginary axis at two points ±iγ 2 , with 0 < γ 2 < c 2 in Cases I and II (7.2). We choose all lenses to be symmetric with respect to both the real and imaginary axes.
Note that we have defined γ 3 in Case III and γ 2 in Cases I and II. For ease of presentation we also define these numbers in the remaining cases. There are further requirements on the lenses that are important for the steepest descent analysis. These are formulated in the next two lemmas.
Using (4.9), (2.7) and (2.9) we can further rewrite this to an expression which is purely imaginary. The resulting derivative is positive for y ∈ R in Case III, and for |y| > c 2 > 0 in the other cases.
The first statement of the lemma then follows from the Cauchy–Riemann equations, since Re(λ 3 − λ 2 ) ± = 0 on the imaginary axis. The second statement then follows from the asymptotic behavior of λ 2 and λ 3 as given in Lemma 4.14.
Proof. The proof is similar to the proof of Lemma 7.1.
Using (4.35) we obtain λ 3,± − λ 4,± = −g 3,+ − g 3,− + g 2,± + θ 2,± − θ 3,± ± (g 3,− − g 3,+ ). Now by (2.15) and (5.33), combined with (4.12) and (4.7), this leads to a purely imaginary expression. We find that the relevant derivative is positive for x ∈ R in Cases I and II, and for |x| > c 3 > 0 in the other cases. As in Lemma 7.1, the first statement of the lemma now follows by the Cauchy–Riemann equations. The second statement follows by (4.48), (4.49), and Lemma 2.4.
Now that we have defined the lenses L 2 and L 3 , we can come to the actual definition of the transformation U → T . The transformation is based on the factorization of (J U ) 2 on S(σ 2 − µ 2 ) as given by (6.16), and on the factorization of (J U ) 3 on S(µ 3 ) as given by (6.17) (recall that λ 3,± = λ 4,∓ or λ 3,± = λ 4,∓ ± (2/3)πi there, and that n is a multiple of three).
Definition 7.3. We define the 4 × 4 matrix valued function T by (7.6) in the left part of the lens around S(σ 2 − µ 2 ), and by (7.7) in the right part of the lens around S(σ 2 − µ 2 ), with analogous formulas in the lens around S(µ 3 ). Then T is defined and analytic in C \ Σ T , where Σ T is the contour consisting of the real and imaginary axes and the lips of the lenses around S(σ 2 − µ 2 ) and S(µ 3 ).
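The factorizations of (J U ) 2 and (J U ) 3 underlying Definition 7.3 are both instances of the standard 2 × 2 steepest descent identity, written here generically; a = a(z) stands for an oscillatory diagonal entry of absolute value 1 on the support (this shorthand is ours, not the paper's notation):

```latex
\begin{pmatrix} a & 1 \\ 0 & a^{-1} \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ a^{-1} & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix} .
```

The left and right triangular factors are then moved into the left and right parts of the lens, where the inequalities of Lemmas 7.1 and 7.2 make their off-diagonal entries exponentially small in n.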

RH problem for T
Now T solves the following RH problem with certain jump matrices J T that will be described in the next subsection. The matrices A j are given in (5.13)-(5.14). The contour Σ T and some of the jump matrices J T are shown in Figures 7.1, 7.2 and 7.3.
Since the lenses around S(σ 2 − µ 2 ) and S(µ 3 ) are unbounded, we have to be careful about the asymptotic behavior of T (z) as z → ∞, since it could be different from the asymptotic behavior of U (z). However, a simple check using (4.47)-(4.49), (7.6) and (7.7) shows that the asymptotic behavior is the same.
However, it is good to note the following. The asymptotic behavior of the Pearcey functions given in (5.9) is not uniform up to the real and imaginary axes. By following the transformations Y → X → U we see that the same is true for the asymptotic behavior of U . It requires an independent check that after opening of the unbounded lenses the asymptotic behavior of T is in fact uniform in each of the quadrants. This phenomenon also appeared in [45].

Jump matrices for T
Our next task is to compute the jump matrices J T in the Riemann-Hilbert problem (7.9) for T . The definitions (7.6) and (7.7) of T and the structure of the jump matrix J U as given in (6.7) and (6.8) yield that where J T has again the structure on R \ (−γ 3 , γ 3 ) and on the lips of the lens around S(µ 3 ), and on the lips of the lens around S(σ 2 − µ 2 ), (7.10) with 2 × 2 blocks (J T ) k for k = 1, 2, 3. The block structure as in (7.10) is not valid on the intervals (−γ 3 , γ 3 ) and (−iγ 2 , iγ 2 ) (if non-empty). On these intervals the block structure changes to (7.11) with some non-zero entries that are denoted by * , see (7.15) and (7.16) below.
The diagonal blocks are given in the next lemma. where (J U ) 1 is given by (6.15).
In order to compute the jump matrix J T on (−γ 3 , γ 3 ), we have to note that by the regularity assumption we have that 0 / ∈ S(µ 1 )∪S(µ 3 ), and by the choice of γ 3 in (7.1) we have that [−γ 3 , γ 3 ] is disjoint from S(µ 1 ) ∩ S(µ 3 ). Also, the measure µ 1 is symmetric so that α N/2 = 1/2. Hence the jump matrix J U given in (6.7), (6.15) and (6.17) takes the form From (7.6) we obtain After some calculations using (4.37), (4.38), (4.42) and the fact that n is a multiple of 3 we obtain from the last two expressions that Cases I and II: γ 2 > 0. In Cases I and II we have c 3 = 0 and the lips of the lens around S(µ 3 ) intersect the imaginary axis at ±iγ 2 with γ 2 > 0. By (6.8) and (6.16) we have Combining this with (7.7) and using (4.40), the fact that λ 4 is analytic on iR \ {0} and the fact that n is a multiple of 3 leads to

Fourth transformation T → S
In the fourth transformation we open up a lens around each interval [a k , b k ] of S(µ 1 ). The union of these lenses is denoted by L 1 ; it is a closed and bounded set. This is done in a standard way based on the factorization (7.17) Lemma 7.5. We can and do choose the lenses L 1 around S(µ 1 ) such that Proof. By (4.35) we obtain By (4.10), (2.14) and (4.3) this leads to Thus ±(λ 1 − λ 2 ) ± is purely imaginary and which is strictly positive on each of the intervals (a k , b k ) by the regularity assumption on µ 1 . The lemma follows by the Cauchy-Riemann equations.
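The factorization (7.17) itself is not reproduced in this excerpt, but the opening of lenses is generically based on the standard 2 × 2 factorization of an oscillatory jump matrix into lower/upper triangular lip factors. A minimal numerical sketch (the value of a below is hypothetical; in the text a plays the role of an oscillatory entry such as e n(λ 1,+ −λ 2,+ ) on S(µ 1 )):

```python
import numpy as np

# Standard 2x2 lens factorization behind the opening of lenses:
#   [[a, 1], [0, 1/a]] = [[1, 0], [1/a, 1]] [[0, 1], [-1, 0]] [[1, 0], [a, 1]]
# The middle factor is the jump left on the support; the outer factors are
# moved to the lower and upper lips of the lens, where they tend to I.
a = np.exp(0.3 + 1.7j)          # hypothetical sample value of the entry

J = np.array([[a, 1.0], [0.0, 1.0 / a]])
lip_minus = np.array([[1.0, 0.0], [1.0 / a, 1.0]])   # factor on the lower lip
flip = np.array([[0.0, 1.0], [-1.0, 0.0]])           # jump remaining on the support
lip_plus = np.array([[1.0, 0.0], [a, 1.0]])          # factor on the upper lip

assert np.allclose(lip_minus @ flip @ lip_plus, J)
```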
In addition to the condition described in the lemma we also make sure that the lenses around S(µ 1 ) do not intersect the lens around S(σ 2 −µ 2 ) and the lips of the lens around S(µ 3 ), except for the case when ±c 3 ∈ S(µ 1 ). In that case we choose the lips around the interval(s) containing ±c 3 in such a way that they intersect the lips of the lens around S(µ 3 ) exactly once in each quadrant, as shown in Figure 7. If 0 ∈ S(µ 1 ) then we put γ 2 = 0.
We also take the lenses to be symmetric with respect to both the real and imaginary axes.
We now define S as follows.
Definition 7.6. We define in the upper part of the lenses around S(µ 1 ), in the lower part of the lenses around S(µ 1 ), Then S is defined and analytic in C\Σ S where Σ S is the contour consisting of the real and imaginary axes and the lips of the lenses around S(µ 1 ), S(σ 2 − µ 2 ) and S(µ 3 ).

RH problem for S
The asymptotic behavior as z → ∞ clearly has not changed, and so S satisfies the following RH problem.
with jump matrices J S that are described next.
The jump matrices J S have again the block structure On these intervals we have with possible non-zero entries that are denoted by * .
The diagonal blocks are given in the following lemma.
We next give the jump matrices J S on the intervals (−γ 3 , γ 3 ), (−iγ 2 , iγ 2 ) and (−iγ̂ 2 , iγ̂ 2 ) and in particular the off-diagonal entries that were denoted by * in (7.22). They depend on Cases I-V. We simply present the formulas without further comment. Of course they follow from the jump matrices J T and the transformation (7.18)-(7.19). Note that Case I. In Case I we have 0 ∈ S(µ 1 ), c 2 > 0 and c 3 = 0 so that γ 2 > 0 and γ̂ 2 > 0 and γ 3 = 0. By construction γ̂ 2 < γ 2 and we find in Case I where J T is given by (7.16).

Behavior of jumps as n → ∞
Having collected all the jump matrices J S we may study their behavior as n → ∞. It turns out that all off-diagonal entries of the form ±e n(λ j −λ k ) are such that Re(λ j − λ k ) < 0 and therefore they are exponentially decaying as n → ∞. For (J S ) 1 in (7.23) we have off-diagonal entries e n(λ 2,+ −λ 1,− ) on R \ S(µ 1 ), e n(λ 1 −λ 2 ) on the lips of the lenses around S(µ 1 ), (7.30) which are indeed exponentially decaying because of (4.43) and Lemma 7.5.
For (J S ) 2 in (7.24) we have off-diagonal entries on the lips of the lens L 2 around S(σ 2 − µ 2 ), (7.31) which are exponentially decaying because of (4.44) and Lemma 7.1.
Since we are in Case I, the inequalities from (4.44) and Lemmas 7.2 and 7.5 apply on [−iγ̂ 2 , iγ̂ 2 ] since this interval is contained in iR \ S(σ 2 − µ 2 ) and it belongs to the two lenses L 1 and L 3 . So we have. In fact, equality in the first and third inequalities of (7.34) holds only at 0. Then indeed all entries in (7.33) are exponentially decaying as n → ∞, uniformly on [−iγ̂ 2 , iγ̂ 2 ].
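The mechanism behind all of these estimates is the same and can be illustrated numerically (the value of λ j − λ k below is hypothetical; only the sign of its real part matters):

```python
import numpy as np

# An off-diagonal entry e^{n(lambda_j - lambda_k)} with Re(lambda_j - lambda_k) < 0
# decays exponentially in n, no matter how oscillatory the imaginary part is.
diff = -0.2 + 5.0j               # hypothetical value of lambda_j - lambda_k

sizes = [abs(np.exp(n * diff)) for n in (10, 20, 40)]
# |e^{n diff}| = e^{n Re(diff)}, so the moduli decay geometrically in n:
assert np.allclose(sizes, [np.exp(-0.2 * n) for n in (10, 20, 40)])
assert sizes[0] > sizes[1] > sizes[2]
```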
In the next step of the steepest descent analysis we will ignore all exponentially small entries in the jump matrices J S . This will lead to matrices J M that we use as jump matrices for the so-called global parametrix. The matrices J M are given in (8.3)-(8.6) below.

Global parametrix
If we ignore all entries in the jump matrices J S that are exponentially small as n → ∞, we find the following model Riemann-Hilbert problem for M : where J M is given as follows. On the real line, the jump matrix has the block form On the imaginary axis the jump matrix has the block form Note that we have strengthened the O term in the asymptotic condition in (8.1). The solution has fourth-root singularities at all branch points a k , b k , for k = 1, . . . , N , and at ±ic 2 , ±c 3 if c 2 , c 3 > 0. We give more details on the construction in the rest of this section.

Riemann surface as an M -curve
We follow the approach of [67] in using meromorphic differentials on the Riemann surface as the main ingredient in the construction of the global parametrix. Recall that R is a four-sheeted cover of the Riemann sphere. We use π : R → C to denote the canonical projection, and we let π j be its restriction to the jth sheet. Then for z ∈ C we have that denotes the point on the jth sheet that projects onto z. It is well-defined for z ∈ C \ (R ∪ iR). The Riemann surface R has the structure of an M -curve. It has an anti-holomorphic involution φ : R → R : P → P̄ with P̄ on the same sheet as P . The fixed point set of φ consists of g + 1 connected components (g is the genus of R) {P ∈ R : φ(P ) = P } = Σ 0 ∪ Σ 1 ∪ · · · ∪ Σ g where • Σ 0 contains ∞ 1 and consists of the intervals (−∞, a 1 ] and [b N , ∞) on the first and second sheets.
• Σ i for i = 1, . . . , N − 1 contains the intervals [b i , a i+1 ] on the first and second sheets.
A full description of the curves depends on the case we are in. Recall that there are five cases.
Cases I and II: In Cases I and II we have c 3 = 0 and the genus is N − 1.
In these cases we have only the above curves Σ 0 , . . . , Σ N −1 and they are on the first and second sheets only.
Case III: In Case III we still have g = N − 1. Since 0 ∉ supp(µ 1 ), the number N is even (by symmetry). We need the following result on non-special divisors on M -curves.
Recall that the divisor P 1 + · · · + P g is non-special if and only if there are no non-constant analytic functions on R \ {P 1 , . . . , P g } with only simple poles at the points P 1 , . . . , P g .

Canonical homology basis
The Riemann surface has a canonical homology basis {A 1 , . . . , A g ; B 1 , . . . , B g } that we choose such that the cycle A j is homologous to Σ j but disjoint from it. In Case II there is an A-cycle that intersects the imaginary axis. We make sure that on the second sheet it does so in the interval (−ic 2 , 0).
In Case III we have c 2 = 0 and c 3 > 0. The genus of R is g = N − 1. Note that in this case, the number of intervals in S(µ 1 ) is even and the origin does not belong to S(µ 1 ). The canonical homology basis is chosen as where the symbol ∼ means that φ(A j ) is homologous to A j in R.

Meromorphic differentials
Let us now recall some facts about meromorphic differentials on the Riemann surface. Most of the results that we will be using can be found in [53]. A meromorphic differential with simple poles only is called a meromorphic differential of the third kind. A meromorphic differential of the third kind is uniquely determined by its A-periods and the location and residues at its poles, provided that the residues add up to zero.
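As a toy illustration of these facts (a hypothetical differential on the Riemann sphere, not the ω P constructed below), one can check numerically that a differential of the third kind has residues summing to zero:

```python
import numpy as np

# omega = (1/(z-1) - 1/(z+1)) dz is a differential of the third kind on the
# Riemann sphere: simple poles at +1 and -1 with residues +1 and -1.
def omega(z):
    return 1.0 / (z - 1.0) - 1.0 / (z + 1.0)

def residue(center, radius=0.5, m=2000):
    # numerical contour integral (1/(2 pi i)) * oint omega(z) dz around `center`
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / m)
    return np.sum(omega(z) * dz) / (2j * np.pi)

r_plus, r_minus = residue(1.0), residue(-1.0)
assert abs(r_plus - 1.0) < 1e-8 and abs(r_minus + 1.0) < 1e-8
assert abs(r_plus + r_minus) < 1e-8    # residues sum to zero
```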
We pick points P j , j = 1, . . . , g with P j ∈ Σ j . For each such choice we define a meromorphic differential ω P of the third kind as follows.
We use ∞ 1 to denote the point at infinity on the first sheet and ∞ 2 to denote the other point at infinity which is common to all three other sheets.
Definition 8.2. Let P j ∈ Σ j for j = 1, . . . , g. Then ω P is the meromorphic differential of the third kind on R which is uniquely determined by the following conditions (a) The meromorphic differential has simple poles at a j , b j , j = 1, . . . , N , at ±ic 2 (if c 2 > 0), at ±c 3 (if c 3 > 0), at the points P j , j = 1, . . . , g and at ∞ 2 . The residues at the finite branch points are equal to −1/2: (b) The meromorphic differential has vanishing A-periods in Cases I, II, IV, and V: A k ω P = 0, k = 1, . . . , g, (in Cases I, II, IV, and V). (8.11) In Case III all A-periods are vanishing, except that of A N/2 : (only in Case III). (8.12) A simple count shows that the residues of ω P add up to 0 and therefore the meromorphic differential ω P is indeed uniquely determined by the pole conditions and the A-period conditions.
If one or more of the P j 's coincide with a branch point, then the residue conditions (8.8) have to be modified appropriately. For example, if P j = a j then Res In this way, the meromorphic differential ω P depends continuously on the P j 's and is well defined for each choice of P j ∈ Σ j , j = 1, . . . , g.
The anti-holomorphic involution φ is used to map a meromorphic differential ω to a meromorphic differential φ # (ω) in an obvious way. If ω is equal to f j (z)dz for a meromorphic function f j on sheet j, then φ # (ω) is equal to f̄ j (z̄)dz on sheet j, where the bar denotes complex conjugation. A crucial property is that ω P is invariant under φ # . Then Proof. Since all the poles of ω P are invariant under the involution φ, the meromorphic differential φ # (ω P ) has the same poles and residues as ω P .
We have to show that their A-periods are the same. We have We have that φ(A k ) is homologous to A k in R, but in the process of deforming φ(A k ) to A k we pick up residue contributions from the poles of ω P .
For the cycles A k that are only on two sheets (which is the typical situation) we pick up a residue contribution from the two endpoints of a gap in the support of either µ 1 or µ 3 and from P k . As the deformation from φ(A k ) to A k will result in clockwise loops around these points, the total residue is since the A-period of ω P is zero for A-cycles that are only on two sheets. In Case III there is a cycle A N/2 which is on all four sheets. If we deform φ(A N/2 ) into A N/2 we pick up residue contributions from the endpoints b N/2 , a N/2+1 , −c 3 , c 3 and from P N/2 . Then we have residue 1/2 four times and −1 once, so that the total residue contribution is 1 (as the residue contribution comes from clockwise loops around these points). It follows that since the A N/2 period is −πi by definition (8.12). So the A-periods of φ # (ω P ) and ω P agree, and the lemma follows.
Proof. This follows as in [67]. The fact that the map is well-defined and continuous is proved as in [67,Proposition 2.3]. Due to the fact that the divisors g j=1 P j with P j ∈ Σ j are non-special, see Lemma 8.1, the argument in the proof of [67,Theorem 2.6] gives first the injectivity of Ψ. Then the invariance of domain argument of the same proof yields the surjectivity of Ψ.

Definition and properties of functions u j
Due to Proposition 8.4 there exists a choice of points P j , j = 1, . . . , g such that the corresponding meromorphic differential ω P satisfies where the equalities hold modulo 2πiZ. Note that ω P is varying with n. We consider n as fixed and work with ω P satisfying (8.15) throughout this section. Of course, ω P also satisfies the conditions given in Definition 8.2.
We are going to integrate ω P along paths on R that start from ∞ 1 (the point at infinity on the first sheet) and that on each sheet remain in the same quadrant. So the paths do not cross the contours Σ j , j = 0, . . . , g and also do not intersect the imaginary axis except along the cut S(σ 2 − µ 2 ) that connects the second and third sheets. There may be a choice in the cut that one takes when passing from the first to the second sheet. However, this will lead to the same value for the integral because of the vanishing of the integral of ω P over the cycles A k and φ(A k ), see (8.11). Note that the exceptional A-period (8.12) in Case III does not play a role here.
With this convention for the paths we define functions u j as follows.
Definition 8.5. For j = 1, 2, 3, 4 and z ∈ C \ (R ∪ iR) we define u j (z) as the Abelian integral Abusing the notation, we also write where z is considered as a point on the jth sheet. The path from ∞ 1 to z ∈ R j follows the convention described above, namely that on each sheet it stays in the same quadrant. Then the functions u j are defined and analytic on C \ (R ∪ iR) with the following jump properties. In what follows we write ≡ to denote equality up to integer multiples of 2πi.
unless P k is on the first sheet and x = π(P k ) unless P k is on the second sheet and x = π(P k ) (8.19) unless P k is on the third sheet and x = π(P k ) (8.20) unless P k is on the fourth sheet and x = π(P k ) where k = N/2 in Case III and k = N in Cases IV and V.
(e) On the imaginary axis we have Proof. This is a straightforward but rather tedious verification. All identities or equivalences come down to the calculation of a period of a closed loop on R for the meromorphic differential ω P .
For example, to prove (8.26) for x ∈ iR + , we note that where C is the cycle on R that is shown in Figure 8.4 for the Cases IV and V. This cycle can be deformed to a sum of −φ(A N ), the cycles −A j , j = 1, . . . , N − 1 and a closed loop around −ic 2 . This indeed leads to (8.26) since ω P has vanishing A-periods and φ(A) periods, and the only contribution comes from −ic 2 which gives us ±πi.
In Case III (not shown in the figure), the cycle C can be deformed into the cycles −A j , j = 1, . . . , N − 1, which include the exceptional cycle −A N/2 . It is now because of (8.12) that we obtain (8.26). The other relations follow in a similar way.
We state without proof the behavior of the functions u j near the branch points. They follow from residue conditions (8.8).  (d) If P k is on the jth sheet of the Riemann surface, then where k = 1, . . . , g.
If P k coincides with one of the branch points then the behavior should be modified. This should be obvious and we do not give details here.

Definition and properties of functions v j
The functions v j are the exponentials of the functions u j which we again consider as functions on C \ (R ∪ iR).
The jump properties of v j follow from Lemma 8.6. We state them in a vector form.
with a jump matrix J v that takes the following form.
(a) On the real axis the jump matrix has the block form for k = 0, . . . , N , and (b) On the imaginary axis the jump matrix has the block form with the 2 × 2 block Note that each of the blocks (J v ) j , j = 1, 2, 3 has determinant −1. From Lemma 8.7 and Definition 8.8 we obtain the behavior of the v j functions near the branch points and other special points.

The first row of M
We now define the entries in the first row of M .
Definition 8.11. We define for z in the first and third quadrants, −v 3 (z) for z in the second and fourth quadrants, (8.32) for Re z < 0. (8.33) It turns out that with this distribution of ± signs the row vector (M 1j ) has exactly the correct jump properties that are required in the RH problem for M .

The other rows of M
We will now construct the other rows of M out of the first row.
Lemma 8.13. The vector space of meromorphic functions on R (including the constant functions) whose divisor is greater than or equal to −(D + 3∞ 2 ) has dimension 4. Proof. Let us denote, for a positive divisor D′, the space of meromorphic functions on R whose divisor is greater than or equal to −D′ by L(D′). Let D = P 1 + · · · + P g . Any F ∈ L(D + 3∞ 2 ) has a pole at ∞ 2 of order at most 3. Therefore, if w is a local coordinate near ∞ 2 , we have for certain numbers f 1 , f 2 , f 3 . The kernel of the linear map is equal to the space L(D). Since D is non-special, the space L(D) contains only constant functions and therefore is of dimension 1. Therefore by the dimension theorem for linear maps However, by the Riemann-Roch theorem (see e.g. [53]), we also have This proves the lemma.
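Since the displayed formulas of the proof are not reproduced in this excerpt, here is a hedged sketch of the dimension count, in the notation D = P 1 + · · · + P g of the proof:

```latex
% Evaluation of the singular part at infinity_2: for F in L(D + 3*infty_2),
%   F = f_3 w^{-3} + f_2 w^{-2} + f_1 w^{-1} + O(1)   (w a local coordinate at infinity_2),
% and F |-> (f_1, f_2, f_3) is linear with kernel L(D). Hence
\[
  \dim L(D + 3\infty_2) \;=\; \dim L(D) + \operatorname{rank}
  \;\le\; 1 + 3 \;=\; 4,
\]
% using that L(D) consists of constants only (D is non-special), while the
% Riemann--Roch theorem gives the opposite bound:
\[
  \dim L(D + 3\infty_2) \;\ge\; \deg(D + 3\infty_2) - g + 1
  \;=\; (g+3) - g + 1 \;=\; 4 .
\]
```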
We continue to use the notation D = P 1 + · · · + P g . Let F j be a function in L(D + 3∞ 2 ). We use F jk to denote the restriction of F j to the kth sheet and we consider the row vector Note that the possible poles of F jk at the points π(P l ) are cancelled by the zeros of M 1k at the same points. Therefore the row vector remains bounded at these points and it has the same behavior near the branch points as the first row. It is easy to check that the row vector has a jump on R ∪ iR with jump matrix J M . In order to construct the other rows we therefore aim to find three functions F 2 , F 3 and F 4 in L(D + 3∞ 2 ) such that the corresponding row vectors also satisfy the asymptotic condition for the respective rows 2, 3 and 4 in the RH problem for M . This will be done in the next proposition.
The first condition (8.37) is satisfied if and only if F 2 (∞ 1 ) = 0. The other conditions determine the behavior of F 2 near ∞ 2 .
There is an expansion where w is the local coordinate near ∞ 2 which we choose to be equal to z −1/3 in the first quadrant of the second sheet. Its behavior in other quadrants and on other sheets is determined by analytic continuation. Thus as z → ∞ in the first quadrant. We also have as z → ∞ in the first quadrant with a nonzero first coefficient Then inserting (8.41) and (8.42) into the condition (8.38) we obtain a linear system of equations for the unknowns f 3 , f 2 , f 1 that has a unique solution. These three conditions together with the fact that F 2 (∞ 1 ) = 0 determine F 2 uniquely. Now it requires an independent check that the conditions (8.39) and (8.40) are satisfied as well with the same function F 2 , and also the analogous conditions that come from the asymptotic condition in the other quadrants. This then completes the construction of the second row of M .
The functions F 3 and F 4 are found in a similar way and they are used to construct the remaining rows of M .
From the construction it follows that the entries of M and M −1 are uniformly bounded in n away from the branch points and infinity. More precisely, we have the following proposition. (c) the third columns of M and M −t are bounded away from the points ±ic 2 , ±c 3 and ∞, with a bound that is uniform in n.
(d) the first columns of M and M −t are bounded away from the points ±c 3 and ∞, with a bound that is uniform in n.
Proof. We will only prove part (a) as the others follow by similar arguments. Let us start with the first column of M . From the structure of the RH problem we see that the entries of M in the first column are analytic in C \ [a 1 , b N ], bounded near ∞ and have an analytic continuation across each interval (a k , b k ) and (b k , a k+1 ). Hence the entries are bounded if we stay away from {a 1 , b 1 , . . . , a N , b N }. Moreover, M depends continuously on the parameters nα k modulo integers. By compactness of the parameter space, it then follows that we can choose the bound for the entries such that it holds uniformly in n.
As for the first column of M −t , we note that M −t satisfies a RH problem that has the same structure as the RH problem for M . Then the statement follows from the same arguments. Alternatively, the statement for M −t follows from the identity where M is the solution of the RH problem (8.1) but with parameters −α k instead of α k for k = 1, . . . , N . The identity (8.43) follows from the fact that both sides solve the same, uniquely solvable, RH problem.
The following corollary will be used in the next section. The global parametrix M will not be a good approximation to S near the branch points. Around the branch points we construct a local parametrix in a fairly standard way with the use of Airy functions. We will not give full details about the construction here but only give the relevant formulas with some comments.

Statement of local RH problems
Let BP be the set of branch points. Note that ±ic 2 and ±c 3 are branch points only if they are non-zero.
There is a possibility that ±c 3 coincides with one of the endpoints a k , b k of the support of µ 1 . This is a case that could be handled just as well but the formulas are slightly different and we prefer not to give full details in this case. Thus we assume that ±c 3 do not coincide with any of the endpoints a k , b k . We take a sufficiently small disk D p around each of the branch points p ∈ BP . The disks are mutually disjoint. We also make sure that the disks D a k and D b k are small enough such that they do not intersect the lips of the global lens around S(µ 3 ), and similarly that the disks D ±c 3 do not intersect the lips of the lenses around S(µ 1 ). Also the disks around ±ic 2 are small enough so that they do not intersect the lenses around S(µ 1 ) and S(µ 3 ).
We use D = ⋃ p∈BP D p to denote the union of the disks. Then we would like to find a parametrix P : D \ Σ S → C 4×4 that has the same jumps as S on the parts of Σ S in D.
There is a very minor complication here in case a k or b k belongs to R \ S(µ 3 ) = (−c 3 , c 3 ). Then the jump matrix J S for S has in its lower right block (J S ) 3 an off-diagonal entry that is exponentially small in a full neighborhood of a k or b k , see the second formula in (7.25). Being exponentially small, this entry will play no role in what follows. However, for the construction of the local parametrix it is more convenient to set this entry equal to 0. This is also what we did when defining the part (J M ) 3 in the jump matrix for M , see (8.4). Thus we use (J M ) 3 for the jump matrix (J P ) 3 on the real line in D a k and D b k , see (9.6) below. A similar thing happens in case ±c 3 ∈ R \ S(µ 1 ). Then there is an exponentially small entry in (J S ) 1 , see the second formula in (7.23), which we set equal to zero in the jump matrix for P .
Thus in D p where p is one of a k , b k , for k = 1, . . . , N , we take If c 3 > 0 then in D c 3 and D −c 3 , we have again the block structure (9.2), but now we put

Airy functions
Around each branch point the construction of P is essentially a 2 × 2 problem that can be solved in a standard way using Airy functions, see [33] for the 2 × 2 case and [20,45] for larger size RH problems.
The RH problem for Airy functions is the following. It is stated in terms of an auxiliary variable ζ, where the contour Σ A and the jump matrices J A are shown in Figure 9.1.
The solution A of the Airy RH problem is the main building block for the local parametrix P .
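The standard 2 × 2 Airy model problem is built from the functions Ai(ζ), Ai(ωζ), Ai(ω²ζ) with ω = e 2πi/3 ; the consistency of its cyclic jump conditions rests on the classical connection identity Ai(ζ) + ωAi(ωζ) + ω²Ai(ω²ζ) = 0 (a general fact about Airy functions, not specific to this paper). A quick numerical check:

```python
import numpy as np
from scipy.special import airy

# Connection identity for the Airy function (DLMF 9.2.12):
#   Ai(z) + omega*Ai(omega*z) + omega^2*Ai(omega^2*z) = 0,  omega = e^{2 pi i/3}.
# scipy.special.airy(z) returns (Ai, Aip, Bi, Bip); index [0] selects Ai.
omega = np.exp(2j * np.pi / 3)

for z in (0.3 + 0.4j, -1.2 + 0.1j, 2.0 - 0.5j):
    total = airy(z)[0] + omega * airy(omega * z)[0] + omega**2 * airy(omega**2 * z)[0]
    assert abs(total) < 1e-8
```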

Parametrix P in D b k
In the neighborhood D b k of a right endpoint b k of the support of µ 1 the local parametrix P takes the form (9.10) P (z) = M (z) To describe the 2 × 2 block P 1 (z) we use the solution A(ζ) of the RH problem (9.8), the functions λ 1 and λ 2 that come from (4.35) (and that also appear in the jump matrix (J P ) 1 = (J S ) 1 , see (7.23)), and a function (9.11) f (z) = ( (3/4)(λ 1 (z) − λ 2 (z)) ± (3/2)πiα k )^{2/3} for ± Im z > 0, z ∈ D b k \ R.
It turns out that f has an analytic extension to D b k which maps b k to 0 (to check this one uses the jump properties (4.37)-(4.38) for λ 1 and λ 2 , among other things). Shrinking D b k if necessary, one has that ζ = f (z) is a conformal map from D b k to a convex neighborhood of ζ = 0 with f (b k ) = 0, f ′ (b k ) > 0 and which is real for real z. We adjust the lens around (a k , b k ) in such a way that the lips of the lens within D b k are mapped by f into the rays arg ζ = ±2π/3.
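A hedged sketch of why f is conformal at b k (assuming the standard square-root vanishing of the density of µ 1 at the edge, which is part of the regularity assumption; the constant terms ±(3/2)πiα k are suppressed here, as they cancel when the two half-plane definitions are matched across the cut):

```latex
% lambda_1 - lambda_2 vanishes like a 3/2 power at the edge b_k:
\[
  \lambda_1(z) - \lambda_2(z) \;=\; c_k\,(z - b_k)^{3/2}\,\bigl(1 + O(z - b_k)\bigr),
  \qquad c_k > 0,
\]
% so the 2/3 power removes the branch point and leaves a simple zero:
\[
  f(z) \;=\; \Bigl(\tfrac34\bigl(\lambda_1(z) - \lambda_2(z)\bigr)\Bigr)^{2/3}
       \;=\; \bigl(\tfrac34 c_k\bigr)^{2/3}\,(z - b_k)\,\bigl(1 + O(z - b_k)\bigr),
\]
% giving f(b_k) = 0 and f'(b_k) > 0, i.e. a conformal map near b_k.
```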

Parametrix P in D a k
The construction of P in a neighborhood D a k of a left endpoint a k of S(µ 1 ) is similar. Here we have a map

(9.13) f (z) = ( (3/4)(λ 1 (z) − λ 2 (z)) ± (3/2)πiα k−1 )^{2/3} if a k > 0,
       f (z) = ( (3/4)(λ 1 (z) − λ 2 (z)) ± (3/2)πiα k−1 ∓ (1/2)πi )^{2/3} if a k < 0,

for ± Im z > 0, z ∈ D a k \ R, with f (a k ) = 0, f ′ (a k ) < 0. If necessary, we shrink the disk D a k and adjust the lips of the lens around (a k , b k ) such that ζ = f (z) is a conformal map in D a k onto a convex neighborhood of ζ = 0 that maps Σ S ∩ D a k into Σ A . Then the local parametrix takes the form (9.14) P (z) = M (z) for z ∈ D a k \ Σ S , ± Im z > 0.
The fourth roots f (z) ±1/4 are taken to be analytic in D a k \ [a k , b k ] and positive for real z < a k , z ∈ D a k .

Parametrix P in D ±ic 2
This case is only relevant if c 2 > 0 and so we assume c 2 > 0. The parametrix in D ±ic 2 takes the form for z ∈ D ic 2 \ iR, ± Re z > 0, which (with an appropriate understanding of the 3/2-power) has an extension to a conformal map on D ic 2 with f (ic 2 ) = 0 and f ′ (ic 2 ) ∈ iR + . Then f (z) > 0 for z = iy ∈ D ic 2 ∩ iR, y < c 2 . We take P 2 in D ic 2 as for z ∈ D ic 2 \ Σ S .
The construction of P in D −ic 2 is very similar. By symmetry we can also obtain it from P in D ic 2 by means of the formula

Parametrix P in D ±c 3

This case is only relevant if c 3 > 0 and so we assume c 3 > 0. The parametrix P in D ±c 3 takes the form (9.19) P (z) = M (z) with a 2 × 2 block P 3 (z).
It is constructed with the function (9.20) for ± Im z > 0, z ∈ D −c 3 \ R.
Then f has an analytic extension to D −c 3 which maps −c 3 to 0. Shrinking D ±c 3 if necessary, one has that ζ = f (z) is a conformal map from D −c 3 to a neighborhood of ζ = 0 with f (−c 3 ) = 0 and f ′ (−c 3 ) > 0. We adjust the lens around (−∞, −c 3 ) in such a way that the lips of the lens within D −c 3 are mapped into the rays arg ζ = ±2π/3. Then P 3 is given by For the construction in D c 3 we can use the conformal map f with the same definition as in (9.20), but now considered in a neighborhood of c 3 .
Then we have f (c 3 ) = 0 and f ′ (c 3 ) < 0, and after adjusting the lenses we then define P 3 as

Final transformation
Having the global parametrix M and the local parametrix P we are ready for the fifth and final transformation. Then R is defined and analytic outside the union of Σ S and the boundaries of the disks around the branch points. However, since the jumps of S and M agree on S(µ 1 ) ∩ S(µ 3 ) and on S(σ 2 − µ 2 ), the matrix R has an analytic continuation across the parts of these supports that are outside the disks. We also have that the jumps of S and P agree inside the disks D ±ic 2 and so R has an analytic continuation in the interior of these two disks. In the disks D a k , D b k and D ±c 3 the jumps of S and P may not be exactly the same. They could differ by an exponentially small entry on the real line inside these disks. The jumps on the lips of the lenses are the same inside the disks, and so R has an analytic continuation across these lenses, but R could have an exponentially small jump on the real line inside the disks.
The result is that R has an analytic continuation to C \ Σ R for a certain contour Σ R and that R satisfies the following RH problem.

J R (z) =
  M (z)P (z) −1 , for z ∈ ⋃ p∈BP ∂D p ,
  P − (z)J S (z)P + (z) −1 , for z ∈ Σ R ∩ ⋃ p∈BP D p ,
  M − (z)J S (z)M + (z) −1 , for z ∈ Σ R \ ⋃ p∈BP D p .
All of the jump matrices are close to the identity matrix as n → ∞. Indeed, because of the matching condition in (9.7) we have J R (z) = I 4 + O(n −1 ) as n → ∞ uniformly for z ∈ ⋃ p∈BP ∂D p . The other jumps are exponentially close to the identity matrix as n → ∞ with a bound that improves as z → ∞. Indeed we have J R (z) = I 4 + O(exp(−cn(|z| + 1))) as n → ∞ uniformly for z ∈ Σ R \ ⋃ p∈BP ∂D p . This follows as in [45, section 8.4]. Observe that from the definition (9.23) and the asymptotic behaviors of S and M as given in (7.20) and (8.1) we first find that R(z) = I 4 + O(z −1/3 ) as z → ∞. However, J R − I 4 is also exponentially decaying as z → ∞ with n fixed, and therefore the better bound R(z) = I 4 + O(z −1 ) in (9.24) indeed holds.
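The passage from J R ≈ I 4 to R ≈ I 4 is the standard small-norm argument for RH problems; a hedged sketch (suppressing the precise L²- and L∞-norms on Σ R ):

```latex
% Write Delta = J_R - I_4, so R_+ = R_-(I_4 + Delta) on Sigma_R and R(infty) = I_4.
% Then R satisfies the singular integral equation
\[
  R(z) \;=\; I_4 + \frac{1}{2\pi i} \int_{\Sigma_R} \frac{R_-(s)\,\Delta(s)}{s - z}\,ds,
\]
% and boundedness of the Cauchy projection C_- on L^2(Sigma_R) gives, for
% \|Delta\| small enough,
\[
  \|R - I_4\| \;\le\; \frac{C\,\|\Delta\|}{1 - C\,\|\Delta\|} \;=\; O(n^{-1}),
  \qquad n \to \infty .
\]
```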
From this we conclude as in [45]: Proposition 9.2. There is a constant C > 0 such that for every n, ‖R(z) − I 4 ‖ ≤ C/(n(|z| + 1)) uniformly for z ∈ C \ Σ R .
This concludes the steepest descent analysis of the RH problem for Y . Proof. By (9.23) and the fact that x is outside the disks, we have (9.26) S + (y) −1 S + (x) = M + (y) −1 R + (y) −1 R + (x)M + (x), if y is close enough to x. Note that R may not be analytic in a neighborhood of x. However, by deforming contours into the lower half plane we see that R does have an analytic continuation to a neighborhood of x where the same estimate of Proposition 9.2 is valid. Now write R + (y) −1 R + (x) = I + R + (y) −1 (R + (x) − R + (y)). Then by Cauchy's theorem for some r > 0. Combining this with Proposition 9.2 leads to R + (y) −1 R + (x) = I + O((x − y)/n) as y → x.
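The elided application of Cauchy's theorem presumably takes the following standard form, with r > 0 a radius such that R is analytic in the disk |s − y| ≤ r:

```latex
% The constant I_4 integrates to zero against (s-y)^{-2}, so
\[
  R'(y) \;=\; \frac{1}{2\pi i} \oint_{|s-y| = r} \frac{R(s) - I_4}{(s-y)^2}\, ds,
  \qquad\text{hence}\qquad
  \|R'(y)\| \;\le\; \frac{1}{r}\,\sup_{|s-y| = r} \|R(s) - I_4\| \;=\; O(n^{-1})
\]
% by Proposition 9.2, which gives R_+(x) - R_+(y) = O((x-y)/n) as y -> x.
```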
Now inserting this in (9.26) and using Corollary 8.16 gives the statement.
We follow the effect of the transformations Y → X → U → T → S → R on the kernel K. Then by the transformation X → U given in (6.1) we get The transformation U → T given by (7.6)-(7.8) only acts on the lower right 2 × 2 block, and does not affect the expression (9.30) for the correlation kernel. We get 2πi(x − y) 0 e n(λ 2,+ (y)−λ 1,+ (y)) 0 0