Extracting the Dark Matter Mass from Single Stage Cascade Decays at the LHC

We explore a variant of the $M_{T2}$ kinematic variable which enables dark matter mass measurements for simple, one-stage cascade decays. This will prove useful for constraining a subset of supersymmetric processes, or a class of leptophilic dark matter models, at the LHC. We investigate the statistical reach of these measurements and discuss which sources of error have the largest effects. For example, we find that using only single stage cascade decays with initial state radiation, a measurement of a 150 GeV dark matter candidate can be made to $O(10\%)$ for a parent mass of 300 GeV with a production cross section of 100 fb and 100 fb$^{-1}$ of integrated luminosity.


Introduction
The Large Hadron Collider (LHC) is operational, and taking data, with expected gains in energy and luminosity over the next few years. One important mission for the LHC will be to create dark matter (DM) which appears as missing energy in the reconstructed event. Following a significant missing energy observation, the challenge will be to measure the properties of the DM candidate with sufficient accuracy to compare against cosmological and astrophysical constraints, such as the observed DM relic abundance and direct and indirect detection experiments. Thus, determining the mass of the DM particle will have tremendous ramifications for astrophysics and cosmology.
Making DM mass measurements at the LHC, for example in models of supersymmetry (SUSY) or Universal Extra Dimensions (UED), is a difficult problem, since the DM particle is typically produced in pairs as products of complicated decay chains of parent particles. In fact, the number of states participating in the event can vary dramatically depending on the specific model. The identities, couplings, and masses of the particles involved in these processes may be unknown. Let $n$ be the number of steps in the cascade between the production of the parent and the appearance of the DM child in the event. For $n > 1$, if all visible particles in the decay are detected, the masses of the parent, intermediate, and visible and invisible child particles can, in principle, be determined uniquely (see for example [1] for a discussion). The simplest case of $n = 1$ proves to be more challenging. In Fig. 1 we show a schematic of an $n = 1$ process. We have also included the possibility that additional visible states are produced before the parents, which we refer to as Up-Stream Radiation (USR). In Sec. 2 below, we will discuss the relevance of USR for DM mass determination. Refs. [2,3] also study $n = 1$ decay chains. The motivation for studying DM mass determination in $n = 1$ processes is manifold; we mention two reasons here. First, within SUSY or UED, $n = 1$ processes with additional USR can be important. For example, the decays $\tilde{\ell}^\pm \to \ell^\pm \tilde{\chi}^0$ with initial state radiation, and $\tilde{q} \to j\,\tilde{\chi}^\pm \to j\,\ell^\pm \tilde{\nu}$ (for a sneutrino lightest SUSY particle), are of the type shown in Fig. 1, where $\tilde{\ell}^\pm$ is a slepton, $\ell^\pm$ is a lepton, $\tilde{\chi}^0$ is a neutralino, $\tilde{q}$ is a squark, $j$ is a jet, $\tilde{\chi}^\pm$ is a chargino and $\tilde{\nu}$ is a sneutrino. Although higher $n$ chains may also be present in many models, the combinatoric backgrounds can make mass extraction in such decay chains complicated. By contrast, $n = 1$ events are clean, involving only two visible objects plus missing transverse momentum (hereafter referred to as missing energy).
Also, since one will potentially observe n = 1 chains if one of these theories is correct, it will be useful to extract as much information as possible from these signals. Second, the observations of astrophysical anomalies, e.g. PAMELA [4] and Fermi [5], have led many to conjecture that the DM is leptophilic. Models which generate such signals can, for example, be constructed by connecting the DM to the lepton asymmetry [6], or by positing that mixed sneutrinos constitute the DM [7]. The simplest such dark sectors involve only a new mediator state and the leptophilic DM state, so that the DM is produced at a collider through the leptonic decay of the mediator. Hence, the study of these processes is well motivated. The reader is referred to Appendix A for more detail on models where n = 1 decay chains with USR are important.
As shown in Appendix B, the phase space for $n = 1$ processes without USR depends on the combination $\mu = (m_p^2 - m_c^2)/(2 m_p)$ and weakly on $\hat{s}/(4 m_p^2)$, where $m_p$ is the parent mass, $m_c$ is the child mass and $\sqrt{\hat{s}}$ is the partonic center-of-mass energy. Hence, extracting $\mu$ is simple, while measuring $m_p$ proves to be more challenging. Current experimental methods for mass determination in events with missing energy rely on matrix element techniques. Here, one begins by assuming a model, which implies a matrix element with additional dependence on $m_p$. Then, by fitting measured differential distributions, one can extract, in addition to the combination $\mu$, the overall mass scale $m_p$ by observing how quickly the event rate falls off with $\sqrt{\hat{s}}$.
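The degeneracy implied by $\mu$ can be made concrete with a short numerical sketch (the function name `mu` is ours, introduced only for illustration):

```python
def mu(m_parent, m_child):
    """The combination mu = (m_p^2 - m_c^2)/(2 m_p) that controls
    the n = 1 phase space for massless visible particles."""
    return (m_parent**2 - m_child**2) / (2.0 * m_parent)

# The benchmark used later in the text: m_p = 300 GeV, m_c = 150 GeV.
print(mu(300.0, 150.0))        # 112.5 GeV
# A lighter spectrum with the same mu -- nearly identical kinematic shapes,
# which is why the overall scale m_p is hard to extract from shapes alone:
print(mu(250.0, 6250.0**0.5))  # also 112.5 GeV
```

Two quite different spectra thus produce nearly identical normalized distributions, illustrating why a separate handle on the overall scale is needed.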
In this paper we explore a different technique, in which the overall mass scale is determined from the transverse boosts given to the parent particles by USR. Since the boost depends only on $m_p$, i.e. it is independent of the matrix element, the result is a model-independent method for determining the overall mass scale. We explore a particular $M_{T2}$ variant proposed in [8], which utilizes events with USR to separately extract the parent and child masses. We carry out the first full-scale simulation of these $M_{T2}$-based variants for dark matter mass determination, including detector effects, emphasizing the size of statistical errors and discussing various difficulties this method presents.
The outline of this paper is as follows. We begin with a discussion of the $M_{T2}$ variable and the possibility of extracting the parent and child masses separately in $n = 1$ events with USR. Next we turn to a numerical analysis of this $M_{T2}$-based method and its efficiency in DM mass determination for a given number of $n = 1$ events at the LHC. We then discuss additional sources of error beyond those explicitly contained in the previous section. Finally, we conclude. In Appendix A we outline some example models where this method would be relevant, and in Appendix B we show how the phase space for $n = 1$ processes depends on the $M_{T2}$ endpoint.

$M_{T2}$ Preliminaries
We begin by reviewing the $M_{T2}$ variable [9]. Since the LHC is a hadron collider, the initial parton longitudinal momenta are unknown. Hence, only the total transverse momentum is constrained to be zero, and it thus becomes necessary to use transverse variables, such as $M_{T2}$, a generalization of the transverse mass (see also [11,12]). For the class of processes studied here (see Fig. 1), there will be two missing particles in each event, so that the 4-momenta of the invisible child particles cannot be determined; only the total missing transverse momentum, $\vec{p}_T^{\,\rm miss}$, is measured. $M_{T2}$ handles this ambiguity by minimizing over all ways of splitting $\vec{p}_T^{\,\rm miss}$ between the two hypothesized children,
$$M_{T2}(\tilde{m}_c) \equiv \min_{\vec{q}_{T1} + \vec{q}_{T2} = \vec{p}_T^{\,\rm miss}} \Big[ \max \big\{ M_T(\vec{p}^{\,v}_{T1}, \vec{q}_{T1}; \tilde{m}_c),\; M_T(\vec{p}^{\,v}_{T2}, \vec{q}_{T2}; \tilde{m}_c) \big\} \Big],$$
where $\tilde{m}_c$ is the trial child mass and $M_T^2 = m_v^2 + \tilde{m}_c^2 + 2\,(E_T^v E_T^q - \vec{p}_T^{\,v} \cdot \vec{q}_T)$ is the transverse mass of a single branch. Once the USR is included, Eq. (2.2) no longer obtains; instead, the total momentum of the visible and invisible particles must be balanced against the momentum of the radiation, and the $M_{T2}$ endpoint will depend on the upstream momentum [1,3]. The functional form of the $M_{T2}$ endpoint depends on whether the test mass is larger or smaller than the true DM mass. Hence, there is a discontinuity in the derivative of Eq. (2.8) with respect to the trial child mass above and below the true DM mass, $m_c$, giving rise to a kink [2,13] in the $M_{T2}^{\max}(\tilde{m}_c, P_T)$ curve which can be utilized to extract additional information beyond Eq. (2.5). In principle, given an event with a specific value of the USR $P_T$, one can now extract the parent and child masses. However, since one must do this analysis for a particular bin in $P_T$, there is competition between the size of the bin (small bins imply small statistical samples) and the accuracy of the measurement.
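The minimization defining $M_{T2}$ can be carried out numerically. The following is a minimal sketch (our own toy implementation, not the code used for the simulations in this paper), for massless visible particles and purely transverse input momenta; the endpoint event below is built by hand for the $m_p = 300$ GeV, $m_c = 150$ GeV benchmark:

```python
import numpy as np
from scipy.optimize import minimize

def m_transverse(pt_vis, qt, m_child):
    # Branch transverse mass for a massless visible particle:
    # M_T^2 = m_child^2 + 2 (E_T^vis E_T^inv - pT_vis . qT)
    et_vis = np.hypot(pt_vis[0], pt_vis[1])
    et_inv = np.sqrt(m_child**2 + qt[0]**2 + qt[1]**2)
    mt_sq = m_child**2 + 2.0 * (et_vis * et_inv
                                - pt_vis[0] * qt[0] - pt_vis[1] * qt[1])
    return np.sqrt(max(mt_sq, 0.0))

def mt2(pt_vis1, pt_vis2, pt_miss, m_child_trial):
    # Minimize the larger branch mass over splittings qT1 + qT2 = pT_miss.
    def worst_branch(q1):
        q2 = (pt_miss[0] - q1[0], pt_miss[1] - q1[1])
        return max(m_transverse(pt_vis1, q1, m_child_trial),
                   m_transverse(pt_vis2, q2, m_child_trial))
    x0 = np.array([0.5 * pt_miss[0], 0.5 * pt_miss[1]])
    res = minimize(worst_branch, x0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxfev": 20000})
    return float(res.fun)

# Endpoint configuration: both visibles parallel, each carrying
# pT = (m_p^2 - m_c^2)/(2 m_p) = 112.5 GeV, children recoiling opposite.
v1 = v2 = (112.5, 0.0)
miss = (-225.0, 0.0)
print(mt2(v1, v2, miss, 150.0))  # ~300 GeV = m_p
```

Evaluated at the true child mass, this endpoint event saturates $M_{T2}^{\max} = m_p$; smaller trial masses give smaller $M_{T2}$ values, as the kink discussion requires.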
Another method was proposed in [8], which sidesteps the problem of binning by utilizing the whole range of $P_T$. From Eqs. (2.5) and (2.8), it can be seen that $M_{T2}^{\max}$ is unchanged by the effects of the $P_T$ when $\tilde{m}_c = m_c$. Furthermore, it has been shown [8] that $M_{T2}^{\max}(\tilde{m}_c, P_T) \geq M_{T2}^{\max}(\tilde{m}_c, 0)$, where the equality only holds when $\tilde{m}_c = m_c$. Thus one can construct a new variable [8],
$$N(\tilde{m}_c) \equiv \sum_{\rm all\ events} \theta\!\left( M_{T2}^{\rm event}(\tilde{m}_c) - M_{T2}^{\max}(\tilde{m}_c, 0) \right),$$
where $M_{T2}^{\rm event}(\tilde{m}_c)$ is the measured value of $M_{T2}(\tilde{m}_c)$. It is this variable we will be minimizing to find the correct child mass. In Fig. 2, we plot $N(\tilde{m}_c)$ vs. $\tilde{m}_c$ for $m_p = 300$ GeV and $m_c = 150$ GeV. Since the shape is "bowl"-like, we refer to this construction as an $M_{T2}$ bowl. Unless otherwise specified, all events were simulated with the MadGraph 4.4 event generator [14], showered by PYTHIA 6.4 [15], and run through the detector simulation software PGS 3.3 [16]. Note that we use the MadGraph default settings, which define a lepton as having $p_T > 10$ GeV and a jet as having $p_T > 20$ GeV.
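The counting behind the bowl can be sketched as follows. We use the zero-USR endpoint for massless visible particles, $M_{T2}^{\max}(\tilde{m}_c, 0) = \mu + \sqrt{\mu^2 + \tilde{m}_c^2}$, with $\mu$ taken in practice from the measured endpoint at zero trial mass; the toy per-event $M_{T2}$ values below are invented for illustration, not simulated events:

```python
import math

def endpoint(mtilde_c, mu):
    """Zero-USR M_T2 endpoint for massless visible particles:
    mu + sqrt(mu^2 + mtilde_c^2), with mu = (m_p^2 - m_c^2)/(2 m_p)."""
    return mu + math.sqrt(mu * mu + mtilde_c * mtilde_c)

def n_bowl(mt2_values, mtilde_c, mu):
    """Count events whose M_T2 exceeds the zero-USR endpoint.
    With USR present, this count vanishes only at the true child mass."""
    cap = endpoint(mtilde_c, mu)
    return sum(1 for v in mt2_values if v > cap)

# mu = 112.5 GeV corresponds to m_p = 300 GeV, m_c = 150 GeV, and the
# endpoint evaluated at the true child mass recovers the parent mass:
print(endpoint(150.0, 112.5))                        # 300.0 = m_p
print(n_bowl([250.0, 290.0, 305.0], 150.0, 112.5))   # 1
```

Scanning `n_bowl` over $\tilde{m}_c$ for a real event sample traces out exactly the bowl shape of Fig. 2, with its minimum at $\tilde{m}_c = m_c$.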

Mass Determination from $M_{T2}$ Bowls
In this section we will calculate the statistical errors for child mass determination with $M_{T2}$ bowls. Clearly, Eq. (2.9) only depends on the kinematics of the event, i.e. it is independent of the quantum numbers, including the spin, of the underlying particles. Then, up to small corrections due to the steepness of the $M_{T2}$ distribution about this endpoint, there are only $O(1)$ differences in the bowls around the minimum for different parent spins. Hence, we can study the effectiveness of this variable for a wide variety of models by scanning only over the masses of the parent and child particles. We take benchmark parent masses of 100 GeV, 300 GeV and 500 GeV, with a range of child masses. For reference we provide the overall cross sections for these benchmark models in Table 1, where we have assumed that the production occurs via electroweak processes, including the effects of QCD ISR. Neglecting diagrams which involve additional new-physics states, the overall rates only depend on the spin of the parent, up to $O(1)$ factors due to the choice of $SU(2) \times U(1)$ representation. For reference, the scalar example process is $p\,p \to \tilde{\ell}^+ \tilde{\ell}^- \to \ell^+ \ell^-\, \tilde{\chi}^0 \tilde{\chi}^0$, where $\tilde{\ell}^\pm$ is a slepton and $\tilde{\chi}^0$ is the lightest neutralino; this is the process we simulate for our benchmarks with QCD ISR. A fermionic example process is $p\,p \to \tilde{\chi}^+ \tilde{\chi}^- \to \ell^+ \ell^-\, \tilde{\nu} \tilde{\nu}^*$, where $\tilde{\chi}^\pm$ is a chargino and $\tilde{\nu}$ is a sterile sneutrino. For some details of these and other models which have $n = 1$ processes, see Appendix A. Our results below will be given in terms of the number of events before cuts, so Table 1 can be used to estimate the reach in actual models.

  $m_p$      $\sigma_{\rm scalar}$     $\sigma_{\rm fermion}$
  100 GeV    0.4 pb                    20 pb
  300 GeV    $9 \times 10^{-3}$ pb     0.4 pb
  500 GeV    $10^{-3}$ pb              $6 \times 10^{-2}$ pb

Table 1: Cross sections for electroweak pair production of parent particles with various masses and spins, including the effects of QCD ISR. We neglect any t-channel processes involving additional states.
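Table 1 translates directly into expected pre-cut event counts via $N = \sigma \cdot L_{\rm int}$; a trivial helper (our own, for illustration) makes the conversion explicit:

```python
def n_events(sigma_pb, lumi_fb_inv):
    """Expected number of events before cuts: cross section times
    integrated luminosity, with the unit conversion 1 pb = 1000 fb."""
    return sigma_pb * 1000.0 * lumi_fb_inv

# Fermionic parents with m_p = 300 GeV (Table 1) at 100 fb^-1:
print(n_events(0.4, 100.0))   # ~40,000 events before cuts
# The abstract's reference point, a 100 fb cross section at 100 fb^-1:
print(n_events(0.1, 100.0))   # ~10,000 events before cuts
```

Comparing these counts against the statistical error bars of Figs. 5-8 gives a quick estimate of the reach for a given model.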
There are also models which have more complicated decay chains but can be interpreted as $n = 1$ processes with additional USR. For example, one can have new colored objects which decay to jets and the parent particle. As long as the USR can be distinguished from the decay products of the parent, our method is applicable. This improves the prospects for the method dramatically, since the overall rate increases due to colored production instead of electroweak production, and additionally the majority of events will have very hard USR $P_T$. Hence, we also choose a set of benchmark models with colored objects up-stream, with masses $m_{\rm col} = 600$ GeV, 1000 GeV and 1400 GeV, where $m_{\rm col}$ is the mass of the colored state which decays to the parent particle and jets. The example process we simulate for these benchmarks is $p\,p \to \tilde{q}\,\tilde{q} \to j\,j\,\tilde{\chi}^+ \tilde{\chi}^- \to j\,j\,\ell^+ \ell^-\, \tilde{\nu} \tilde{\nu}^*$, where $\tilde{q}$ is a squark, $\tilde{\chi}^\pm$ is a chargino and $\tilde{\nu}$ is a sterile sneutrino. While one might be able to use additional handles by viewing such events as $n = 2$ processes (instead of $n = 1$ with USR), here we wish to examine only the effect of harder USR from colored particle decay on the error for DM mass determination in $n = 1$ processes. For additional models of $n = 1$ processes which can be produced in the decays of colored states, see Appendix A.
As discussed above, the additional radiation shifts all events, including those near the $M_{T2}$ endpoint. For reference, the radiation distributions for our benchmark models are shown in Figs. 3 and 4. From Eq. (2.8), the correction to the $M_{T2}$ endpoint due to USR is of the form $P_T/m_p$. Hence, the $P_T$ distribution of the jets determines how well the parent and child masses can be extracted separately. From Fig. 3, we see that heavier parents lead to harder $P_T$ distributions, due to the larger recoil occurring in the production of a heavier state. However, since the correction to $M_{T2}$ goes as $1/m_p$, this enhancement is tempered by the parent mass. In addition, heavier parents have smaller production cross sections (see Table 1). Hence, assuming they can be seen above the backgrounds, lower-mass parent states give rise to better-defined bowls. The trade-off between background rejection, which is optimized for high masses, and the quality of the $M_{T2}$ bowls, which is optimized for low masses due to the dependence on the ISR, leads to a sweet spot in the range of $O(200\ {\rm GeV})$ to $O(500\ {\rm GeV})$, with significant dependence on the spin of the parent. In the cases with colored states upstream this tension is alleviated, since the $P_T$ distributions are harder and the production cross sections larger, as in Fig. 4. The effects of the radiation on the $M_{T2}$ endpoint are shown by plotting $N(\tilde{m}_c)$ as a function of $\tilde{m}_c$ in Fig. 2, for 50,000 smuon pair production events with two muons and missing energy, with no background events (also see Fig. 11). As we will show in the next section, the backgrounds can be very efficiently cut away and will be insignificant near the $M_{T2}$ endpoint (see Sec. 4.2). In what follows, we will present statistical error bars on the DM mass determination using the $M_{T2}$ bowl and will discuss in detail various sources of error and their effect on this analysis.

Statistical Analysis of $M_{T2}$ Bowls
Contributions to adjacent bins in the $M_{T2}$ bowls from the same events imply that it is inappropriate to use simple $\sqrt{N}$ statistics in computing errors: removing one event from the data sample can, in principle, remove one entry from every bin of the distribution. Therefore, we utilize the well-known "bootstrapping" method for the statistical error analysis, as follows. We begin by generating a sample of $O(100{,}000)$ signal events (we take $\sqrt{s} = 14$ TeV). From these events, we make 100 independent random selections of $N_{\rm events}$ events. For each of these selections we calculate $N(\tilde{m}_c)$ using Eq. (2.11). This gives us a random sampling of bowls for a given number of events. Since there is often a degeneracy of minima for each of these random bowls, especially for a low number of events, we take the geometric mean of the multiple minima to define an average minimum for each bowl. Note that we do this assuming the theoretical value of $M_{T2}^{\max}$ (see Sec. 4.3). Finally, we find the mean and standard deviation of these 100 average minima. For the standard deviation we use the formula $\sqrt{\sum_i (x_i - x_{\rm mean})^2/(N-1)}$ and checked that this corresponds to the 1-$\sigma$ error for a Gaussian distribution to good approximation.
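The resampling procedure can be sketched schematically as follows. This is a toy version (the sample mean stands in for the bowl-minimum finding, and all numbers are illustrative), but the structure, i.e. repeated random subsets of a fixed parent sample and the $N-1$ standard deviation of the per-subset estimates, matches the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the O(100,000) generated signal events.
full_sample = rng.normal(loc=150.0, scale=10.0, size=100_000)

def resampled_error(data, n_events, n_trials=100):
    """Draw n_trials random subsets of size n_events (without replacement)
    and return the mean and standard deviation (N-1 denominator) of a toy
    estimator evaluated on each subset."""
    estimates = np.array([
        rng.choice(data, size=n_events, replace=False).mean()
        for _ in range(n_trials)
    ])
    return estimates.mean(), estimates.std(ddof=1)  # ddof=1 -> /(N-1)

central, error = resampled_error(full_sample, n_events=1000)
```

For this toy estimator the spread of the 100 estimates reproduces the familiar $\sigma/\sqrt{N_{\rm events}}$ scaling; for the actual bowl minima the scaling is slower, as discussed below, because adjacent bins are correlated.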
This method allows for a statistical sampling of the distribution of possible bowls for a given number of events. We present our results as a function of $N_{\rm events}$, the number of events before any cuts are made. Note that the events which contribute to the bowl have very special kinematics which allow them to go beyond $M_{T2}^{\max}$; the overwhelming majority of events have no bearing on the mass determination. Hence, cuts designed to remove backgrounds will not cut away the special events which contribute near the minimum of the $M_{T2}$ bowl, where the DM mass determination occurs. This is an expectation we check explicitly in the next section.* Also note that by working with the mean we will systematically underestimate the DM mass, due to the asymmetric shape of the bowl. This asymmetry is due to the shape of the $M_{T2}$ distribution near the endpoint as a function of $\tilde{m}_c$: the slope becomes steeper as $\tilde{m}_c$ is taken larger. The events used for the bowls were generated using the PGS detector simulator, so they do include detector effects, which also add to the consistent underestimates. As we discuss in Sec. 4.1, detector simulations must be utilized to determine the required correction to account for this offset. Further sources of error are discussed below in Sec. 4.

*As shown in Sec. 4.2, this assumption fails for the lightest parent benchmark, $m_p = 100$ GeV, where the cuts do degrade the bowl.
In Figs. 5-8, we show the statistical error bars for the DM mass determination for a given parent and child mass combination, as a function of the number of events before cuts. Note that for a given child mass, the error bars grow smaller as the DM mass approaches the parent mass, due to the width of the minimum of the bowl: the minimum becomes more well-defined as the $M_{T2}$ distribution becomes steeper. The error bars also shrink as $N_{\rm events}$ grows, but not as quickly as $1/\sqrt{N_{\rm events}}$, because events contribute to multiple bins, so the errors from adjacent bins are correlated. Also notice that the error bars in Fig. 8 are much smaller for a given $N_{\rm events}$ than those in Fig. 6 for $m_c = 150$ GeV, and are smaller still for larger values of $m_{\rm col}$. This is due to the enhanced $P_T$ of the USR, as shown by comparing Figs. 3 and 4. In what remains we will discuss the various additional sources of error and will argue to what degree we expect them to degrade the results.

Figure 8: The colored states are squarks which produce chargino parents and jets. The parent mass is 300 GeV and the child mass is 150 GeV in all three cases. The masses of the colored objects are 1400 GeV (green), 1000 GeV (blue) and 600 GeV (red), from top to bottom. As explained in the text, we find that cuts designed to eliminate the background will not change these results.

Sources of Error
The results of Figs. 5-8 only incorporate statistical and detector effects. In this section we argue that the errors we have included in our analysis are a realistic estimate of the precision with which the DM mass can be extracted from simple cascade decays. We discuss the additional sources of error in turn below.

Detector Effects
With the inclusion of detector effects, the events at the $M_{T2}$ endpoint become smeared out. This implies that some events which do not have the correct kinematics to contribute to the bowl can nonetheless have $M_{T2} > M_{T2}^{\max}$, which degrades the minimum of the bowl. Since the $M_{T2}$ distribution is steeper for larger test masses, this degradation tends to produce a larger underestimate of the DM mass. This is the reason for the systematic under-shooting of the DM mass in Figs. 5-8. To illustrate this effect, we have generated the analog of Fig. 6 for parton-level events, shown in Fig. 9. Note that the 1-$\sigma$ error bars overlap with the actual DM mass, except in the case where $m_c = 75$ GeV, since there the bowl is essentially flat below $\tilde{m}_c \sim 75$ GeV (see Fig. 11). Hence, detector simulations would have to correct for this systematic effect in any real DM mass measurement.
After generating bowls using the parton-level events, the value of $N(\tilde{m}_c)$ (see Eq. (2.11)) at the minimum is $\sim 0$. For the same bowls, but with detector effects, the value $N(\tilde{m}_c = m_c)$ is no longer 0; for $O(100{,}000)$ events, $N(\tilde{m}_c = m_c) \sim O(100)$. Hence, one can attempt to clean up the bowl by removing from the data sample the events which contribute at the minimum. This increases the steepness of the bowl and might help minimize the error, since all removed events are guaranteed to be pathological. However, since this cleaning process does not change the location of the minimum, it does not change the error bars presented above.

Background Contamination and Cuts
In this section we will argue that a generic set of cuts designed to remove backgrounds will not degrade the minimum of the $M_{T2}$ bowl, and hence will not affect our conclusions. Motivated by the choices taken in [17], we have analyzed the following cuts for illustration, which are relevant for di-lepton events with jets and missing energy (i.e. slepton pair production):

1. Require 2 opposite-sign, same-flavor leptons (e or µ).

While cuts should be tailored to the particular model under consideration, these are fairly generic, and will serve to illustrate the point that our results are not significantly degraded by background removal. We also explored the effect of a cut on $M_{T2}$, requiring $M_{T2}(\tilde{m}_c = 0) > 100$ GeV. These cuts will be very efficient for eliminating Standard Model (SM) backgrounds, the worst of which is $W^+W^-$ plus jets, where the $W^\pm$ bosons decay leptonically. In particular, this di-boson process is dominated by $t\bar{t}$ production.
An $M_{T2}$ cut on the $t\bar{t}$ background is a powerful discriminator, and in many cases it will have no effect on the DM mass determination. To see this, first note that the $t\bar{t}$ background falls into the same class of $n = 1$ processes we have been studying, with the tops as the colored particles leading to hard USR, the $W^\pm$ as parents and the neutrinos as children. Since the child is a neutrino, $m_c = 0$, and the minimum of the bowl will occur at $\tilde{m}_c = 0$. Then (neglecting detector effects, which only add a small perturbation) the $t\bar{t}$ background will be largely eliminated by an $M_{T2}$ cut of $O(100\ {\rm GeV})$. In Fig. 10 we plot this $M_{T2}$ distribution including detector effects; there is a clear endpoint at $m_W$. The cross section for $t\bar{t} \to b\bar{b}\,\mu^-\mu^+\nu_\mu\bar{\nu}_\mu$ is 5 pb. Starting with a 100,000 event sample, the cuts 1-6 described above reduce this background to $0.065 \pm 0.002$ pb, and the $M_{T2}(\tilde{m}_c = 0) > 100$ GeV cut then eliminates all remaining events. In this way, the worst of the SM backgrounds can easily be removed for $m_p \gtrsim 100$ GeV.

Figure 10: $M_{T2}$ distribution for the $t\bar{t}$ process, where the b-jets are USR and the $W^\pm$ are the parent particles. This plot is made before cuts and we have included detector effects. There is an endpoint at $m_W$ since the child, i.e. the neutrino, is massless in these events.
In Fig. 11 we have plotted a series of $M_{T2}$ bowls before and after this set of cuts, to check that the signal in the DM mass determination region of the bowl is not degraded. For $m_p = 100$ GeV there is a significant degradation of the bowl; however, for this value of $m_p$ there would in any case be tremendous difficulty disentangling the signal from the $W^+W^-$ background, since the two have very similar $M_{T2}$ endpoints. For the models with heavier parents, or with additional colored states producing hard USR, the minimum is maintained after these cuts. Additionally, the $M_{T2}$ cut has no effect on these plots (excluding the example with $m_p = 100$ GeV). We also checked that this statement is robust under variations of the cut parameters chosen above.

Measuring $M_{T2}^{\max}$
In generating Figs. 5-8, we assumed that the $M_{T2}^{\max}$ endpoint has been measured precisely and matches the theoretical value. In [8], another $M_{T2}$-based variable, $M_{T2\perp}$, was introduced, which is constructed from the momenta projected along the direction perpendicular to the USR. The endpoint of this distribution is shown there to be independent of the USR momentum and identical to the $M_{T2}^{\max}(\tilde{m}_c, 0)$ endpoint. Hence, even in cases with large USR, it is possible to extract the required input for constructing the bowls.
However, the level of accuracy with which $M_{T2}^{\max}(\tilde{m}_c, 0)$ can be measured depends on detector effects. For the purposes of illustration, in Fig. 12 we show how the $M_{T2}$ bowl is degraded as one varies the $M_{T2}^{\max}$ endpoint by ±2% and ±5%, for $m_p = 300$ GeV and $m_c = 150$ GeV. For variations of order −5%, the minimum is shifted by a non-trivial amount and can even disappear in some cases. For overestimates of $M_{T2}^{\max}$ of order 5%, the width of the minimum becomes much broader than the statistical error bars presented above. Therefore, an accurate measurement of the $M_{T2}$ endpoint is crucial to the success of this method. On the other hand, the steepness of the bowl around the minimum is maximized for the correct choice of $M_{T2}^{\max}$; combining this observation with the direct measurement of the endpoint would improve the accuracy with which $M_{T2}^{\max}$ can be determined. Other variables, such as that suggested in [], could also be used to determine the $M_{T2}$ endpoint. The accuracy with which this can be done is left for future work, though it can likely be achieved with high precision, given that far more statistics are available for the endpoint measurement than for the bowls.

Discussion and Conclusions
In this work we studied the possibility of using $n = 1$ single stage cascade decays to measure the DM mass at the LHC. We have argued, using the particular $M_{T2}$ variant of [8], that if a signal is observable and backgrounds can be eliminated, it is possible to make $O(10\%)$ measurements of the DM mass with $O(10{,}000)$ events before cuts, for optimal values of $m_p$ and $m_c$. We have shown that this requires a precise determination of $M_{T2}^{\max}(\tilde{m}_c, 0)$. In [18], the matrix element technique was used to ascertain how well the neutralino mass could be measured in an $n = 1$ squark decay for a benchmark model with a parent mass of 561 GeV and a child mass of 97 GeV. Using parton-level events, so that jet smearing effects, etc., are not considered, they found that with 3000 events before cuts only an upper limit on the child mass could be determined, and that with 7500 events a measurement could be made with an $O(100\%)$ error bar. This can be compared with our Fig. 7 for the benchmark $m_p = 500$ GeV and $m_c = 125$ GeV†. We find that with 3000 events we can make an $O(70\%)$ determination, and with 7500 events the error bar goes down to $O(50\%)$, once the correction for detector effects is applied as described above in Sec. 4.1. Hence, the methods seem to be competitive, but ultimately a detailed study will be required to determine which leads to the best DM mass determination.

Figure 12: $M_{T2}$ bowls for variations of the $M_{T2}^{\max}$ endpoint of ±5% and ±2%. All bowls are made with 50,000 smuon pair production events before cuts. For clarity we have not simulated detector effects for these events.
Finally, we would like to emphasize the model independence of these results, even when there are complicated cascade decays. A large class of events can be interpreted as n = 1 processes with USR. All that is required is that the only missing energy in event is produced at the end of the chain as the result of the decay of an on-shell parent, and that the USR be distinguishable from the decay product of the parent. When this isolation is possible (e.g. the two photon plus missing energy signal of some gauge mediated SUSY breaking models) our results can be applied up to differences due to detector effects.

Appendix A: Benchmark Models
The $n = 1$ events studied here are the simplest class of events at the LHC which involve the DM. Perhaps the most commonly studied such process is $p\,p \to \tilde{q}\,\tilde{q} \to j\,j\,\tilde{\chi}^0 \tilde{\chi}^0$. In such models, however, one expects higher-$n$ processes to be present as well, which will give additional kinematic information. In this appendix we outline examples of $n = 1$ processes with scalar, fermionic and vector parents. Estimates of the electroweak LHC cross sections for these models are given in Table 1.

A.1 Scalar Parents
We begin by motivating scalar parents. Recently, a wave of leptophilic DM models has been proposed to explain measured cosmic ray anomalies. A non-supersymmetric example, which is additionally motivated by the baryon-DM coincidence, can be constructed by simply extending the SM with two additional fields, a new Higgs doublet, $H'$, and a leptophilic DM state, $X$, interacting via [19]
$$\mathcal{L} = \bar{X} L H' + m_X \bar{X} X.$$
When the heavy Higgs doublet is integrated out, a lepton number transferring operator, suppressed by an effective scale $M$, is generated. This operator transfers the lepton asymmetry to the DM sector, so that the DM density is set by an asymmetry and not by thermal freeze-out. Also note that such leptophilic DM candidates can be viable as an explanation for the excess of cosmic ray positrons observed by the PAMELA experiment [6].
Although the DM would be asymmetric (i.e. mostly $\bar{X}$) when its density freezes in, that asymmetry could later be erased through Majorana mass terms for $\bar{X}$ and $X$. Then, in the universe today, $X\bar{X} \to \ell^+ \ell^-$ annihilation may give rise to significant cosmic ray positron signals.
The DM would be created at the collider through electroweak production of the $H'$. However, production rates for $p\,p \to H'^+ H'^- \to X\bar{X}\,\ell^+\ell^-$ will be low (see Table 1). While these events could be extracted from the large di-boson background with high luminosity, DM mass determination will be difficult.
Note that this process is kinematically identical to the electroweak pair production of sleptons (see [17] for a study which determines how feasible it is to find these processes at the LHC),
$$p\,p \to \tilde{\ell}^+ \tilde{\ell}^- \to \ell^+ \ell^-\, \tilde{\chi}^0 \tilde{\chi}^0. \qquad {\rm (A.5)}$$

A.2 Fermionic Parents
For an example with fermionic parents, we turn to a model embedded within the MSSM. Introduce a superfield DM candidate, $X$, with the quantum numbers of a sterile neutrino. The active sneutrino can then mix with the scalar partner of $X$, $\tilde{X}$, leading to mixed sneutrino DM; in [7], $\tilde{X}$ has been shown to be a viable DM candidate. At the LHC, electroweak production can go through
$$p\,p \to \tilde{\chi}^+ \tilde{\chi}^- \to \ell^+ \ell^-\, \tilde{\nu}\, \tilde{\nu}^*,$$
where $\tilde{\chi}^\pm$ is a chargino. Since the parent particles are fermions instead of scalars, the production rates are larger (see Table 1).
With a slight modification, these classes of DM models can be related to the lepton asymmetry. One can add a new pair of electroweak doublet superfields, $D$ and $\bar{D}$, and a new superpotential term involving a new Yukawa coupling $\lambda$, where $m_D$ is the mass for $D$ and $m_X$ is the mass for $X$.
Integrating out these doublet states results in a lepton number transferring operator, suppressed by an effective scale $M$, which can be used to generate the relic density. Production at the collider then goes through electroweak production of the fermionic $\tilde{D}$:
$$p\,p \to \tilde{D}^+ \tilde{D}^- \to \tilde{X} \tilde{X}^*\, \ell^+ \ell^-. \qquad {\rm (A.9)}$$
Production rates in all of these fermionic parent models can be further enhanced by embedding the $n = 1$ process into squark decays:
$$p\,p \to \tilde{q}\,\tilde{q} \to \tilde{\chi}^+ \tilde{\chi}^-\, j\, j \to \tilde{\nu} \tilde{\nu}^*\, \ell^+ \ell^-\, j\, j. \qquad {\rm (A.10)}$$
As described above (see Fig. 4), this will lead to a much harder USR distribution, which in turn will imply better DM mass determination.

A.3 Vector Parents
Lastly, we note that within UED models, pair production of vectors gives rise to similar signals. For example,
$$p\,p \to W^{(1)+} W^{(1)-} \to \ell^+ \ell^-\, \nu^{(1)} \bar{\nu}^{(1)}, \qquad {\rm (A.11)}$$
where $W^{(1)\pm}$ is a KK $W$-boson and $\nu^{(1)}$ is a KK neutrino, is an $n = 1$ chain. This process can similarly be embedded in the decay of new colored states, which gives rise to harder USR,
$$p\,p \to Q^{(1)} \bar{Q}^{(1)} \to j\, j\, W^{(1)+} W^{(1)-} \to j\, j\, \ell^+ \ell^-\, \nu^{(1)} \bar{\nu}^{(1)},$$
where $Q^{(1)}$ is a KK quark. Note that if $\nu^{(1)}$ is the DM, its mass is restricted to be greater than $O(50\ {\rm TeV})$ by direct detection experiments [20].

Appendix B: Phase Space Dependence on $M_{T2}$
To show the dependence of the phase space on $M_{T2}$ and the overall scale $m_p$, we will assume that the parents are produced on-shell, so that the $2 \to 4$ production in Fig. 13 can be approximated by the $2 \to 2$ cross section $\sigma_{2\to2}$ and the parent particle decay width $\Gamma$.

Figure 13: Process considered in this section. The proton momenta are $q_i$, the parent momenta are $k_i$, the visible momenta are $p_{v_i}$ and the child momenta are $p_{c_i}$.
We begin by simplifying the general $2 \to 2$ differential cross section and $1 \to 2$ differential decay width. Throughout the calculation we will drop overall constants, since they do not contribute to the normalized distributions. The $2 \to 2$ differential cross section is given by
$$d\sigma_{2\to2} = \frac{1}{4\,|\vec{q}_1|_{\rm CM}\sqrt{\hat{s}}}\, |\mathcal{M}_\sigma|^2\, (2\pi)^4\, \delta^4(q_1 + q_2 - k_1 - k_2)\, \frac{d^3 k_1}{(2\pi)^3\, 2E_1}\, \frac{d^3 k_2}{(2\pi)^3\, 2E_2},$$
where $\sqrt{\hat{s}}$ is the parton center-of-mass (CM) energy and $E_i$ is the energy of the $i$th parent. Integrating over $\vec{k}_2$ in the CM frame to eliminate $\delta^3(\vec{q}_1 + \vec{q}_2 - \vec{k}_1 - \vec{k}_2)$ gives
$$d\sigma_{2\to2} = \frac{1}{4\,|\vec{q}_1|_{\rm CM}\sqrt{\hat{s}}}\, |\mathcal{M}_\sigma|^2\, (2\pi)^4\, \delta(\sqrt{\hat{s}} - E_1 - E_2)\, \frac{d^3 k_1}{(2\pi)^6\, 4 E_1 E_2},$$
where $\vec{k}_2 = -\vec{k}_1$. Similarly, we simplify the $1 \to 2$ differential decay widths,
$$d\Gamma_i = \frac{1}{2 E_i}\, |\mathcal{M}_{\Gamma_i}|^2\, (2\pi)^4\, \delta^4(k_i - p_{c_i} - p_{v_i})\, \frac{d^3 p_{c_i}}{(2\pi)^3\, 2E_{c_i}}\, \frac{d^3 p_{v_i}}{(2\pi)^3\, 2E_{v_i}},$$
where $c$ and $v$ stand for child and visible, respectively, and $i = 1, 2$. Integrating over $\vec{p}_{c_i}$ to eliminate $\delta^3(\vec{k}_i - \vec{p}_{c_i} - \vec{p}_{v_i})$ gives
$$d\Gamma_i = \frac{1}{2 E_i}\, |\mathcal{M}_{\Gamma_i}|^2\, (2\pi)^4\, \delta(E_{k_i} - E_{c_i} - E_{v_i})\, \frac{d^3 p_{v_i}}{(2\pi)^6\, 4 E_{c_i} E_{v_i}},$$
where the $\delta$-function enforces $\vec{p}_{c_i} = \vec{k}_i - \vec{p}_{v_i}$. Since for $1 \to 2$ decays the summed and squared matrix elements $|\mathcal{M}_{\Gamma_i}|^2$ are functions only of the masses, they will not contribute to the normalized distributions. We drop these factors from here forward.
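For completeness, the magnitude of the parent three-momentum left over after the energy $\delta$-function is integrated follows from on-shell kinematics of equal-mass parents in the CM frame (a standard result, sketched here rather than taken from the original derivation):

```latex
% In the partonic CM frame, \vec{k}_2 = -\vec{k}_1 and, for equal parent
% masses, E_1 = E_2 = \sqrt{\hat{s}}/2.  The on-shell condition then gives
E_1^2 \;=\; m_p^2 + |\vec{k}_1|^2 \;=\; \frac{\hat{s}}{4}
\qquad\Longrightarrow\qquad
|\vec{k}_1| \;=\; \sqrt{\frac{\hat{s}}{4} - m_p^2}\,,
% so the normalized distributions depend on \hat{s} only through the
% ratio \hat{s}/(4 m_p^2), consistent with the claim in Sec. 1.
```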
Convolving the differential parent decay widths with the differential $2 \to 2$ cross section, and again dropping overall constant factors, gives the full differential rate for the $2 \to 4$ process. Define $\cos\beta_i$ to be the angle between the visible particle momentum and the parent particle: