1. INTRODUCTION
Global navigation satellite systems (GNSS) provide precise positioning services worldwide and have become a mainstream navigation technique. The key to quickly acquiring a precise position with GNSS is ambiguity resolution, which fixes the unknown integer cycle counts in the carrier phase observations. Ambiguity resolution enables instantaneous precise positioning, but it also introduces a reliability risk. How to resolve the integer ambiguities reliably is therefore one of the most pressing issues in the GNSS and navigation research communities (Feng et al., 2012). Ambiguity resolution comprises two procedures: ambiguity estimation and ambiguity validation (Teunissen, 1995). The reliability of ambiguity estimation is measured by the success rate (Li and Teunissen, 2011; Feng et al., 2012). A realistic success rate is also important in integrity monitoring (Li et al., 2015). Unfortunately, the success rate is not always easy to calculate, so bounding it is used as a practical way to assess reliability (Teunissen, 1998a; 1998b). Meanwhile, a transformation procedure called reduction or decorrelation is often performed before ambiguity estimation to improve its efficiency and performance (Teunissen, 1995; Hassibi and Boyd, 1998).
The impact of decorrelation on the integer estimation success rate has been studied (Thomsen, 2000; Verhagen, 2003; Feng and Wang, 2011; Verhagen et al., 2013), but its impact on the success rate bounds has not yet been systematically studied.
In this study, the methodologies for bounding the integer estimator success rate are revisited and the performance of these bounds is evaluated. In particular, the effect of decorrelation on the performance of the success rate bounds is addressed, giving a better understanding of success rate bound calculation.
2. PROCEDURE OF AMBIGUITY ESTIMATION
The carrier phase-based GNSS precise positioning model can be expressed as a mixed model:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn1.gif?pub-status=live)
where E(·) and D(·) are the mathematical expectation and dispersion operators respectively, and a and b are the integer and real-valued parameter vectors respectively. The observation vector y is assumed to follow a multivariate normal distribution and its variance-covariance (vc-) matrix is given as $Q_{yy}$. The solution of this mixed integer model can be addressed in three steps:
• Estimate the real-valued parameters $\hat a$, $\hat b$ and the corresponding vc-matrices with a standard least-squares procedure. In this step, the integer nature of a is not considered.

• Ambiguity resolution: map the real-valued ambiguity parameter $\hat a$ to the integer ambiguity $\breve {a}$ with an integer estimator and validate the correctness of $\breve {a}$.

• Update the real-valued parameters with $\breve {b} = \hat b - Q_{\hat b\hat a} Q_{\hat a\hat a}^{ - 1} (\hat a - \breve {a} )$.
2.1. Integer Estimation and its Success Rate
Our focus is how to map the real-valued ambiguity parameter $\hat a$ to an integer ambiguity $\breve {a}$; this procedure is also known as integer estimation. There are three distinct estimators that map the real-valued ambiguity parameter $\hat a$ to the integer ambiguity $\breve {a}$, known as Integer Rounding (IR), Integer Bootstrapping (IB) and Integer Least-Squares (ILS) respectively. Each integer estimator uniquely defines a region, called a 'pull-in region', and each pull-in region contains only one integer vector. If the real-valued ambiguity parameter $\hat a$ falls in a particular pull-in region, it is fixed to the corresponding integer vector. The pull-in region of integer rounding is defined as (Teunissen, 1998b):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn2.gif?pub-status=live)
where $S_{IR,z}$ is the pull-in region of the integer rounding estimator centred at z, $x_i$ is the ith component of x, and ℝ^n and ℤ^n are the n-dimensional real-valued and integer spaces respectively. The integer rounding estimator simply maps the real-valued ambiguity to the nearest integer dimension by dimension, so its pull-in region is a hyper-cube. A two-dimensional example of the integer rounding pull-in region is shown in Figure 1. However, the integer rounding estimator does not perform well in practice due to the strong correlation between ambiguity components.
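The dimension-by-dimension rounding described above is straightforward to sketch. The following minimal Python snippet (the function name and plain-list representation are illustrative, not from the paper) fixes each float ambiguity independently:

```python
import math

def integer_rounding(a_float):
    """Integer rounding (IR): fix each float ambiguity component to the
    nearest integer independently; correlations between components are
    ignored, which is why IR performs poorly for correlated ambiguities."""
    # floor(x + 0.5) rounds halves upward, avoiding Python's banker's rounding
    return [math.floor(x + 0.5) for x in a_float]
```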
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-77599-mediumThumb-S0373463316000047_fig1g.jpg?pub-status=live)
Figure 1. A demonstration of integer estimator pull-in region in a two-dimensional case; the presented pull-in regions are the integer rounding (left), the integer bootstrapping (centre) and the integer least-squares (right).
The integer bootstrapping estimator considers the cross correlation between ambiguity components by employing a sequential rounding procedure, which is defined as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn3.gif?pub-status=live)
where 〈·〉 means rounding to the nearest integer and $\breve {a} _{B,i}$ is the ith component of the ambiguity vector fixed by the integer bootstrapping estimator. Correspondingly, the pull-in region of the integer bootstrapping estimator can be expressed as (Teunissen, 1999; 2001):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn4.gif?pub-status=live)
where $c_i$ is an n × 1 canonical unit vector whose ith entry equals 1 and whose remaining entries equal 0, and L is a lower triangular matrix which fulfils $Q_{\hat a\hat a} = LDL^T$. Since the integer bootstrapping estimator employs conditional variances, its pull-in region is a parallelogram in the two-dimensional case (see Figure 1).
The integer least-squares estimator finds the integer vector by minimising a quadratic form, given as (Teunissen, 1993):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn5.gif?pub-status=live)
The corresponding pull-in region is given as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn6.gif?pub-status=live)
The pull-in region of integer least-squares is defined by a series of projectors; in the two-dimensional case it is a hexagon. The minimisation problem in Equation (5) cannot be solved analytically, so a search procedure is employed to find the optimal solution.
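For illustration, the minimisation in Equation (5) can be sketched with a brute-force enumeration over integers near the rounded float solution; this is only a didactic stand-in for the efficient search used in practice (see the decorrelation discussion below), and the function name and grid width are illustrative:

```python
import itertools
import math

def ils_search(a_float, Q_inv, width=2):
    """Minimise (a - z)^T Q^{-1} (a - z) over integer vectors z by
    enumerating a small grid around the rounded float solution.
    LAMBDA replaces this brute force with an efficient tree search."""
    n = len(a_float)
    centre = [math.floor(x + 0.5) for x in a_float]
    best, best_val = None, float("inf")
    for offset in itertools.product(range(-width, width + 1), repeat=n):
        z = [c + o for c, o in zip(centre, offset)]
        d = [a - zi for a, zi in zip(a_float, z)]
        val = sum(d[i] * Q_inv[i][j] * d[j] for i in range(n) for j in range(n))
        if val < best_val:
            best, best_val = z, val
    return best
```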
2.2. The Concept of Integer Estimation Success Rate
The pull-in region theory interprets integer estimation from a geometrical perspective, but evaluating integer estimator performance by the pull-in regions is difficult. In this section, integer estimation theory is investigated from the probability perspective, along with the method to evaluate integer estimator reliability.
Since the observation vector y follows a normal distribution, the estimated float ambiguity parameter also follows a multivariate normal distribution, denoted as $\hat a\sim N(a,Q_{\hat a\hat a} )$. The mathematical expectation of $\hat a$ is an unknown integer vector. The Probability Density Function (PDF) of $\hat a$ is expressed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn7.gif?pub-status=live)
where |·| denotes the determinant operator. The stochastic characteristics of the real-valued ambiguity are uniquely described by its vc-matrix $Q_{\hat a\hat a}$. The probability of $\hat a$ falling in the pull-in region $S_a$, which is known as the ambiguity estimation success rate, can be calculated by integrating $f_{\hat a} (x)$ over $S_a$ and denoted as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn8.gif?pub-status=live)
where $\breve {a}$ is the fixed integer ambiguity vector. The equation indicates that a higher success rate means the real-valued ambiguity vector is more likely to be fixed to the true integer vector. Therefore the success rate is an important reliability indicator in ambiguity estimation. The success rate $P_S$ depends on $f_{\hat a} (x)$ and $S_a$, and hence it can be used to evaluate the performance of an integer estimator with a given $Q_{\hat a\hat a}$, or to evaluate the strength of the underlying model with a given integer estimator.
2.3. The Essence of Decorrelation
For the fast or even instantaneous ambiguity resolution case, the ambiguities are highly correlated. The search space is extremely elongated, which makes the search procedure in integer least-squares inefficient. In order to improve search efficiency, a decorrelation approach proposed by Teunissen (1993) is now widely used in GNSS ambiguity estimation. The combination of decorrelation and integer least-squares is known as the Least-squares Ambiguity Decorrelation Adjustment (LAMBDA). The decorrelation procedure not only improves ILS efficiency, but also improves the performance of the IR and IB estimators.
The basic idea of decorrelation is to transform $\hat a$ and $Q_{\hat a\hat a}$ with an invertible transformation, given as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn9.gif?pub-status=live)
Then, integer estimation is performed with $\hat z$ and $Q_{\hat z\hat z}$. After the best integer candidate $\breve{z}$ is identified, the best integer candidate $\breve {a}$ can be obtained by the inverse transformation:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn10.gif?pub-status=live)
Equations (9) and (10) indicate that the transformation matrix Z has to be invertible. Besides this, there are two conditions for an admissible ambiguity transformation (Teunissen, 1995): integer matrix and volume preserving.
• The integer matrix condition means Z ∈ ℤ^{n×n} and Z^{−1} ∈ ℤ^{n×n}. This condition guarantees that the transformation preserves the integer nature of the ambiguity parameters.

• Volume preserving refers to $ \vert {Q_{\hat a\hat a}} \vert = \vert {Q_{\hat z\hat z}} \vert $. This condition guarantees $ \Vert {\hat a - \breve {a}} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z - \breve {z}} \Vert _{Q_{\hat z\hat z}} ^2 $. Hence, the transformation does not change the search result.
In decorrelation, a third condition on the selection of the Z matrix is that the off-diagonal entries of $Q_{\hat z\hat z}$ are no larger than their counterparts in $Q_{\hat a\hat a}$. This condition ensures the transformation is a decorrelation procedure rather than one that increases the correlation (Xu et al., 1995).
According to the second condition, |Z| = ±1. Therefore the transformation matrix Z is a unimodular matrix (Cassels, 2012). Most decorrelation methods construct the unimodular matrix from a triangular matrix with its pivotal entries equal to 1. According to Cramer's rule (e.g. see Strang and Borre, 1997), an integer unimodular matrix also has an integer inverse matrix.
Decorrelation can be implemented by two distinct methods: the integer Gaussian transformation (Teunissen, 1993) and the Lenstra-Lenstra-Lovász (LLL) method (Hassibi and Boyd, 1998). The integer Gaussian transformation employs a series of elementary Gaussian transformations, each decorrelating one entry of $Q_{\hat a\hat a}$. The details of the integer Gaussian transformation method are discussed in De Jonge and Tiberius (1996). The LLL method employs a vector-based reduction, which is a modified Gram-Schmidt orthogonalisation method. The details of the LLL method can be found in Grafarend (2000) and Xu (2001). It is noted that the integer Gaussian transformation method also involves a permutation procedure to flatten the spectrum of conditional variances, which also improves search efficiency (Teunissen, 1995). After the permutation, the transformation Z is no longer necessarily triangular, but it is still a unimodular integer matrix. Recently, the importance of permutation has been systematically studied. Xu et al. (2012) compared the impact of different permutation strategies on decorrelation performance. The permutation procedure has also been applied to the LLL method, e.g. Jazaeri et al. (2012), but its performance is still not as good as that of the integer Gaussian transformation (Jazaeri et al., 2014).
A two-dimensional example of ambiguity decorrelation is illustrated in Figure 2. The figure shows the 95% confidence regions of $Q_{\hat a\hat a}$ and $Q_{\hat z\hat z}$, which reflect the impact of decorrelation on the distribution of $\hat a$. It is noted that the volumes of the two confidence regions are the same and the two regions contain the same number of integer candidates. The confidence region of $Q_{\hat a\hat a}$, which has a larger minimum bounding rectangle, is thus more difficult to search. The figure also shows an example of $\hat a$ and the corresponding $\hat z$. It can be proven that the transformation does not change their distances to the origin; therefore, $ \Vert {\hat a} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z} \Vert _{Q_{\hat z\hat z}} ^2 $.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-47020-mediumThumb-S0373463316000047_fig2g.jpg?pub-status=live)
Figure 2. A two-dimensional example of ambiguity decorrelation with the integer Gaussian transformation method.
3. SUCCESS RATE COMPUTATION METHODS
The previous section has introduced the concept of integer estimation success rate. In this section we focus on the computational aspect of success rate.
3.1. Success Rate of Integer Rounding Estimator
For the integer rounding estimator, the pull-in region $S_{IR,z}$ is defined by Equation (2). For the scalar case the success rate of the integer rounding estimator can be expressed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn11.gif?pub-status=live)
where $\sigma _{\hat a} = \sqrt {Q_{{\hat a}{\hat a}}}$. The function Φ(x) is the Cumulative Distribution Function (CDF) of the standard normal distribution, defined as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn12.gif?pub-status=live)
For the one-dimensional case, the IR success rate depends only on $Q_{\hat a\hat a}$. If $Q_{\hat a\hat a}$ is a diagonal matrix, the IR success rate can be computed dimension by dimension using Equation (11); otherwise, it is difficult to calculate the IR success rate directly although $S_{IR,z}$ is a regular region. In this case, a lower bound of the IR success rate can be obtained by ignoring the off-diagonal entries of $Q_{\hat a\hat a}$. After $Q_{\hat a\hat a}$ is reduced to a diagonal matrix, the corresponding IR success rate can be computed dimension-wise. In this way, we actually obtain a lower bound of the IR success rate, expressed as (Teunissen, 1998b):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn13.gif?pub-status=live)
where $f_{\hat a_i} (x)$ is the marginal PDF of $f_{\hat a} (x)$ for the ith dimension, and $\underline {P(\breve {a} _{IR} = a)}$ denotes the lower bound of the IR success rate.
The diagonalised vc-matrix $Q'_{\hat a\hat a}$ contains only the marginal probability distribution information, so it is known as the marginal vc-matrix. The probability distribution of $\hat a$ with the full and marginal vc-matrices is depicted in Figure 3. The figure shows the PDFs of $Q_{\hat a\hat a}$ and $Q'_{\hat a\hat a}$ in the left and right panels respectively. The ellipses are the 95% confidence ellipses of the two vc-matrices and the dashed lines are the bounds of the ellipses. Due to the loss of correlation information, the PDF in the right panel is more spread out than the one in the left panel. The blue squares are the IR pull-in regions. According to the figure, a float solution following the distribution in the right panel is less likely to fall in the IR pull-in region, so the IR success rate calculated with the PDF in the right panel is lower than the actual IR success rate. The more diagonal $Q_{\hat a\hat a}$ is, the closer the lower bound is to the true IR success rate.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-25545-mediumThumb-S0373463316000047_fig3g.jpg?pub-status=live)
Figure 3. Two-dimensional example of integer rounding success rate calculated with the full vc-matrix $Q_{\hat a\hat a}$ (left) and the marginal vc-matrix $Q'_{\hat a\hat a}$ (right).
3.2. Success Rate of Integer Bootstrapping Estimator
Defining the conditional vector $\hat a' = [\hat a_1, \hat a_{2 \vert 1}, \cdots, \hat a_{n \vert N} ]^T$, the PDF of $\hat a$ can be expressed with the conditional vector PDF $f_{\hat a'} (x)$, given as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn14.gif?pub-status=live)
In Equation (14), $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $ since |L| = 1. The equation indicates that the IB success rate can be calculated dimension by dimension since $\hat a'$ has a diagonal vc-matrix. According to the definition of the IB pull-in region, the IB success rate can be expressed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn15.gif?pub-status=live)
Substituting the conditional variances into the equation, the IB success rate can be calculated as (Teunissen, 1998b):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn16.gif?pub-status=live)
where $f_{\hat a'_i} (x)$ is the marginal PDF of $f_{\hat a'} (x)$ for the ith dimension, or the conditional PDF of $f_{\hat a_i} (x)$ conditioned on dimensions i − 1, i − 2, ..., 1.
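Equation (16) reduces the IB success rate to a product of scalar probabilities at the conditional standard deviations $\sqrt{D_i}$ of $Q_{\hat a\hat a} = LDL^T$. A self-contained sketch (function name illustrative):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ib_success_rate(Q):
    """IB success rate, Eq. (16): the product of scalar rounding
    probabilities evaluated at the conditional std devs sqrt(D_i)
    from Q = L D L^T."""
    n = len(Q)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = Q[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (Q[i][j] - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    p = 1.0
    for d in D:
        p *= 2.0 * phi(1.0 / (2.0 * math.sqrt(d))) - 1.0
    return p
```

For a diagonal vc-matrix this coincides with the IR lower bound of Equation (13); for a correlated one it is larger, consistent with Equation (17).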
In contrast to the lower bound of the IR success rate (see Equation (13)), there is no approximation in the IB success rate calculation. A two-dimensional example of the IB success rate is depicted in Figure 4. The figure shows $f_{\hat a} (x - a)$ and $f_{\hat a'} (x - a)$ respectively. The ellipses are the 95% confidence ellipses of $\hat a$ and $\hat a'$ respectively, and the dashed lines show the bounds of the ellipses. The areas of the two ellipses are identical since $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $. The blue parallelogram and square are the IB pull-in regions of $\hat a$ and $\hat a'$. The parallelogram can be viewed as a distorted version of a square, the distortion being caused by the cross correlation between ambiguity components. The correlation disappears after replacing $\hat a$ by its conditional version $\hat a'$. Equation (14) indicates that the PDFs $f_{\hat a} (x - a)$ and $f_{\hat a'} (x - a)$ are equivalent, so the success rate of IB on $\hat a$ and on $\hat a'$ is the same. The right panel also shows the precision improvement of the conditional variance in the second dimension. This confirms that the IR success rate calculated with Equation (13) is a lower bound. Meanwhile, the IB success rate can be adopted as an upper bound of the IR success rate, given as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn17.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-06523-mediumThumb-S0373463316000047_fig4g.jpg?pub-status=live)
Figure 4. Two-dimensional example of integer bootstrapping success rate. The left panel shows $f_{\hat a} (x - a)$ and the corresponding IB pull-in region; the right panel shows $f_{\hat a'} (x - a)$ and the corresponding IB pull-in region.
The IB success rate can be calculated exactly, but it is non-unique since the volume-preserving factorisation $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $ is non-unique. For example, the conditioning may also start from the nth dimension; the conditional variance matrix D in this case differs from the one starting from the first dimension. Different conditional variances correspond to different conditional distributions, so the IB success rate is also different. The most popular way to calculate the IB success rate is to sort the diagonal entries of $Q_{\hat a\hat a}$ in ascending order. Xu et al. (2012) also examined a more complicated sorting algorithm called Vertical Bell Labs Layered Space-Time (V-BLAST) and reported a better performance with a heavier computational burden.
Although it is impossible to find a unique IB success rate, it is still possible to identify the best IB success rate, which can serve as an upper bound of the IB success rate. A volume-preserving transformation can be constructed to transform $Q_{\hat a\hat a}$ to $ \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} I_n $; clearly $ \vert { \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} I_n} \vert = \vert {Q_{\hat a\hat a}} \vert $. In this case, the confidence region of $Q_{\hat a\hat a}$ is transformed from a hyper-ellipsoid to a hypersphere. After the transformation, the variance of each component is $\sigma _i^2 = \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} $, and the Ambiguity Dilution Of Precision (ADOP) can be defined as (Teunissen, 1997):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn18.gif?pub-status=live)
With the ADOP, the upper bound of the IB success rate can be calculated as (Teunissen, 1997; Teunissen and Odijk, 1997; Teunissen, 2003):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn19.gif?pub-status=live)
The IB success rate calculated with the ADOP is an invariant upper bound of the IB success rate; its proof can be found in Teunissen (2003).
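A two-dimensional sketch of the ADOP bound in Equations (18) and (19), assuming the form $P \le (2\Phi(1/(2\,\mathrm{ADOP})) - 1)^n$ with $\mathrm{ADOP} = \vert Q_{\hat a\hat a}\vert^{1/(2n)}$ (function name illustrative):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def adop_upper_bound_2d(Q):
    """ADOP-based upper bound of the IB success rate for a 2x2 vc-matrix:
    all conditional variances are replaced by ADOP^2 = |Q|^(1/n)."""
    n = 2
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]   # 2x2 determinant
    adop = det ** (1.0 / (2.0 * n))
    return (2.0 * phi(1.0 / (2.0 * adop)) - 1.0) ** n
```

For a vc-matrix that is already a scaled identity, the bound coincides with the exact IB success rate, as expected from the construction.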
3.3. Success Rate of Integer Least-squares Estimator
The previous analysis indicates that even integrating $f_{\hat a} (x)$ over a regular region can be difficult, so calculating the ILS success rate is more difficult still, since its pull-in region is more complicated. An analytical ILS success rate is difficult to find, so a numerical solution is preferred. The numerical (Monte Carlo) method is a computationally intensive way to obtain the ILS success rate and cannot easily meet the requirements of real-time applications. Alternatively, an easy-to-calculate upper/lower bound is sufficient in some applications. In this section, a number of upper and lower bounds of the ILS success rate are analysed, including the integer bootstrapping-based lower bound, the ellipsoidal upper/lower bounds, the eigenvalue-based upper/lower bounds and the integration region-based upper bound.
3.3.1. Lower Bound Based on Integer Bootstrapping
Integer least-squares achieves the maximum success rate for a given vc-matrix (Teunissen, 1999), so the ILS success rate is always higher than (or equal to) the IB success rate. Although the ILS success rate is fairly difficult to compute, the IB success rate can be calculated easily and exactly. Hence, the IB success rate can be used as a lower bound of the ILS success rate, shown as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn20.gif?pub-status=live)
Combined with the decorrelation procedure, the IB success rate is considered a tight lower bound of the ILS success rate (Thomsen, 2000; Verhagen, 2003; Feng and Wang, 2011; Verhagen et al., 2013).
3.3.2. Ellipsoidal Upper and Lower Bounds
Although the ILS pull-in region is complicated, it is still possible to approximate it with an ellipsoid. Hassibi and Boyd (1998) proposed upper and lower bounds of the ILS success rate based on ellipsoids, denoted the ellipsoidal upper and lower bounds in their discussion. The ellipsoids are actually spheres in the $Q_{\hat a\hat a}$-spanned space, so the key is to find the radii of the spheres.
The basic idea of the ellipsoidal upper bound is to construct a hypersphere with the same volume as $Q_{\hat a\hat a}$. The volume of an n-dimensional sphere can be calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn21.gif?pub-status=live)
where V is the volume of the sphere and Γ(n) is the gamma function, which can be defined recursively: Γ(1) = 1, Γ(n + 1) = nΓ(n), $\Gamma (1/2) = \sqrt \pi$. The volume of $Q_{\hat a\hat a}$ is $ \vert {Q_{\hat a\hat a}} \vert $, but the volume of $S_{ILS,0}$ equals 1. In order to have the same volume, the integration volume in the $Q_{\hat a\hat a}$-spanned space can be set as $\displaystyle{1 \over { \vert {Q_{\hat a\hat a}} \vert }}$. If the integration region is a ball, its radius can be calculated as $r = \Big(\displaystyle{1 \over {\alpha _n \vert {Q_{\hat a\hat a}} \vert }}\Big)^{\textstyle{1 \over n}} $. $ \Vert {\hat a - a} \Vert _{Q_{\hat a\hat a}} ^2 $ follows a χ²(n, 0) distribution, where n and 0 are the degrees of freedom and the non-centrality parameter respectively. The upper bound of the ILS success rate can then be expressed as (Hassibi and Boyd, 1998):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn22.gif?pub-status=live)
Substituting the ADOP into the equation, it can be rewritten as (Teunissen, 2000):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn23.gif?pub-status=live)
The upper bound based on the pull-in region approximation has been examined and reported to work well (Thomsen, 2000; Verhagen, 2003; Feng and Wang, 2011).
The lower bound of the ILS success rate is sought by finding the inscribed ellipsoid of the ILS pull-in region. As discussed before, the boundaries of the ILS pull-in region are formed by half-spaces perpendicular to the integer vectors c = z₁ − z₂, z₁, z₂ ∈ ℤ^n and passing through the point $\displaystyle{1 \over 2}c$. The minimum distance between two integer vectors, $d_{min} = \min \Vert c \Vert _{Q_{\hat a\hat a}}$, can then be found. Taking the radius of the inscribed ellipsoid as $\displaystyle{1 \over 2}d_{min}$, the lower bound of the ILS success rate can be given as (Hassibi and Boyd, 1998; Teunissen, 1998b):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn24.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-50877-mediumThumb-S0373463316000047_fig5g.jpg?pub-status=live)
Figure 5. A two-dimensional example of ILS success rate upper and lower bound ellipsoidal integration region.
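In two dimensions the ellipsoidal upper bound can be evaluated in closed form, since the χ²(2, 0) CDF is 1 − exp(−x/2). The sketch below assumes the equal-volume disc has squared radius $r^2 = 1/(\pi\,\mathrm{ADOP}^2)$ for n = 2 (i.e. the constant $\Gamma(n/2+1)^{2/n}/\pi$); the function name and this constant are this sketch's assumptions rather than quotations from the text:

```python
import math

def ils_ellipsoidal_upper_bound_2d(Q):
    """Ellipsoidal upper bound of the 2-D ILS success rate: integrate the
    PDF over a disc with the same volume as the pull-in region, using the
    closed-form chi-square(2) CDF 1 - exp(-x/2)."""
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    adop_sq = math.sqrt(det)                  # ADOP^2 = |Q|^(1/n) with n = 2
    r_sq = 1.0 / (math.pi * adop_sq)          # assumed squared disc radius
    return 1.0 - math.exp(-r_sq / 2.0)        # P(chi^2_2 <= r_sq)
```

As a sanity check, for the identity vc-matrix this bound slightly exceeds the exact ILS success rate $(2\Phi(1/2)-1)^2 \approx 0.1466$, as an upper bound should.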
3.3.3. Upper and Lower Bounds Based on Eigenvalue
Teunissen (1998b; 2000a; 2000b) proposed a pair of upper and lower bounds of the ILS success rate based on eigenvalues. Instead of bounding the integration region, these bounds are based on approximating the probability distribution. Two positive definite matrices can be compared through their quadratic forms: if $f^T Q_1 f \ge f^T Q_2 f \;\; \forall f \in$ ℝ^n, then Q₁ ≥ Q₂. The $\hat a$ with the smaller vc-matrix always has the larger ILS success rate (Teunissen, 2000a; 2000b).
The idea of these bounds is quite similar to the ADOP. The volume of the vc-matrix can be defined by its eigenvalues:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn25.gif?pub-status=live)
where λ = [λ₁, λ₂, ..., λₙ]ᵀ is the eigenvalue vector of $Q_{\hat a\hat a}$. The ADOP is the geometric mean of the ambiguity standard deviations (Teunissen, 1997; Odijk and Teunissen, 2008), so a group of upper and lower bounds of $Q_{\hat a\hat a}$ can be identified from its eigenvalues. Defining λ_max = max{λᵢ} and λ_min = min{λᵢ}, corresponding upper and lower bounds of $Q_{\hat a\hat a}$ can be constructed as Q₁ = λ_max Iₙ and Q₂ = λ_min Iₙ. In this study, an auxiliary matrix, Q₃ = ADOP² Iₙ, is also constructed for comparison purposes. According to the previous analysis, $ \vert Q_3 \vert = \vert Q_{\hat a\hat a} \vert $.
A two-dimensional example is presented in Figure 6 to show the relationship between $Q_{\hat a\hat a}$, Q₁, Q₂ and Q₃. The three constructed matrices are scaled identity matrices, so their confidence regions are circles in the two-dimensional case. The volume of Q₃ is the same as that of $Q_{\hat a\hat a}$; the confidence circles of Q₁ and Q₂ are the circumscribed and inscribed circles of the confidence ellipse of $Q_{\hat a\hat a}$. The figure shows that Q₁ has poorer precision and Q₂ has better precision than $Q_{\hat a\hat a}$. The ILS success rates of Q₁ and Q₂ can be calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn26.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-81818-mediumThumb-S0373463316000047_fig6g.jpg?pub-status=live)
Figure 6. A two-dimensional example of the confidence ellipse of $Q_{\hat a\hat a}$ and its eigenvalue-based upper and lower bounds.
Then, the ILS success rate of $Q_{\hat a\hat a}$ can be bounded as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn27.gif?pub-status=live)
The ILS success rate of $Q_3$ can be calculated with Equation (19). The ILS success rate of $Q_3$ is an upper bound of the integer bootstrapping success rate, and it is also an approximation of the ILS success rate (Verhagen, Reference Verhagen2005; Verhagen and Teunissen, Reference Verhagen, Li and Teunissen2013).
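Since $Q_1$ and $Q_2$ are scaled identity matrices, their ILS success rates reduce to a product of identical one-dimensional probabilities of the form $(2\Phi(1/(2\sigma)) - 1)^n$. A minimal Python sketch, with hypothetical eigenvalues:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def diag_success_rate(sigma2, n):
    """Success rate for Q = sigma2 * I_n: each component succeeds independently
    when its rounding error stays within half a cycle."""
    return (2.0 * phi(1.0 / (2.0 * math.sqrt(sigma2))) - 1.0) ** n

# lam_max and lam_min would come from the eigenvalues of Q (hypothetical values)
lam_max, lam_min, n = 2.8, 0.2, 2
lower = diag_success_rate(lam_max, n)   # success rate of Q1: pessimistic bound
upper = diag_success_rate(lam_min, n)   # success rate of Q2: optimistic bound
```

The success rate of $Q_1$ then lower-bounds, and that of $Q_2$ upper-bounds, the ILS success rate of $Q_{\hat a\hat a}$.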
3.3.4. Upper Bounds Based on Bounding Integration Region
Besides bounding with ellipsoidal integration regions, other integration region bounding methods are available, as discussed below.
Teunissen (Reference Teunissen1998a; Reference Teunissen1998b) proposed an upper bound of the ILS success rate based on a reduction of the ILS pull-in region. The ILS pull-in region is bounded by infinite planes orthogonal to the integer vector c. There are at most $2^n - 1$ pairs of valid bounding planes, since one integer vector has only $2^n - 1$ pairs of adjacent integer vectors (Cassels, Reference Cassels2012). The definition of the ILS pull-in region (see Equation (6)) shows that the ILS pull-in region can also be interpreted as the overlap of $2^n - 1$ bands centred at a with width $\Vert c \Vert_{Q_{\hat a\hat a}}$. If fewer bands are used to intersect a pull-in region, then a looser upper bound $U_a \supset S_a$ can be identified.
The ILS pull-in region can be written as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn28.gif?pub-status=live)
The left side of the inequality can be defined as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn29.gif?pub-status=live)
where $c_i \in {\open Z}^n$. If q independent integer vectors are chosen, the vector v can be defined as $v = [v_1, v_2, \cdots, v_q]^T$.
Applying the variance propagation law, the corresponding vc-matrix of v can be written as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn30.gif?pub-status=live)
The vc-matrix $Q_{vv}$ is a $q \times q$ symmetric matrix, so the $LDL^T$ decomposition can be applied and the corresponding success rate can be calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn31.gif?pub-status=live)
The conditional variances can be obtained from the $L^T DL$ decomposition, similar to integer bootstrapping. The number of integer vectors q in practice can be chosen as $n \le q \le 2^n - 1$ (Verhagen, Reference Verhagen2003). When $q = 2^n - 1$, Equation (31) calculates the ILS success rate exactly, but $2^n - 1$ increases dramatically as the ambiguity dimension increases. Therefore, the ILS success rate remains difficult to calculate in high-dimension cases. The upper bound becomes closer to the true ILS success rate as q grows. A two-dimensional example of bounding the ILS success rate with the band intersection method is shown in Figure 7. For a two-dimensional case, the ILS pull-in region is an intersection of three bands. If q = 2, for example, the ILS pull-in region can be approximated by the intersection of two bands (the blue region).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-78558-mediumThumb-S0373463316000047_fig7g.jpg?pub-status=live)
Figure 7. A two-dimensional example of the bounding ILS pull-in region using the band intersection method.
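The band intersection bound can also be checked numerically. The sketch below estimates, by Monte Carlo rather than the $LDL^T$ evaluation of Equation (31), the probability that a float error $e \sim N(0, Q_{\hat a\hat a})$ lies inside all q chosen bands; the vc-matrix and the integer vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def band_intersection_upper_bound(Q, C, n_samples=200_000):
    """Monte Carlo estimate of P(|c_i^T Q^{-1} e| <= c_i^T Q^{-1} c_i / 2 for all i),
    with e ~ N(0, Q). Using fewer than 2^n - 1 bands makes the region larger
    than the ILS pull-in region, hence an upper bound on the ILS success rate."""
    Qinv = np.linalg.inv(Q)
    e = rng.multivariate_normal(np.zeros(Q.shape[0]), Q, size=n_samples)
    v = e @ Qinv @ C                                           # v_i = c_i^T Q^{-1} e
    half_width = 0.5 * np.einsum('ij,ji->i', C.T @ Qinv, C)    # c_i^T Q^{-1} c_i / 2
    inside = np.all(np.abs(v) <= half_width, axis=1)
    return inside.mean()

Q = np.array([[0.10, 0.06],
              [0.06, 0.08]])
C = np.eye(2)               # q = n = 2 bands from the unit integer vectors
ub = band_intersection_upper_bound(Q, C)
```

With only two of the three two-dimensional bands, the estimate over-counts the pull-in probability, which is exactly the looseness the text describes.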
A practical issue of the band intersection is the selection of the integer vector set v: $\forall v_i, v_j \in v$, if $i \ne j$ then $v_i \ne \lambda v_j$, where λ is an arbitrary non-zero real number. In other words, no two vectors in the set v may be collinear. For the two-dimensional case, each integer has at most three pairs of adjacent integers, and the two integers in each pair are collinear (e.g. $[1, 0]^T$ and $[-1, 0]^T$ in the figure are collinear). Only one vector from each pair can be included in the integer set v; thus the ILS pull-in region is an intersection of three bands, not six. Including collinear integer vectors would cause duplicated integrations and make the upper bound smaller than it should be.
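The collinearity restriction on the candidate set v is straightforward to enforce programmatically. A small sketch (the helper name is ours):

```python
import numpy as np
from itertools import combinations

def no_collinear_pairs(vectors):
    """Check that no two integer vectors in the candidate set are collinear.
    Two vectors are collinear exactly when stacking them gives rank < 2."""
    for a, b in combinations(vectors, 2):
        if np.linalg.matrix_rank(np.vstack([a, b])) < 2:
            return False
    return True

ok = no_collinear_pairs([np.array([1, 0]), np.array([0, 1]), np.array([1, 1])])
bad = no_collinear_pairs([np.array([1, 0]), np.array([-1, 0])])  # collinear pair
```

Here `ok` holds because the three vectors are pairwise independent, while `bad` fails on the collinear pair $[1, 0]^T$ and $[-1, 0]^T$.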
4. NUMERICAL COMPARISONS
Different upper and lower bounds of the integer estimator success rate have been discussed, but the performance of these bounds is the issue of real concern. Performance comparisons between these bounds have been extensively studied, for example by Verhagen (Reference Verhagen2003), Feng and Wang (Reference Feng and Wang2011) and Verhagen and Teunissen (Reference Verhagen, Li and Teunissen2013). However, the performance of the success rate bounds is still not fully understood. For example, the decorrelation procedure has been widely used in ambiguity estimation, but its impact on the success rate bounds has not yet been investigated. It is known that the ILS success rate is independent of the decorrelation procedure, but this does not mean that the bounds are decorrelation-invariant. In this section, the impact of the decorrelation procedure on the integer estimator success rate bounds is investigated.
4.1. Simulation Strategy
In order to examine the performance of the success rate bounds, a simulation-based comparison is carried out. The simulation scheme is briefly described in this section.
The medium baseline model is adopted in this simulation. The least-squares method is adopted to estimate the float solution, based on single epoch GPS observations. The elevation-dependent weighting strategy is used to capture the elevation-dependent observation noise and ionosphere noise, which is given as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463316000047:S0373463316000047_eqn32.gif?pub-status=live)
where w is the weight factor and E is the elevation angle in degrees.
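Equation (32) appears only as an image above, so the exact weight function is not reproduced here; the sketch below uses one commonly adopted exponential elevation-dependent form with assumed coefficients, purely to illustrate the down-weighting of low-elevation observations:

```python
import math

def elevation_weight(E_deg, a=10.0, E0=10.0):
    """Illustrative elevation-dependent weight factor (assumed form and
    coefficients, not the paper's Equation (32)): low-elevation observations
    carry larger noise and ionospheric delay, so their variance is inflated
    and their weight reduced."""
    sigma_scale = 1.0 + a * math.exp(-E_deg / E0)   # noise inflation at low elevation
    return 1.0 / sigma_scale ** 2                   # weight as inverse variance scale

# weights grow monotonically with elevation and approach 1 near zenith
w_low, w_high = elevation_weight(10.0), elevation_weight(80.0)
```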
In order to capture the satellite geometry impact, a 15° × 15° globally covering, evenly distributed ground tracking network is simulated. Considering the orbital period of the navigation satellites, 24 hours of observation data are simulated from all monitor stations. The satellite geometry varies slowly, so the observation interval is set to 1800 seconds (half an hour) to reduce the computational load. As a result, a data set comprising 12,600 epochs in total is used to describe the satellite geometry at different locations and times. We calculate $\hat a$ and its vc-matrix $Q_{\hat a\hat a}$ for each epoch in the data set and calculate the corresponding ILS success rate with the Monte Carlo method. For each $Q_{\hat a\hat a}$ in the data set, 100,000 samples following $N(0,Q_{\hat a\hat a})$ are simulated to approximate its ILS success rate. More samples means higher computation precision in the Monte Carlo simulation; a more detailed relationship between sample number and precision can be found in Verhagen and Teunissen (Reference Verhagen, Li and Teunissen2013). In this study, 100,000 samples are used as a trade-off between precision and computational efficiency. Single frequency GPS data are simulated, with $\sigma_{P,z}$ = 10 cm, $\sigma_{\varphi,z}$ = 1 mm and $\sigma_{I,z}$ = 1 cm, where $\sigma_{P,z}$, $\sigma_{\varphi,z}$ and $\sigma_{I,z}$ are the standard deviations of the code, phase and virtual ionosphere observations in the zenith direction.
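The Monte Carlo procedure described above can be sketched as follows. For low dimensions, a brute-force search over integer candidates around the rounded float solution serves as the ILS estimator; the example vc-matrix is illustrative and far smaller than a real single-epoch GPS one:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

def ils_solve(a_float, Qinv, radius=2):
    """Brute-force ILS for low dimensions: search integer candidates near the
    rounded float solution and keep the minimiser of the quadratic form."""
    base = np.round(a_float).astype(int)
    best, best_val = None, np.inf
    for offset in product(range(-radius, radius + 1), repeat=len(a_float)):
        cand = base + np.array(offset)
        r = a_float - cand
        val = r @ Qinv @ r
        if val < best_val:
            best, best_val = cand, val
    return best

def monte_carlo_success_rate(Q, n_samples=5_000):
    """Estimate the ILS success rate: the true integer vector is 0, so a trial
    succeeds when ILS maps the float sample back to the zero vector."""
    Qinv = np.linalg.inv(Q)
    samples = rng.multivariate_normal(np.zeros(Q.shape[0]), Q, size=n_samples)
    hits = sum(np.all(ils_solve(s, Qinv) == 0) for s in samples)
    return hits / n_samples

Q = np.array([[0.09, 0.05],
              [0.05, 0.06]])
ps = monte_carlo_success_rate(Q)
```

In practice a search-tree ILS solver (e.g. LAMBDA) replaces the brute-force search, and far more samples are used, as in the 100,000-sample setting above.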
4.2. Evaluation of IR and IB Success Rate
The performance of the integer rounding and integer bootstrapping success rate bounds is investigated first. In this study, the success rate calculated from Monte Carlo simulation is used as the reference, with 100,000 samples in each trial. In this case, the impact of simulation error on the success rate is typically smaller than 0·001 (Verhagen and Teunissen, Reference Verhagen, Li and Teunissen2013). The success rates and their bounds are compared before and after decorrelation.
The IR success rate and its bounds are presented in Figure 8. The two panels show the IR success rates calculated from the same samples before and after decorrelation. First, the figure shows that the decorrelation process significantly improves the IR success rate: the maximum IR success rate is improved from about 0·4 to about 0·98. The IB success rate serves as an upper bound of the IR success rate, and their maximum difference is reduced from about 0·7 to 0·15 after decorrelation. The calculated IR success rate lower bound also becomes tighter after decorrelation: the maximum difference between the simulated IR success rate and its lower bound is reduced from 0·2 to less than 0·05. Therefore, decorrelation improves the IR success rate and makes its upper and lower bounds tighter. After decorrelation, the IR lower bound is a tight lower bound of the simulated IR success rate.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-25842-mediumThumb-S0373463316000047_fig8g.jpg?pub-status=live)
Figure 8. Upper and lower bound of integer rounding success rate before and after decorrelation.
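The IR lower bound and the IB success rate compared in Figure 8 both take the product-of-normal-probabilities form: the IR lower bound uses the marginal standard deviations, while the IB success rate uses the conditional ones. A Python sketch (the vc-matrix is illustrative, and the bootstrapping order is simply the given component order):

```python
import math
import numpy as np

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rounding_lower_bound(Q):
    """Lower bound of the IR success rate from the marginal variances diag(Q)."""
    p = 1.0
    for qi in np.diag(Q):
        p *= 2.0 * phi(1.0 / (2.0 * math.sqrt(qi))) - 1.0
    return p

def bootstrap_success_rate(Q):
    """IB success rate from the sequential conditional variances of Q,
    which are the squared diagonal entries of its Cholesky factor."""
    d = np.diag(np.linalg.cholesky(Q)) ** 2   # conditional variances
    p = 1.0
    for di in d:
        p *= 2.0 * phi(1.0 / (2.0 * math.sqrt(di))) - 1.0
    return p

Q = np.array([[0.09, 0.05],
              [0.05, 0.06]])
ir_lb = rounding_lower_bound(Q)
ib = bootstrap_success_rate(Q)
```

Because a conditional variance never exceeds the corresponding marginal variance, `ir_lb` never exceeds `ib`, matching the bounding relationship shown in the figure.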
The impact of decorrelation on the integer bootstrapping success rate is illustrated in Figure 9. The IB success rate also increases significantly after decorrelation: the minimum IB success rate is increased from about 0·1 to 0·4. The discrepancy between the IB success rate and the ADOP-based upper bound decreases after decorrelation, with the maximum discrepancy falling from 0·8 to about 0·2. In most cases, the discrepancy is smaller than 0·1 after decorrelation. However, the improvement is contributed solely by the IB success rate itself, since the ADOP is invariant under the decorrelation procedure (Teunissen, Reference Teunissen2003). In conclusion, the decorrelation procedure can improve the IB success rate significantly, and the ADOP-based upper bound is a tight upper bound of the IB success rate after decorrelation. The results also indicate that the improvement available from the sorting strategy on the IB success rate after decorrelation is limited, since the IB success rate cannot exceed the ADOP-based upper bound no matter which sorting strategy is applied.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-95829-mediumThumb-S0373463316000047_fig9g.jpg?pub-status=live)
Figure 9. Relationship between IB success rate and its ADOP-based upper bound before and after decorrelation.
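The ADOP-based upper bound and its invariance under an admissible (unimodular, determinant-preserving) transformation can be verified directly. A sketch with illustrative matrices:

```python
import math
import numpy as np

def adop_upper_bound(Q):
    """ADOP-based upper bound of the IB success rate:
    P = (2 * Phi(1 / (2 * ADOP)) - 1)^n, with ADOP = |Q|^(1/(2n))."""
    n = Q.shape[0]
    adop = np.linalg.det(Q) ** (1.0 / (2 * n))
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (2.0 * phi(1.0 / (2.0 * adop)) - 1.0) ** n

Q = np.array([[0.09, 0.05],
              [0.05, 0.06]])
Z = np.array([[1, -1],
              [0,  1]])                       # unimodular: |det Z| = 1
bound_before = adop_upper_bound(Q)
bound_after = adop_upper_bound(Z.T @ Q @ Z)   # transformed vc-matrix
```

Since $\vert Z^T Q Z \vert = \vert Q \vert$ for any unimodular Z, the two bounds coincide, which is the invariance stated in the text.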
4.3. Evaluation of ILS Success Rate
The ILS success rate is independent of the decorrelation procedure and this is why the decorrelation can be used to accelerate the ILS. However, the upper and lower bounds of the ILS success rate are not necessarily invariant during the decorrelation. In this study, three groups of upper and lower bounds and one approximation method are considered. These methodologies have already been discussed.
At first, the ellipsoidal upper and lower bounds are evaluated and the results are presented in Figure 10. The figure indicates that the discrepancy between the ILS success rate and its ellipsoidal upper bound is smaller than 0·2 in most cases. However, it cannot be called a tight upper bound, since the discrepancy reaches 0·4 in some cases. The ellipsoidal lower bound performs even worse than the upper bound: the discrepancy is larger than 0·1 even in the best case. The figure also indicates that the ellipsoidal upper and lower bounds are invariant under decorrelation, which is reasonable. For the ellipsoidal upper bound, the success rate is invariant because the volume of $Q_{\hat a\hat a}$'s confidence ellipsoid does not change during the decorrelation procedure. The lower bound also does not change because $ \Vert {\hat a - \breve {a}} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z - \breve{z}} \Vert _{Q_{\hat z\hat z}} ^2 $, so $d_{\min}$ does not change during decorrelation either. In conclusion, the ellipsoidal upper and lower bounds are not tight bounds of the ILS success rate, but they are invariant under decorrelation.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-25783-mediumThumb-S0373463316000047_fig10g.jpg?pub-status=live)
Figure 10. Ellipsoidal upper and lower bound of ILS success rate before and after decorrelation.
The eigenvalue-based upper and lower bounds are evaluated and the results are presented in Figure 11. This figure indicates that the eigenvalue-based ILS success rate bounds are also not tight. Before decorrelation, the upper bounds are around one and the lower bounds are around zero for the majority of cases. The eigenvalues of $Q_{\hat a\hat a}$ determine the axes of the confidence ellipses. As shown in Figure 2, the shape of the confidence ellipse changes during decorrelation. Before decorrelation, the confidence ellipse is extremely elongated, so the eigenvalue-based upper and lower bounds are too rough to approximate the true ILS success rate. Their performance can be improved by decorrelation: the lower bound is significantly improved and the minimum discrepancy is reduced to about 0·2. However, the bounds are still not tight even after decorrelation. Therefore, the eigenvalue-based upper and lower bounds are not tight bounds of the ILS success rate and they are not recommended in practice.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-36568-mediumThumb-S0373463316000047_fig11g.jpg?pub-status=live)
Figure 11. Eigenvalue-based upper and lower bound of ILS success rate before and after decorrelation.
The performance of the band intersection upper bound and the IB lower bound is evaluated and the results are presented in Figure 12. In this study, the number of bands q = n, where n is the ambiguity dimension. As discussed, a larger q means a tighter bound, but also a heavier computational load. The figure shows that the band intersection upper bound is a tight bound before decorrelation, with a maximum discrepancy of about 0·2. After decorrelation, the success rate of the band intersection upper bound increases, but this makes the band intersection method a less tight upper bound of the ILS success rate. The impact of decorrelation on the band intersection method is demonstrated in Figure 13. This figure shows that the bounding region and the ILS pull-in region are elongated in the high-correlation case. With properly chosen bands, the difference between the bounding region and the ILS pull-in region can be small. The decorrelation procedure gives the ILS pull-in region a better geometry, but also makes the band intersection region over-large. Since the PDF values of $\hat a$ around the ILS pull-in region vertices are similar, a larger area difference also means a larger success rate discrepancy (Wang, Reference Wang2015). On the other hand, the IB success rate also increases after decorrelation, and this increase makes the IB success rate a tight lower bound of the ILS success rate. The maximum difference between the IB and ILS success rates decreases from 0·85 to less than 0·05. After decorrelation, the IB success rate is the tightest lower bound of the ILS success rate.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-21118-mediumThumb-S0373463316000047_fig12g.jpg?pub-status=live)
Figure 12. Band intersection upper bound and IB lower bound of ILS success rate before and after decorrelation.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-58992-mediumThumb-S0373463316000047_fig13g.jpg?pub-status=live)
Figure 13. Integration bounding region versus the ILS pull-in region in high and low correlation case.
The ADOP-based IB success rate upper bound can also be used as an approximation of the ILS success rate. An examination of the ADOP-based ILS success rate approximation is presented in Figure 14. As discussed, the ADOP-based approximation is independent of the decorrelation procedure. For most cases, the discrepancy between the ADOP approximation and the ILS success rate varies between −0·1 and 0·2; the extreme discrepancies reach about 0·3.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160945-73443-mediumThumb-S0373463316000047_fig14g.jpg?pub-status=live)
Figure 14. ADOP-based ILS success rate approximation before and after decorrelation.
The success rate bounds may change when different decorrelation methods are applied. In this study, the integer Gaussian transformation method is used for decorrelation. However, the integer Gaussian transformation is not the only decorrelation method, so the performance of these success rate bounds with other decorrelation methods, e.g. LLL decorrelation, is still worth investigating.
5. CONCLUSIONS AND RECOMMENDATIONS
In our analysis, the success rates of integer rounding, integer bootstrapping and integer least-squares and their bounds have been systematically studied. According to the numerical results, decorrelation has a significant impact on the success rates and their bounds. The key findings of this study are summarised as follows. The decorrelation procedure can improve the IR and IB success rates substantially, but it cannot improve the ILS success rate. Decorrelation can reduce the discrepancy between the IR success rate and its lower bound; after decorrelation, the lower bound of the IR success rate becomes a tight lower bound of the true IR success rate. Decorrelation also increases the IB success rate, with a maximum improvement reaching 0·8; after decorrelation, the IB success rate is closer to its ADOP-based upper bound. The ellipsoidal upper and lower bounds are invariant under decorrelation, but they are not tight bounds of the ILS success rate. The decorrelation procedure can improve the eigenvalue-based ILS bounds, but they are still not tight enough after decorrelation. The decorrelation procedure degrades the band intersection upper bound and improves the IB lower bound; after decorrelation, the IB success rate is the tightest lower bound of the ILS success rate, while without decorrelation the band intersection upper bound performs best among the ILS success rate upper bounds. The ADOP-based success rate upper bound is an approximation of the ILS success rate and its value does not change during decorrelation.
ACKNOWLEDGEMENT
This research was conducted with financial support from the Cooperative Research Centre for Spatial Information (CRC-SI) Project 1·01, ‘New carrier phase processing strategies for achieving precise and reliable multi-satellite, multi-frequency GNSS/RNSS positioning in Australia'.