
Impact of Decorrelation on Success Rate Bounds of Ambiguity Estimation

Published online by Cambridge University Press:  28 March 2016

Lei Wang*
Affiliation:
(Queensland University of Technology, Brisbane, Australia) (School of Geodesy and Geomatics, Wuhan University, China)
Yanming Feng
Affiliation:
(Queensland University of Technology, Brisbane, Australia)
Jiming Guo
Affiliation:
(School of Geodesy and Geomatics, Wuhan University, China)
Charles Wang
Affiliation:
(Queensland University of Technology, Brisbane, Australia)

Abstract

Reliability is an important performance measure of navigation systems and this is particularly true in Global Navigation Satellite Systems (GNSS). GNSS positioning techniques can achieve centimetre-level accuracy which is promising in navigation applications, but can suffer from the risk of failure in ambiguity resolution. Success rate is used to measure the reliability of ambiguity resolution and is also critical in integrity monitoring, but it is not always easy to calculate. Alternatively, success rate bounds serve as more practical ways to assess the ambiguity resolution reliability. Meanwhile, a transformation procedure called decorrelation has been widely used to accelerate ambiguity estimations. In this study, the methodologies of bounding integer estimation success rates and the effect of decorrelation on these success rate bounds are examined based on simulation. Numerical results indicate decorrelation can make most success rate bounds tighter, but some bounds are invariant or have their performance degraded after decorrelation. This study gives a better understanding of success rate bounds and helps to incorporate decorrelation procedures in success rate bounding calculations.

Research Article

Copyright © The Royal Institute of Navigation 2016

1. INTRODUCTION

Global Navigation Satellite Systems (GNSS) provide global precise positioning services and have become a mainstream navigation technique. The key to quickly acquiring a precise position with GNSS is ambiguity resolution, which resolves the unknown cycle number in the carrier phase observation to an integer. Ambiguity resolution enables instantaneous precise positioning, but it also introduces a reliability risk. Therefore, how to reliably resolve the integer ambiguity is one of the most pressing issues in the GNSS and navigation research communities (Feng et al., Reference Feng, Ochieng, Samson, Tossaint, Hernández-Pajares, Juan, Sanz, Aragón-Àngel, Ramos-Bosch and Jofre2012). Ambiguity resolution comprises two procedures: ambiguity estimation and ambiguity validation (Teunissen, Reference Teunissen1995). The reliability of ambiguity estimation is measured by the success rate (Li and Teunissen, Reference Li and Teunissen2011; Feng et al., Reference Feng, Ochieng, Samson, Tossaint, Hernández-Pajares, Juan, Sanz, Aragón-Àngel, Ramos-Bosch and Jofre2012). A realistic success rate is also important in integrity monitoring (Li et al., Reference Li, Li, Yuan, Wang and Hou2015). Unfortunately, the success rate is not always easy to calculate, so bounding it is used as a practical way to assess reliability (Teunissen, Reference Teunissen1998a; Reference Teunissen1998b). Meanwhile, a transformation procedure called reduction or decorrelation is often performed before ambiguity estimation to improve its efficiency and performance (Teunissen, Reference Teunissen1995; Hassibi and Boyd, Reference Hassibi and Boyd1998).
The impact of decorrelation on the integer estimation success rate has been studied (Thomsen, Reference Thomsen2000; Verhagen, Reference Verhagen2003; Feng and Wang, Reference Feng and Wang2011; Verhagen et al., Reference Verhagen, Li and Teunissen2013), but its impact on the success rate bounds has not yet been systematically studied.

In this study, the methodologies for bounding the integer estimator success rate are revisited and the performance of these bounds is evaluated. In particular, the effect of decorrelation on the performance of the success rate bounds is addressed, which gives a better understanding of success rate bound calculation.

2. PROCEDURE OF AMBIGUITY ESTIMATION

The carrier phase-based GNSS precise positioning model can be expressed as a mixed model:

(1) $$\eqalign{& E(y) = Aa + Bb \cr & D(y) = Q_{yy}} $$

where E(·) and D(·) are the mathematical expectation and dispersion operators respectively, and a and b are the integer and real-valued parameter vectors respectively. The observation vector y is assumed to follow a multivariate normal distribution and its variance-covariance (vc-) matrix is given as $Q_{yy}$. The solution of this mixed integer model can be addressed in three steps:

  • Estimating real-valued parameters $\hat a$ , $\hat b$ and corresponding vc-matrix with a standard least-squares procedure. In this step, the integer nature of a is not considered.

  • Ambiguity resolution: mapping the real-valued ambiguity parameter $\hat a$ to the integer ambiguity $\breve {a} $ with an integer estimator and validating the correctness of $\breve {a} $ .

  • Updating the real-valued parameters with $\breve {b} = \hat b - Q_{\hat b\hat a} Q_{\hat a\hat a}^{ - 1} (\hat a - \breve {a} )$ .
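The three steps above can be sketched numerically. In this minimal sketch all values (float ambiguities, baseline component, covariances) are hypothetical, and plain rounding stands in for the integer estimator of the second step:

```python
# Minimal sketch of the three-step solution; all numbers are hypothetical.
a_hat = [3.2, -1.9]                  # step 1: float ambiguities (cycles)
b_hat = [10.05]                      # step 1: float baseline component (m)
Q_ba = [[0.04, -0.03]]               # assumed cross-covariance Q_{b-hat,a-hat}
Q_aa_inv = [[2.0, 0.5],
            [0.5, 1.0]]              # assumed inverse of Q_{a-hat,a-hat}

# Step 2 (simplified): integer estimation by plain rounding.
a_fix = [round(x) for x in a_hat]

# Step 3: conditional update  b = b_hat - Q_ba * Q_aa^{-1} * (a_hat - a_fix)
resid = [a_hat[i] - a_fix[i] for i in range(2)]
t = [sum(Q_aa_inv[i][j] * resid[j] for j in range(2)) for i in range(2)]
b_fix = [b_hat[0] - sum(Q_ba[0][j] * t[j] for j in range(2))]
print(a_fix, b_fix)
```

With these assumed numbers the conditional update shifts the baseline component from 10.05 m to roughly 10.038 m, illustrating how fixing the ambiguities sharpens the real-valued parameters.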

2.1. Integer Estimation and its Success Rate

Our focus is how to map the real-valued ambiguity parameter $\hat a$ to an integer ambiguity $\breve {a} $ ; this procedure is known as integer estimation. Three distinct estimators perform this mapping, known as Integer Rounding (IR), Integer Bootstrapping (IB) and Integer Least-Squares (ILS) respectively. Each integer estimator uniquely defines a region, called a 'pull-in region', and each pull-in region contains only one integer vector. If the real-valued ambiguity parameter $\hat a$ falls in a particular pull-in region, it is fixed to the corresponding integer vector. The pull-in region of integer rounding is defined as (Teunissen, Reference Teunissen1998b):

(2) $${S_{IR,z}} = \bigcap\limits_{i = 1}^n \left\{{ x \in {\mathbb R}^n \,\vert\, \left\vert{{x_i} - {z_i}}\right\vert \le \displaystyle{1 \over 2}}\right\}, \forall z \in {\mathbb Z}^n $$

where $S_{IR,z}$ is the pull-in region of the integer rounding estimator centred at z, $x_i$ is the ith component of x, and ${\mathbb R}^n$ and ${\mathbb Z}^n$ are the n-dimensional real-valued and integer spaces respectively. The integer rounding estimator simply maps the real-valued ambiguity to the nearest integer dimension by dimension, so its pull-in region is a hyper-cube. A two-dimensional example of the integer rounding pull-in region is shown in Figure 1. However, the integer rounding estimator does not perform well in practice due to the strong correlation between ambiguity components.

Figure 1. A demonstration of integer estimator pull-in region in a two-dimensional case; the presented pull-in regions are the integer rounding (left), the integer bootstrapping (centre) and the integer least-squares (right).

The integer bootstrapping estimator considers the cross correlation between ambiguity components by employing a sequential rounding procedure, which is defined as:

(3) $$\eqalign{& {\breve a}_{IB,1} = \left\langle{{\hat a}_1}\right\rangle \cr & {\breve a}_{IB,2} = \left\langle{{\hat a}_{2 \vert 1}}\right\rangle = \left\langle{{\hat a}_2 - \sigma_{2,1}\sigma_{1,1}^{ - 2} ({\hat a}_1 - {\breve a}_{IB,1})}\right\rangle \cr & \vdots \cr & {\breve a}_{IB,n} = \left\langle{{\hat a}_{n \vert N}}\right\rangle = \left\langle{{\hat a}_n - \sum\limits_{i = 1}^{n - 1} \sigma_{n,i \vert I} \sigma_{i \vert I,i \vert I}^{ - 2} ({\hat a}_{i \vert I} - {\breve a}_{IB,i})}\right\rangle}$$

where 〈·〉 means rounding to the nearest integer, and $\breve {a} _{IB,i} $ is the ith component of the ambiguity vector fixed by the integer bootstrapping estimator. Correspondingly, the pull-in region of the integer bootstrapping estimator can be expressed as (Teunissen, Reference Teunissen1999; Reference Teunissen2001):

(4) $$S_{IB,z} = \bigcap\limits_{i = 1}^n {\left\{ x \in {\mathbb R}^n \,\vert\, \left\vert {c_i^T L^{ - 1} (x - z)} \right\vert \le \displaystyle{1 \over 2}\right\}}, \;\forall z \in {\mathbb Z}^n $$

where $c_i$ is an n × 1 canonical unit vector with its ith entry equal to 1 and the remaining entries equal to 0, and L is the lower triangular matrix from the decomposition $Q_{\hat a\hat a} = LDL^T $, with D diagonal. Since the integer bootstrapping estimator employs the conditional variances, its pull-in region is a parallelogram in the two-dimensional case (see Figure 1).
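The sequential rounding of Equation (3) can be sketched in the two-dimensional case. The float ambiguities and vc-matrix below are assumed values, chosen so that conditioning changes the result of plain rounding:

```python
# Integer bootstrapping (Equation (3)) in 2-D; float ambiguities and
# vc-matrix are assumed values.
a_hat = [1.6, 2.4]
Q = [[0.09, 0.07],
     [0.07, 0.09]]

a1_fix = round(a_hat[0])                                  # <a^_1>
# a^_{2|1} = a^_2 - sigma_{2,1} * sigma_1^{-2} * (a^_1 - a1_fix)
a2_cond = a_hat[1] - Q[1][0] / Q[0][0] * (a_hat[0] - a1_fix)
a2_fix = round(a2_cond)
print([a1_fix, a2_fix])
```

Plain rounding of $\hat a$ would give [2, 2]; conditioning on the first fixed ambiguity moves the second float value to about 2.71, which rounds to 3 instead.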

The integer least-squares estimator addresses the integer by minimising the quadratic form, which is given as (Teunissen, Reference Teunissen1993):

(5) $${\breve a}_{ILS} = \mathop {\arg \min}\limits_{a \in {\mathbb Z}^n} \left\{ \Vert {\hat a} - a \Vert_{Q_{\hat a\hat a}}^2 \right\} $$

The corresponding pull-in region is given as:

(6) $$S_{ILS,0} = \left\{ \hat a \in {\mathbb R}^n \,\vert\, w \le \displaystyle{1 \over 2} \Vert z \Vert _{Q_{\hat a\hat a}}, \forall z \in {\mathbb Z}^n \right\}, \;w = \displaystyle{{z^T Q_{\hat a\hat a}^{ - 1} (\hat a - a)} \over { \Vert z \Vert _{Q_{\hat a\hat a}}}} $$

The pull-in region of integer least-squares is defined by a series of projections w; in the two-dimensional case it is a hexagon. The minimisation problem in Equation (5) cannot be solved directly, so a search procedure is employed to find the optimal solution.

2.2. The Concept of Integer Estimation Success Rate

The pull-in region theory interprets integer estimation from a geometrical perspective, but evaluating integer estimator performance through the pull-in regions is difficult. In this section, integer estimation theory is investigated from the probability perspective, along with a method to evaluate integer estimator reliability.

Since the observation vector y follows a normal distribution, the estimated float ambiguity parameter also follows a multivariate normal distribution, denoted as $\hat a\sim N(a,Q_{\hat a\hat a} )$ . The mathematical expectation of $\hat a$ is an unknown integer vector. The Probability Density Function (PDF) of $\hat a$ is expressed as:

(7) $$f_{\hat a} (x) = \displaystyle{1 \over {\sqrt { \vert Q_{\hat a\hat a} \vert (2\pi )^n}}} \exp \left\{ - \displaystyle{1 \over 2} \Vert x \Vert _{Q_{\hat a\hat a}} ^2 \right\} $$

where |·| denotes the determinant operator. The stochastic characteristics of the real-valued ambiguity are uniquely described by its vc-matrix $Q_{\hat a\hat a} $ . The probability of $\hat a$ falling in the pull-in region $S_a$, known as the ambiguity estimation success rate, can be calculated by integrating $f_{\hat a} (x)$ over $S_a$:

(8) $$P_s = P(\breve {a} = a) =\; \int_{S_a} {\,f_{\hat a} (x)dx} $$

where $\breve {a} $ is the fixed integer ambiguity vector. The equation indicates that a higher success rate means the real-valued ambiguity vector is more likely to be fixed to the true integer vector; the success rate is therefore an important reliability indicator in ambiguity estimation. The success rate $P_s$ depends on $f_{\hat a} (x)$ and $S_a$, and hence it can be used to evaluate the performance of an integer estimator with given $Q_{\hat a\hat a} $ , or to evaluate the strength of the underlying model with a given integer estimator.
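Equation (8) rarely has a closed form, but it can be approximated by sampling. The sketch below estimates the integer rounding success rate by Monte Carlo for an assumed two-dimensional vc-matrix, using a hand-coded 2 × 2 Cholesky factor to draw correlated samples:

```python
import math
import random

random.seed(1)

# Monte Carlo sketch of Equation (8) for integer rounding; the 2-D vc-matrix
# below is assumed. Samples are drawn via a hand-coded 2x2 Cholesky factor.
q11, q12, q22 = 0.04, 0.02, 0.05
l11 = math.sqrt(q11)
l21 = q12 / l11
l22 = math.sqrt(q22 - l21 * l21)

hits, trials = 0, 100_000
for _ in range(trials):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    e1 = l11 * z1                  # float-minus-true ambiguity errors
    e2 = l21 * z1 + l22 * z2
    if round(e1) == 0 and round(e2) == 0:   # fixed to the true integer vector
        hits += 1
rate = hits / trials
print(rate)
```

For this assumed matrix the empirical rate lands around 0.96, slightly above the product of the marginal probabilities, since the positive correlation is ignored by the marginal calculation.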

2.3. The Essence of Decorrelation

For fast or even instantaneous ambiguity resolution, the ambiguities are highly correlated and the search space is extremely elongated, which makes the search procedure in integer least-squares inefficient. To improve search efficiency, a decorrelation approach proposed by Teunissen (Reference Teunissen1993) is now widely used in GNSS ambiguity estimation. The combination of decorrelation and integer least-squares is known as the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA). The decorrelation procedure not only improves ILS efficiency, but also improves the performance of the IR and IB estimators.

The basic idea of the decorrelation is transforming $\hat a$ and $Q_{\hat a\hat a} $ with an invertible transformation, given as:

(9) $$\hat z = Z^T \hat a,Q_{\hat z\hat z} = Z^T Q_{\hat a\hat a} Z$$

Then, integer estimation is performed with $\hat z$ and $Q_{\hat z\hat z} $ . After the best integer candidate $\breve{z} $ is identified, the best integer candidate $\breve {a} $ can also be obtained by performing an inverse transformation:

(10) $$\breve {a} = Z^{ - T} \breve {z}, Q_{\breve {a} \breve {a}} = Z^{ - T} Q_{\breve {z} \breve {z}} Z^{ - 1} $$

Equations (9) and (10) indicate the transformation matrix Z has to be invertible. Besides this, there are two conditions to be an admissible ambiguity transformation (Teunissen, Reference Teunissen1995): integer matrix and volume preserving.

  • The integer matrix condition means $Z \in {\mathbb Z}^{n \times n}$ and $Z^{-1} \in {\mathbb Z}^{n \times n}$. This condition guarantees that the transformation preserves the integer nature of the ambiguity parameters.

  • The volume preserving refers to $ \vert {Q_{\hat a\hat a}} \vert = \vert {Q_{\hat z\hat z}} \vert $ . This condition guarantees $ \Vert {\hat a - \breve {a}} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z - \breve {z}} \Vert _{Q_{\hat z\hat z}} ^2 $ . Hence, the transformation does not change the search result.

In decorrelation, a third condition on the selection of Z is that the off-diagonal entries of $Q_{\hat z\hat z} $ are no larger than their counterparts in $Q_{\hat a\hat a} $ . This condition ensures that the transformation decorrelates the ambiguities rather than increasing their correlation (Xu et al., Reference Xu, Cannon and Lachapelle1995).

According to the second condition, |Z| = ±1; the transformation matrix Z is therefore a unimodular matrix (Cassels, Reference Cassels2012). Most decorrelation methods construct the unimodular matrix as a triangular matrix with its pivotal entries equal to 1. By Cramer's rule (e.g. see Strang and Borre, Reference Strang and Borre1997), an integer unimodular matrix also has an integer inverse.

Decorrelation can be implemented by two distinct methods: the integer Gaussian transformation (Teunissen, Reference Teunissen1993) and the Lenstra-Lenstra-Lovász (LLL) method (Hassibi and Boyd, Reference Hassibi and Boyd1998). The integer Gaussian transformation employs a series of elementary Gaussian transformations, each decorrelating one entry of $Q_{\hat a\hat a} $ ; the details are discussed in De Jonge and Tiberius (Reference De Jonge and Tiberius1996). The LLL method employs a vector-based reduction, which is a modified Gram-Schmidt orthogonalisation; details can be found in Grafarend (Reference Grafarend2000) and Xu (Reference Xu2001). It is noted that the integer Gaussian transformation method also involves a permutation procedure to flatten the spectrum of conditional variances, which further improves search efficiency (Teunissen, Reference Teunissen1995). After the permutation, the transformation Z is not necessarily triangular any longer, but it is still a unimodular integer matrix. Recently, the importance of permutation has been systematically studied: Xu et al. (Reference Xu, Shi and Liu2012) compared the impact of different permutation strategies on decorrelation performance. The permutation procedure has also been applied to the LLL method, e.g. Jazaeri et al. (Reference Jazaeri, Amiri-Simkooei and Sharifi2012), but its performance is still not as good as that of the integer Gaussian transformation (Jazaeri et al., Reference Jazaeri, Amiri-Simkooei and Sharifi2014).
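One step of the integer Gaussian transformation can be sketched in two dimensions. The vc-matrix values below are assumed (strongly correlated ambiguities), and the update $Q_z = Z^T Q Z$ with the unimodular matrix Z carrying the integer multiplier −μ is written out explicitly for the 2 × 2 case:

```python
# One integer Gaussian decorrelation step in 2-D; the vc-matrix values
# below are assumed for illustration (strongly correlated ambiguities).
Q = [[4.25, 3.96],
     [3.96, 4.25]]

mu = round(Q[0][1] / Q[0][0])        # integer Gauss multiplier
# Apply Z = [[1, -mu], [0, 1]] as Q_z = Z^T Q Z, expanded for the 2x2 case:
q11 = Q[0][0]
q12 = Q[0][1] - mu * Q[0][0]
q22 = Q[1][1] - 2 * mu * Q[0][1] + mu * mu * Q[0][0]
Qz = [[q11, q12], [q12, q22]]

det = Q[0][0] * Q[1][1] - Q[0][1] ** 2       # |Q|
detz = Qz[0][0] * Qz[1][1] - Qz[0][1] ** 2   # |Q_z| -- volume preserving
print(Qz)
```

The off-diagonal entry shrinks from 3.96 to −0.29 while the determinant is unchanged, consistent with the admissibility conditions above.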

A two-dimensional example of ambiguity decorrelation is illustrated in Figure 2. The figure shows the 95% confidence regions of $Q_{\hat a\hat a} $ and $Q_{\hat z\hat z} $ , which reflect the impact of decorrelation on the distribution of $\hat a$ . The volumes of the two confidence regions are the same and the two regions contain the same number of integer candidates. The confidence region of $Q_{\hat a\hat a} $ , which has a larger minimum bounding rectangle, is thus more difficult to search. The figure also shows an example of $\hat a$ and the corresponding $\hat z$ . It can be proven that the transformation does not change their distances to the origin; therefore, $ \Vert {\hat a} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z} \Vert _{Q_{\hat z\hat z}} ^2 $ .

Figure 2. A two-dimensional example of ambiguity decorrelation with the integer Gaussian transformation method.

3. SUCCESS RATE COMPUTATION METHODS

The previous section has introduced the concept of integer estimation success rate. In this section we focus on the computational aspect of success rate.

3.1. Success Rate of Integer Rounding Estimator

For the integer rounding estimator, the pull-in region $S_{IR,z}$ is defined by Equation (2). In the scalar case, the success rate of the integer rounding estimator can be expressed as:

(11) $$P(\breve {a} _{IR} = a) = \;\int_{ - 0.5}^{0.5} {\,f_{\hat a} (x - a)dx = 2\Phi \left(\displaystyle{1 \over {2\sigma _{\hat a}}} \right)} - 1$$

where $\sigma _{\hat a} = \sqrt {Q_{{\hat a}{\hat a}}} $ . The function Φ(x) is the Cumulative Distribution Function (CDF) of the standard normal distribution, defined as:

(12) $$\Phi (x) = \; \int_{ - \infty} ^x {\displaystyle{1 \over {\sqrt {2\pi}}} \exp \left\{ - \displaystyle{1 \over 2}z^2 \right\} dz} $$

In the multi-dimensional case, the IR success rate depends on the full $Q_{\hat a\hat a} $ . If $Q_{\hat a\hat a} $ is a diagonal matrix, the IR success rate can be computed dimension by dimension using Equation (11); otherwise, it is difficult to calculate the IR success rate directly, even though $S_{IR,z}$ is a regular region. In this case, a lower bound of the IR success rate can be obtained by ignoring the off-diagonal entries of $Q_{\hat a\hat a} $ : once $Q_{\hat a\hat a} $ is reduced to a diagonal matrix, the corresponding success rate can be computed dimension-wise. In this way, we obtain a lower bound of the IR success rate, expressed as (Teunissen, Reference Teunissen1998b):

(13) $$\underline {P(\breve {a} _{IR} = a)} = \prod\limits_{i = 1}^n {\left( {\int_{ - {1 \over 2}}^{{1 \over 2}} {\,f_{\hat a_i}} (x - a)dx} \right) = \prod\limits_{i = 1}^n {\left(2\Phi \left(\displaystyle{1 \over {2\sigma _{\hat a_i}}} \right) - 1\right)}} $$

where $f_{\hat a_i} (x)$ is the marginal PDF of $f_{\hat a} (x)$ subject to the ith dimension. $\underline {P(\breve {a} _{IR} = a)} $ means the lower bound of IR success rate.
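Equation (13) uses only the diagonal of $Q_{\hat a\hat a} $ , so it is straightforward to evaluate. In this sketch the ambiguity standard deviations are assumed values in cycles:

```python
import math

def Phi(x):
    """Standard normal CDF, Equation (12)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# IR lower bound, Equation (13); the standard deviations (square roots of
# the diagonal entries of Q_a^a^) are assumed values in cycles.
sigmas = [0.10, 0.15, 0.12]
lower = 1.0
for s in sigmas:
    lower *= 2.0 * Phi(1.0 / (2.0 * s)) - 1.0
print(lower)
```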

The diagonalised vc-matrix $Q'_{\hat a\hat a} $ contains only the marginal probability distribution information, so it is known as the marginal vc-matrix. The probability distributions of $\hat a$ with the full and marginal vc-matrices are depicted in Figure 3, which shows the PDFs of $Q_{\hat a\hat a} $ and $Q'_{\hat a\hat a} $ in the left and right panels respectively. The ellipses are the 95% confidence ellipses of the two vc-matrices and the dashed lines are the bounds of the ellipses. Due to the loss of correlation information, the PDF in the right panel is more spread out than the one in the left panel. The blue squares are the IR pull-in regions. According to the figure, a float solution following the distribution in the right panel is less likely to fall in the IR pull-in region, so the IR success rate calculated with the PDF in the right panel is lower than the actual IR success rate. The more diagonal $Q_{\hat a\hat a} $ is, the closer the lower bound is to the true IR success rate.

Figure 3. Two-dimensional example of integer rounding success rate calculated with full vc-matrix $Q_{\hat a\hat a} $ (left) and marginal vc-matrix $Q'_{\hat a\hat a} $ (right).

3.2. Success Rate of Integer Bootstrapping Estimator

Defining the conditional vector $\hat a' = [\hat a_1, \hat a_{2 \vert 1}, \cdots, \hat a_{n \vert N} ]^T $ , the PDF of $\hat a$ can be expressed with the conditional vector PDF $f_{\hat a'} (x)$ , given as:

(14) $$f_{\hat a} (x) = \displaystyle{1 \over {\sqrt {\left \vert {Q_{\hat a\hat a}} \right \vert (2\pi )^n}}} \exp \left\{ - \displaystyle{1 \over 2} \Vert x \Vert _{Q_{\hat a\hat a}} ^2 \right\} = \displaystyle{1 \over {\sqrt { \vert D \vert (2\pi )^n}}} \exp \left\{ - \displaystyle{1 \over 2} \Vert {x'} \Vert _D^2 \right\} = f_{\hat a^{\prime}} (x')$$

where $x' = L^{-1}x$ is the conditional counterpart of x. In Equation (14), $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $ since |L| = 1. The equation indicates that it is possible to calculate the IB success rate dimension by dimension, since $\hat a'$ has the diagonal vc-matrix D. According to the definition of the IB pull-in region, the IB success rate can be expressed as:

(15) $$P(\breve {a} _{IB} = a) = P\left(\bigcap\limits_{i = 1}^n \left\{ \left\vert {\hat a_{i \vert I} - a_i} \right\vert \le \displaystyle{1 \over 2}\right\}\right)$$

Substituting the conditional variances to the equation, the IB success rate can be calculated by (Teunissen, Reference Teunissen1998b):

(16) $$P(\breve {a} _{IB} = a) = \prod\limits_{i = 1}^n {\left( {\int_{ - {1 \over 2}}^{{1 \over 2}} {\,f_{\hat a^{\prime}_i}} (x - a)dx} \right) = \prod\limits_{i = 1}^n {\left(2\Phi \left(\displaystyle{1 \over {2\sigma _{\hat a_i \vert I}}} \right) - 1\right)}} $$

where $f_{\hat a'_i} (x)$ is the marginal PDF of $f_{\hat a'} (x)$ subject to the ith dimension, or equivalently the PDF of $f_{\hat a_i} (x)$ conditioned on dimensions $i - 1, i - 2, \cdots, 1$.
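Equation (16) can be sketched in two dimensions. The vc-matrix is hypothetical, with the conditional variances taken from its $LDL^T$ decomposition:

```python
import math

def Phi(x):
    """Standard normal CDF, Equation (12)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# IB success rate, Equation (16), in 2-D; the vc-matrix is assumed.
Q = [[0.04, 0.02],
     [0.02, 0.05]]
d1 = Q[0][0]                           # conditional variances from LDL^T
d2 = Q[1][1] - Q[0][1] ** 2 / Q[0][0]
p_ib = 1.0
for d in (d1, d2):
    p_ib *= 2.0 * Phi(1.0 / (2.0 * math.sqrt(d))) - 1.0
print(p_ib)
```

For this assumed matrix both conditional variances equal 0.04, giving an IB success rate of roughly 0.975.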

In contrast to the lower bound of the IR success rate (see Equation (13)), there is no approximation in the IB success rate calculation. A two-dimensional example of the IB success rate is depicted in Figure 4. The figure shows $f_{\hat a} (x - a)$ and $f_{\hat a'} (x - a)$ respectively. The ellipses are the 95% confidence ellipses of $\hat a$ and $\hat a'$ respectively, and the dashed lines show their bounds. The areas of the two ellipses are identical since $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $ . The blue parallelogram and square are the IB pull-in regions of $\hat a$ and $\hat a'$ . The parallelogram can be viewed as a distorted square, the distortion being caused by cross correlation between ambiguity components; the correlation disappears after replacing $\hat a$ by its conditional version $\hat a'$ . Equation (14) indicates that the PDFs $f_{\hat a} (x - a)$ and $f_{\hat a'} (x - a)$ are equivalent, so the IB success rate computed from $\hat a$ and from $\hat a'$ is the same. The right panel also shows the precision improvement of the conditional variance in the second dimension, and confirms that the IR success rate calculated with Equation (13) is a lower bound. Meanwhile, the IB success rate can be adopted as an upper bound of the IR success rate:

(17) $$\overline {P(\breve {a} _{IR} = a)} = P(\breve {a} _{IB} = a)$$

Figure 4. Two-dimensional example of integer bootstrapping success rate. The left panel shows $f_{\hat a} (x - a)$ and corresponding IB pull-in region. The right panel shows $f_{\hat a'} (x - a)$ and corresponding IB pull-in region.

The IB success rate can be accurately calculated, but it is non-unique, since the volume-preserving decomposition $ \vert Q_{\hat a\hat a} \vert = \vert D \vert $ is non-unique. For example, the conditional vector may also start from the nth dimension; the conditional variance matrix D in that case differs from the one starting from the first dimension. Different conditional variances correspond to different conditional distributions, so the IB success rate also differs. The most popular way to calculate the IB success rate is to sort the diagonal entries of $Q_{\hat a\hat a} $ in ascending order. Xu et al. (Reference Xu, Shi and Liu2012) also examined a more sophisticated sorting algorithm called Vertical Bell Labs Layered Space-Time (V-BLAST) and reported better performance at a heavier computational burden.

Although a unique IB success rate does not exist, it is still possible to identify the best IB success rate, which serves as an upper bound of the IB success rate. A volume-preserving transformation can be constructed to transform $Q_{\hat a\hat a} $ to $ \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} I_n $ , since $ \vert { \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} I_n} \vert = \vert {Q_{\hat a\hat a}} \vert $ . In this case, the confidence region of $Q_{\hat a\hat a} $ is transformed from a hyper-ellipsoid to a hypersphere. After the transformation, the variance of each component is $\sigma _i^2 = \vert {Q_{\hat a\hat a}} \vert ^{\textstyle{1 \over n}} $ , and the Ambiguity Dilution Of Precision (ADOP) can be defined as (Teunissen, Reference Teunissen1997):

(18) $$ADOP = \left \vert {Q_{\hat a\hat a}} \right \vert ^{\textstyle{1 \over {2n}}} $$

With the ADOP, the upper bounds of the IB success rate can be calculated as (Teunissen, Reference Teunissen1997; Teunissen and Odijk, Reference Teunissen and Odijk1997; Teunissen, Reference Teunissen2003):

(19) $$P(\breve {a} _{IB} = a) \le \left(2\Phi \left(\displaystyle{1 \over {2ADOP}}\right) - 1\right)^n $$

The IB success rate calculated with ADOP is an invariant upper bound of the IB success rate. Its proof can be found in Teunissen (Reference Teunissen2003).
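Equations (18) and (19) can be sketched for an assumed two-dimensional vc-matrix:

```python
import math

def Phi(x):
    """Standard normal CDF, Equation (12)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# ADOP (Equation (18)) and the IB upper bound (Equation (19)); Q is assumed.
Q = [[0.04, 0.02],
     [0.02, 0.05]]
n = 2
detQ = Q[0][0] * Q[1][1] - Q[0][1] ** 2
adop = detQ ** (1.0 / (2 * n))
upper = (2.0 * Phi(1.0 / (2.0 * adop)) - 1.0) ** n
print(adop, upper)
```

For this assumed matrix the ADOP evaluates to 0.2 cycles.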

3.3. Success Rate of Integer Least-squares Estimator

Previous analysis indicates that integrating $f_{\hat a} (x)$ even over a regular region can be difficult, so calculating the ILS success rate is more difficult still, since its pull-in region is more complicated. An analytical ILS success rate is difficult to find, so a numerical solution is preferred. The numerical (Monte Carlo) method is, however, a computationally expensive way to obtain the ILS success rate and cannot easily meet the requirements of real-time applications. Alternatively, easy-to-calculate upper and lower bounds are sufficient in some applications. In this section, a number of upper and lower bounds of the ILS success rate are analysed, including the integer bootstrapping-based lower bound, the ellipsoidal upper/lower bounds, the eigenvalue-based upper/lower bounds and the integration region-based upper bound.

3.3.1. Lower Bound Based on Integer Bootstrapping

Integer least-squares achieves the maximum success rate for a given vc-matrix (Teunissen, Reference Teunissen1999), so the ILS success rate is always higher than (or equal to) the IB success rate. Although the ILS success rate is difficult to compute, the IB success rate can be calculated easily and exactly. Hence, the IB success rate can be used as a lower bound of the ILS success rate:

(20) $$\underline {P_{s,ILS}} = P_{s,IB} = \prod\limits_{i = 1}^n \left(2\Phi \left(\displaystyle{1 \over {2\sigma _{\hat a_i \vert I}}} \right) - 1\right)$$

Combined with the decorrelation procedure, the IB success rate is considered as a tight lower bound of the ILS success rate (Thomsen, Reference Thomsen2000; Verhagen Reference Verhagen2003; Feng and Wang Reference Feng and Wang2011; Verhagen et al., Reference Verhagen, Li and Teunissen2013).

3.3.2. Ellipsoidal Upper and Lower Bounds

Although the ILS pull-in region is complicated, it is still possible to approximate it with an ellipsoid. Hassibi and Boyd (Reference Hassibi and Boyd1998) proposed an upper and lower bound of the ILS success rate based on ellipsoids, denoted as ellipsoidal upper and lower bounds in their discussion. The ellipsoids are actually spheres in the $Q_{\hat a\hat a} $ spanned space, so the key is to find out the radius of the spheres.

The basic idea of the ellipsoidal upper bound is to construct a hypersphere with the same volume as the ILS pull-in region. The volume of an n-dimensional sphere can be calculated as:

(21) $$V = \alpha _n r^n = \displaystyle{{\pi ^{{n \over 2}}} \over {\Gamma \left(\displaystyle{n \over 2} + 1\right)}}r^n $$

where V is the volume of the sphere and Γ(n) is the gamma function, defined recursively by Γ(1) = 1, Γ(n + 1) = nΓ(n) and $\Gamma (1/2) = \sqrt \pi $ . The pull-in region $S_{ILS,0}$ has volume 1 in the ambiguity space, which corresponds to a volume of $\displaystyle{1 \over {\sqrt{ \vert {Q_{\hat a\hat a}} \vert }}}$ in the space normalised by $Q_{\hat a\hat a} $ . If the integration region is a ball of this volume, its radius is $r = \Big(\displaystyle{1 \over {\alpha _n \sqrt{ \vert {Q_{\hat a\hat a}} \vert }}}\Big)^{\textstyle{1 \over n}} $ . Since $ \Vert {\hat a - a} \Vert _{Q_{\hat a\hat a}} ^2 $ follows a χ 2 (n, 0) distribution, where n and 0 are the degrees of freedom and the non-centrality parameter respectively, the upper bound of the ILS success rate can be expressed as (Hassibi and Boyd, Reference Hassibi and Boyd1998):

(22) $$\overline {P_{s,ILS}} = P\left(\chi ^2 (n,0) \lt \left(\displaystyle{1 \over {\alpha _n \sqrt{ \vert Q_{\hat a\hat a} \vert }}} \right)^{{2 \over n}} \right)$$

Substituting ADOP into the equation, then the equation can be rewritten as (Teunissen, Reference Teunissen2000):

(23) $$\overline {P_{s,ILS}} = P\left(\chi ^2 (n,0) \lt \displaystyle{{\Gamma \left(\displaystyle{n \over 2} + 1\right)^{{2 \over n}}} \over {\pi \, ADOP^2}} \right)$$

The upper bound based on the pull-in region approximation was examined and reported to be working well (Thomsen, Reference Thomsen2000; Verhagen, Reference Verhagen2003; Feng and Wang, Reference Feng and Wang2011).

The lower bound of the ILS success rate is obtained from the inscribed ellipsoid of the ILS pull-in region. As discussed above, the boundaries of the ILS pull-in region are formed by half-spaces perpendicular to the integer vectors c = z 1 − z 2, z 1, z 2 ∈ ${\mathbb Z}^n$ and passing through the points $\displaystyle{1 \over 2}c$ . The minimum distance between two integer vectors, $d_{min} = \min \Vert c \Vert _{Q_{\hat a\hat a}} $ , can then be found. Taking $\displaystyle{1 \over 2}d_{min} $ as the radius of the inscribed ellipsoid, the lower bound of the ILS success rate is given as (Hassibi and Boyd, Reference Hassibi and Boyd1998; Teunissen, Reference Teunissen1998b)

(24) $$\underline {P_{s,ILS}} = P(\chi ^2 (n,0) \lt \displaystyle{1 \over 4}d_{min} ^2 )$$
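For n = 2 the chi-square CDF has the closed form $P(\chi^2(2,0) \lt x) = 1 - e^{-x/2}$, so both ellipsoidal bounds can be sketched without a statistics library. The vc-matrix is assumed, and $d_{min}$ is found by brute force over nearby integer vectors:

```python
import itertools
import math

# Ellipsoidal bounds (Equations (22) and (24)) in 2-D, where the chi-square
# CDF is closed-form: P(chi2(2,0) < x) = 1 - exp(-x/2). Q is assumed.
Q = [[0.04, 0.02],
     [0.02, 0.05]]
detQ = Q[0][0] * Q[1][1] - Q[0][1] ** 2
Qinv = [[ Q[1][1] / detQ, -Q[0][1] / detQ],
        [-Q[0][1] / detQ,  Q[0][0] / detQ]]

# Upper bound: ball with the pull-in region's volume; alpha_2 = pi.
r2 = 1.0 / (math.pi * math.sqrt(detQ))
upper = 1.0 - math.exp(-r2 / 2.0)

# Lower bound: d_min^2 over nearby nonzero integer vectors c.
d2min = min(c[0] * (Qinv[0][0] * c[0] + Qinv[0][1] * c[1]) +
            c[1] * (Qinv[1][0] * c[0] + Qinv[1][1] * c[1])
            for c in itertools.product(range(-2, 3), repeat=2) if c != (0, 0))
lower = 1.0 - math.exp(-d2min / 8.0)
print(lower, upper)
```

The printed pair brackets the (unknown) ILS success rate for this assumed matrix.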

Figure 5. A two-dimensional example of ILS success rate upper and lower bound ellipsoidal integration region.

3.3.3. Upper and Lower Bounds Based on Eigenvalue

Teunissen (Reference Teunissen1998b; Reference Teunissen2000a; Reference Teunissen2000b) proposed a pair of upper and lower bounds of the ILS success rate based on eigenvalues. Instead of approximating the ILS success rate by bounding the integration region, these bounds are based on approximating the probability distribution. Two positive definite matrices can be compared through their quadratic forms: if $f^T Q_1 f \ge f^T Q_2 f$, $\forall f \in {\mathbb R}^n $, then Q 1 ≥ Q 2. The $\hat a$ with the smaller vc-matrix always has the larger ILS success rate (Teunissen, Reference Teunissen2000a; Reference Teunissen2000b).

The idea of these bounds is quite similar to the ADOP. The determinant of the vc-matrix is the product of its eigenvalues:

(25) $$\left \vert {Q_{\hat a\hat a}} \right \vert = \prod\limits_{i = 1}^n {\lambda _i} $$

where $\lambda = [\lambda_1, \lambda_2, \cdots, \lambda_n]^T$ is the eigenvalue vector of $Q_{\hat a\hat a} $ . The ADOP is the geometric mean of the ambiguity standard deviations (Teunissen, Reference Teunissen1997; Odijk and Teunissen, Reference Odijk and Teunissen2008), so a pair of upper and lower bounds of $Q_{\hat a\hat a} $ can be identified from its eigenvalues. Defining $\lambda_{max} = \max\{\lambda_i\}$ and $\lambda_{min} = \min\{\lambda_i\}$, the corresponding upper and lower bounds of $Q_{\hat a\hat a} $ can be constructed as $Q_1 = \lambda_{max} I_n$ and $Q_2 = \lambda_{min} I_n$. In this study, an auxiliary matrix $Q_3 = ADOP^2 I_n$ is also constructed for comparison purposes. According to the previous analysis, $ \vert Q_3 \vert = \vert Q_{\hat a\hat a} \vert $ .

A two-dimensional example is presented in Figure 6 to show the relationship between $Q_{\hat a\hat a} $ , $Q_1$, $Q_2$ and $Q_3$. The three constructed matrices are scaled identity matrices, so their confidence regions are circles in the two-dimensional case. The volume of $Q_3$'s confidence region is the same as that of $Q_{\hat a\hat a} $ ; the confidence circles of $Q_1$ and $Q_2$ are the circumscribed and inscribed circles of $Q_{\hat a\hat a} $ 's confidence ellipse. The figure shows that $Q_1$ has poorer precision and $Q_2$ has better precision than $Q_{\hat a\hat a} $ . The ILS success rates of $Q_1$ and $Q_2$ can be calculated as:

(26) $$\eqalign{& P(\breve {a} _{ILS}^{Q_1} = a) = \left(2\Phi \left(\displaystyle{1 \over {2\sqrt {\lambda _{max}}}} \right) - 1\right)^n \cr & P(\breve {a} _{ILS}^{Q_2} = a) = \left(2\Phi \left(\displaystyle{1 \over {2\sqrt {\lambda _{min}}}} \right) - 1\right)^n} $$

Figure 6. A two-dimensional example of the confidence ellipse of $Q_{\hat a\hat a} $ and its upper and lower bound based on eigenvalue.

Then, the ILS success rate of $Q_{\hat a\hat a} $ can be bounded as:

(27) $$\left(2\Phi \left(\displaystyle{1 \over {2\sqrt {\lambda _{max}}}} \right) - 1\right)^n \le P(\breve {a} _{ILS} = a) \le \left(2\Phi \left(\displaystyle{1 \over {2\sqrt {\lambda _{min}}}} \right) - 1\right)^n $$
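These bounds require only the extreme eigenvalues of $Q_{\hat a\hat a} $ . A small sketch of Equation (27), with a hypothetical function name:

```python
import numpy as np
from scipy.stats import norm

def eigenvalue_bounds(Q):
    """Equation (27): eigenvalue-based lower and upper bounds of
    the ILS success rate for a symmetric vc-matrix Q."""
    lam = np.linalg.eigvalsh(Q)  # eigenvalues of a symmetric matrix
    n = lam.size
    lower = (2 * norm.cdf(1 / (2 * np.sqrt(lam.max()))) - 1) ** n
    upper = (2 * norm.cdf(1 / (2 * np.sqrt(lam.min()))) - 1) ** n
    return lower, upper
```

When the confidence ellipsoid is strongly elongated ($\lambda_{max} \gg \lambda_{min}$), the two bounds separate widely, which matches the loose behaviour reported in Section 4.3.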

The ILS success rate of $Q_3$ can be calculated with Equation (19). It is an upper bound of the integer bootstrapping success rate and serves as an approximation of the ILS success rate as well (Verhagen, 2005; Verhagen et al., 2013).
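Equation (19) is not repeated here; assuming the usual ADOP-based form $P \approx (2\Phi(1/(2\,ADOP)) - 1)^n$ with $ADOP = \vert Q_{\hat a\hat a} \vert^{1/(2n)}$, a sketch is:

```python
import numpy as np
from scipy.stats import norm

def adop(Q):
    """ADOP = det(Q)^(1/(2n)); slogdet avoids under/overflow of the
    raw determinant in high dimensions."""
    n = Q.shape[0]
    _, logdet = np.linalg.slogdet(Q)
    return np.exp(logdet / (2 * n))

def adop_success_rate(Q):
    """ADOP-based success rate approximation (assumed Eq. (19) form)."""
    return (2 * norm.cdf(1 / (2 * adop(Q))) - 1) ** Q.shape[0]
```

Because the determinant of $Q_{\hat a\hat a} $ is unchanged by an admissible Z-transformation, this value is decorrelation-invariant.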

3.3.4. Upper Bounds Based on Bounding Integration Region

Besides the ellipsoidal bounding of the integration region, other integration region bounding methods are available, as discussed below.

Teunissen (1998a; 1998b) proposed an upper bound of the ILS success rate based on a reduction of the ILS pull-in region. The ILS pull-in region is bounded by pairs of parallel planes orthogonal to the integer vectors c. There are at most $2^n - 1$ pairs of effective bounding planes, since one integer vector has at most $2^n - 1$ pairs of adjacent integer vectors (Cassels, 2012). The definition of the ILS pull-in region (see Equation (6)) shows that it can also be interpreted as the intersection of $2^n - 1$ bands centred at a with width $ \Vert c \Vert _{Q_{\hat a\hat a}} $ . If fewer bands are used in the intersection, a looser upper-bound region $U_a \supset S_a $ is obtained.

The ILS pull-in region can be written as:

(28) $$S_{ILS,0} = \left\{ x \in {\opf R}^n \vert \left \vert {\displaystyle{{c^T Q_{\hat a\hat a}^{ - 1} x} \over { \Vert c \Vert _{Q_{\hat a\hat a}} ^2}}} \right \vert \le \displaystyle{1 \over 2},\forall c \in {\rm {\opf Z}}^n \right\} $$

The left-hand side of the inequality defines the random variable:

(29) $$v_i = \displaystyle{{c_i^T Q_{\hat a\hat a}^{ - 1}} \over { \Vert {c_i} \Vert _{Q_{\hat a\hat a}} ^2}} \hat a$$

where $c_i \in {\opf Z}^n$. If q independent integer vectors are chosen, the vector v can be defined as $v = [v_1, v_2, \cdots, v_q]^T$.

Applying the variance propagation law, the corresponding vc-matrix of v can be written as:

(30) $$\sigma _{v_i v_j} = \displaystyle{{c_i^T Q_{\hat a\hat a}^{ - 1} c_j} \over { \Vert {c_i} \Vert _{Q_{\hat a\hat a}} ^2 \Vert {c_j} \Vert _{Q_{\hat a\hat a}} ^2}} $$

The vc-matrix $Q_{vv}$ is a q × q symmetric matrix, so an LDL^T decomposition can be applied and the corresponding success rate can be calculated as:

(31) $$\overline {P_{s,ILS}} = \prod\limits_{i = 1}^q {\left(2\Phi \left(\displaystyle{1 \over {2\sigma _{v_{i \vert I} v_{i \vert I}}}} \right) - 1 \right)} $$

The conditional variances can be obtained by an L^T DL decomposition, similar to integer bootstrapping. In practice, the number of integer vectors q can be chosen as $n \le q \le 2^n - 1$ (Verhagen, 2003). When $q = 2^n - 1$, Equation (31) can be used to calculate the ILS success rate precisely, but $2^n - 1$ grows exponentially with the ambiguity dimension, so the ILS success rate remains difficult to calculate in high-dimensional cases. The upper bound becomes closer to the true ILS success rate as q increases. A two-dimensional example of bounding the ILS success rate with the band intersection method is shown in Figure 7. In the two-dimensional case, the ILS pull-in region is the intersection of three bands; if q = 2, the ILS pull-in region is approximated by the intersection of two bands (the blue region).
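Equations (28)-(31) can be sketched as follows. The function name is hypothetical, and the conditional standard deviations are taken from a Cholesky factorisation (the LDL^T ordering) rather than the L^T DL ordering used in the paper, so this is illustrative only:

```python
import numpy as np
from scipy.stats import norm

def band_intersection_upper_bound(Q, C):
    """Band-intersection upper bound of the ILS success rate.
    C is an (n, q) matrix whose columns are the chosen integer
    vectors c_i; no two columns may be collinear."""
    Qi = np.linalg.inv(Q)
    # ||c_i||^2 in the Q^{-1} metric, for each column of C
    norms2 = np.einsum('ij,ik,kj->j', C, Qi, C)
    # vc-matrix of v, Equation (30)
    Qvv = (C.T @ Qi @ C) / np.outer(norms2, norms2)
    # conditional standard deviations via triangular factorisation
    cond_std = np.diag(np.linalg.cholesky(Qvv))
    # Equation (31): product of per-band probabilities
    return np.prod(2 * norm.cdf(1 / (2 * cond_std)) - 1)
```

With q = n and C the identity (the canonical integer vectors), the bound reduces to a bootstrapping-style product over the chosen bands; for a diagonal $Q_{\hat a\hat a} $ it then coincides with the exact ILS success rate.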

Figure 7. A two-dimensional example of the bounding ILS pull-in region using the band intersection method.

A practical issue of the band intersection method is the selection of the integer vectors. For any $c_i$, $c_j$ in the chosen set with i ≠ j, $c_i \ne \lambda c_j$ must hold, where λ is an arbitrary non-zero real number; that is, no two vectors in the set may be collinear. In the two-dimensional case, each integer has at most three pairs of adjacent integers, and the two members of each pair are collinear (e.g. [1, 0]^T and [−1, 0]^T in the figure). Only one member of each pair can be included in the set; thus the ILS pull-in region is the intersection of three bands, not six. Including collinear integer vectors would cause duplicated integrations and make the upper bound smaller than it should be.

4. NUMERICAL COMPARISONS

Different upper and lower bounds of the integer estimator success rate have been discussed, but the performance of these bounds is the issue truly of concern. Performance comparisons between these bounds have been studied extensively, for example by Verhagen (2003), Feng and Wang (2011) and Verhagen et al. (2013). However, the performance of the success rate bounds is still not fully understood. For example, the decorrelation procedure has been widely used in ambiguity estimation, but its impact on the success rate bounds has not yet been investigated. It is known that the ILS success rate is independent of the decorrelation procedure, but this does not mean that the bounds are decorrelation-invariant. In this section, the impact of the decorrelation procedure on the integer estimator success rate bounds is investigated.

4.1. Simulation Strategy

In order to examine the performance of the success rate bounds, a simulation-based comparison is carried out. The simulation scheme is briefly described in this section.

The medium-baseline model is adopted in this simulation. The least-squares method is used to estimate the float solution, based on single-epoch GPS observations. An elevation-dependent weighting strategy is used to capture the elevation-dependent observation noise and ionosphere noise, given as:

(32) $$w = (1 + 10e^{ - \textstyle{E \over {10}}} )^{ - \textstyle{1 \over 2}} $$

where w is the weight factor and E is the elevation angle in degrees.
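As a quick check of Equation (32), the weight rises monotonically from about 0·30 at the horizon towards 1 at zenith:

```python
import numpy as np

def elevation_weight(E):
    """Equation (32): elevation-dependent weight; E in degrees."""
    return (1 + 10 * np.exp(-E / 10)) ** -0.5
```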

In order to capture the impact of satellite geometry, a globally covering, evenly distributed ground tracking network with 15° × 15° spacing is simulated. Considering the orbital period of the navigation satellites, 24 hours of observation data are simulated for all monitor stations. Since the satellite geometry varies slowly, the observation interval is set to 1800 seconds (half an hour) to reduce the computational load. The resulting data set, comprising 12,600 epochs in total, describes the satellite geometry at different locations and times. For each epoch we calculate $\hat a$ and its vc-matrix $Q_{\hat a\hat a} $ , and compute the corresponding ILS success rate with the Monte Carlo method: for each $Q_{\hat a\hat a} $ in the data set, 100,000 samples following $N(0,Q_{\hat a\hat a} )$ are simulated. More samples yield higher precision in the Monte Carlo simulation; a detailed relationship between sample size and precision can be found in Verhagen et al. (2013). In this study, 100,000 samples are used as a trade-off between precision and computational efficiency. Single-frequency GPS data are simulated with σ P,z  = 10 cm, σ φ,z  = 1 mm and σ I,z  = 1 cm, where σ P,z , σ φ,z and σ I,z are the zenith-direction standard deviations of the code, phase and virtual ionosphere observations.
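The Monte Carlo step can be sketched as follows. The brute-force ILS solver and the sample count here are placeholders for illustration; production code would use the LAMBDA search and the full 100,000 samples:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)

def ils_bruteforce(x, Qi, radius=2):
    """Nearest integer vector to x in the Q^{-1} metric, searched
    in a small box around round(x) (low dimensions only)."""
    base = np.round(x)
    best, best_d = None, np.inf
    for off in itertools.product(range(-radius, radius + 1), repeat=x.size):
        z = base + np.asarray(off)
        d = (x - z) @ Qi @ (x - z)
        if d < best_d:
            best, best_d = z, d
    return best

def monte_carlo_success_rate(Q, n_samples=10_000):
    """Fraction of simulated float solutions N(0, Q) that are
    fixed back to the true (zero) integer vector."""
    Qi = np.linalg.inv(Q)
    L = np.linalg.cholesky(Q)  # to draw correlated samples
    hits = 0
    for _ in range(n_samples):
        a_hat = L @ rng.standard_normal(Q.shape[0])
        hits += not ils_bruteforce(a_hat, Qi).any()  # fixed to zero?
    return hits / n_samples
```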

4.2. Evaluation of IR and IB Success Rate

The performance of the integer rounding and integer bootstrapping success rate bounds is first investigated and compared. In this study, the success rate calculated from Monte Carlo simulation is used as the reference, with 100,000 samples in each trial. In this case, the impact of simulation error on the success rate is typically smaller than 0·001 (Verhagen et al., 2013). The success rates and their bounds are compared before and after decorrelation.

The IR success rate and its bounds are presented in Figure 8. The two panels show the IR success rates calculated from the same samples before and after decorrelation. First, the figure shows that the decorrelation process significantly improves the IR success rate: the maximum IR success rate increases from about 0·4 to about 0·98 after decorrelation. The IB success rate serves as an upper bound of the IR success rate, and their maximum difference is reduced from about 0·7 to 0·15 after decorrelation. The calculated IR success rate lower bound also becomes tighter after decorrelation: the maximum difference between the simulated IR success rate and its lower bound is reduced from 0·2 to less than 0·05. Therefore, decorrelation improves the IR success rate and makes its upper and lower bounds tighter. After decorrelation, the IR lower bound is a tight lower bound of the simulated IR success rate.

Figure 8. Upper and lower bound of integer rounding success rate before and after decorrelation.

The impact of decorrelation on the integer bootstrapping success rate is illustrated in Figure 9. The IB success rate also increases significantly after decorrelation: the minimum IB success rate rises from about 0·1 to 0·4. The discrepancy between the IB success rate and the ADOP-based upper bound decreases after decorrelation, with the maximum discrepancy falling from 0·8 to about 0·2; in most cases, the discrepancy is smaller than 0·1 after decorrelation. However, the improvement is contributed solely by the IB success rate, since the ADOP is invariant under the decorrelation procedure (Teunissen, 2003). In conclusion, the decorrelation procedure can improve the IB success rate significantly, and the ADOP-based upper bound is a tight upper bound of the IB success rate after decorrelation. The results also indicate that the improvement a sorting strategy can bring to the IB success rate after decorrelation is limited, since the IB success rate cannot exceed the ADOP-based upper bound no matter which sorting strategy is applied.

Figure 9. Relationship between IB success rate and its ADOP-based upper bound before and after decorrelation.

4.3. Evaluation of ILS Success Rate

The ILS success rate is independent of the decorrelation procedure, which is why decorrelation can be used to accelerate the ILS. However, the upper and lower bounds of the ILS success rate are not necessarily invariant under decorrelation. In this study, three groups of upper and lower bounds and one approximation method are considered; these methodologies have already been discussed.

At first, the ellipsoidal upper and lower bounds are evaluated and the results are presented in Figure 10. The figure indicates the discrepancy between the ILS success rate and its ellipsoidal upper bound is smaller than 0·2 in most cases. However, it cannot be called a tight upper bound since the discrepancy reaches 0·4 in some cases. The ellipsoidal lower bound performs even worse: the discrepancy is larger than 0·1 even in the best case. The figure also indicates that the ellipsoidal upper and lower bounds are invariant under decorrelation, and this result is reasonable. For the ellipsoidal upper bound, the success rate is invariant because the volume of $Q_{\hat a\hat a} $ does not change during the decorrelation procedure. The lower bound also does not change because $ \Vert {\hat a - \breve {a}} \Vert _{Q_{\hat a\hat a}} ^2 = \Vert {\hat z - \breve{z}} \Vert _{Q_{\hat z\hat z}} ^2 $ , so d min does not change during decorrelation either. In conclusion, the ellipsoidal upper and lower bounds are not tight bounds of the ILS success rate, but they are invariant under decorrelation.
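The invariance of the $Q^{-1}$-metric distance, and hence of d min, under an admissible Z-transformation can be verified numerically (the matrix values below are illustrative only):

```python
import numpy as np

Q = np.array([[0.0865, -0.0364],
              [-0.0364, 0.0847]])   # illustrative float vc-matrix
Z = np.array([[1.0, 0.0],
              [1.0, 1.0]])          # integer entries, det(Z) = 1 (admissible)
Qz = Z.T @ Q @ Z                    # transformed vc-matrix

r = np.array([0.3, -0.2])           # some float-minus-integer residual
d_a = r @ np.linalg.inv(Q) @ r      # squared norm in the original metric
d_z = (Z.T @ r) @ np.linalg.inv(Qz) @ (Z.T @ r)  # after the transform
assert np.isclose(d_a, d_z)         # distances, and so d_min, agree
```

The determinant of Q is likewise preserved, which is why the ellipsoidal upper bound is also decorrelation-invariant.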

Figure 10. Ellipsoidal upper and lower bound of ILS success rate before and after decorrelation.

The eigenvalue-based upper and lower bounds are evaluated and the results are presented in Figure 11. The figure indicates that the eigenvalue-based ILS success rate bounds are also not tight. Before decorrelation, the upper bounds are close to one and the lower bounds close to zero for the majority of cases. The eigenvalues of $Q_{\hat a\hat a} $ determine the axes of the confidence ellipses. As shown in Figure 2, the shape of the confidence ellipse changes during decorrelation. Before decorrelation, the confidence ellipse is extremely elongated, so the eigenvalue-based upper and lower bounds are too crude to approximate the true ILS success rate. The performance of the eigenvalue-based bounds can be improved by decorrelation: the lower bound improves significantly and the minimum discrepancy is reduced to about 0·2. However, the bounds are still not tight even after decorrelation. Therefore, the eigenvalue-based upper and lower bounds are not tight bounds of the ILS success rate and are not recommended in practice.

Figure 11. Eigenvalue-based upper and lower bound of ILS success rate before and after decorrelation.

The performance of the band intersection upper bound and the IB lower bound is evaluated and the results are presented in Figure 12. In this study, the number of bands is q = n, where n is the ambiguity dimension; as discussed, a larger q means a tighter bound but also a heavier computational load. The figure shows the band intersection upper bound is tight before decorrelation, with a maximum discrepancy of about 0·2. After decorrelation, the band intersection upper bound increases, which makes it a less tight upper bound of the ILS success rate. The impact of decorrelation on the band intersection method is demonstrated in Figure 13: the bounding region and the ILS pull-in region are elongated in the high-correlation case, and with properly chosen bands the difference between the bounding region and the ILS pull-in region can be small. The decorrelation procedure gives the ILS pull-in region a better geometry, but also makes the band intersection region over-large. Since the PDF of $\hat a$ is similar around the vertices of the ILS pull-in region, a larger area difference also means a larger success rate discrepancy (Wang, 2015). On the other hand, the IB success rate also increases after decorrelation, which makes it a tight lower bound of the ILS success rate: the maximum difference between the IB and ILS success rates decreases from 0·85 to less than 0·05. After decorrelation, the IB success rate is the tightest lower bound of the ILS success rate.

Figure 12. Band intersection upper bound and IB lower bound of ILS success rate before and after decorrelation.

Figure 13. Integration bounding region versus the ILS pull-in region in high and low correlation case.

The ADOP-based IB success rate upper bound can also be used as an approximation of the ILS success rate. An examination of the ADOP-based ILS success rate approximation is presented in Figure 14. As discussed, the ADOP-based approximation is independent of the decorrelation procedure. For most cases, the discrepancy between the ADOP approximation and the ILS success rate varies between −0·1 and 0·2; the extreme discrepancies reach about 0·3.

Figure 14. ADOP-based ILS success rate approximation before and after decorrelation.

It can be seen that the success rate bounds may change when different decorrelation methods are applied. In this study, the integer Gaussian transformation method is used for decorrelation. However, it is not the only decorrelation method, so the performance of these success rate bounds under other decorrelation methods, e.g. LLL decorrelation, is still worth investigating.

5. CONCLUSIONS AND RECOMMENDATIONS

In our analysis, the success rates of integer rounding, integer bootstrapping and integer least-squares and their bounds have been systematically studied. According to the numerical results, decorrelation has a significant impact on the success rates and their bounds. Key findings of this study are summarised as follows. (1) The decorrelation procedure can improve the IR and IB success rates substantially, but cannot improve the ILS success rate. (2) Decorrelation reduces the discrepancy between the IR success rate and its lower bound; after decorrelation, the lower bound of the IR success rate becomes a tight lower bound of the true IR success rate. (3) Decorrelation also increases the IB success rate, with a maximum improvement of 0·8; after decorrelation, the IB success rate is closer to its ADOP-based upper bound. (4) The ellipsoidal upper and lower bounds are invariant under decorrelation, but they are not tight bounds of the ILS success rate. (5) The decorrelation procedure improves the eigenvalue-based ILS bounds, but they are still not tight after decorrelation. (6) The decorrelation procedure degrades the band intersection upper bound and improves the IB lower bound; after decorrelation, the IB success rate is the tightest lower bound of the ILS success rate, while without decorrelation the band intersection upper bound performs best among the ILS upper bounds. (7) The ADOP-based success rate upper bound is an approximation of the ILS success rate and its value does not change during decorrelation.

ACKNOWLEDGEMENT

This research was conducted with financial support from the Cooperative Research Centre for Spatial Information (CRC-SI) Project 1·01, 'New carrier phase processing strategies for achieving precise and reliable multi-satellite, multi-frequency GNSS/RNSS positioning in Australia'.

REFERENCES

Cassels, J.W.S. (2012). An Introduction to the Geometry of Numbers. Springer Science & Business Media.
De Jonge, P. and Tiberius, C.C.J.M. (1996). The LAMBDA method for integer ambiguity estimation: implementation aspects. Publications of the Delft Computing Centre, LGR-Series.
Feng, Y. and Wang, J. (2011). Computed success rates of various carrier phase integer estimation solutions and their comparison with statistical success rates. Journal of Geodesy, 85, 93–103.
Feng, S., Ochieng, W., Samson, J., Tossaint, M., Hernández-Pajares, M., Juan, J.M., Sanz, J., Aragón-Àngel, À., Ramos-Bosch, P. and Jofre, M. (2012). Integrity monitoring for carrier phase ambiguities. Journal of Navigation, 65, 41–58.
Grafarend, E.W. (2000). Mixed integer-real valued adjustment (IRA) problems: GPS initial cycle ambiguity resolution by means of the LLL algorithm. GPS Solutions, 4, 31–44.
Hassibi, A. and Boyd, S. (1998). Integer parameter estimation in linear models with applications to GPS. IEEE Transactions on Signal Processing, 46, 2938–2952.
Jazaeri, S., Amiri-Simkooei, A.R. and Sharifi, M.A. (2012). Fast integer least-squares estimation for GNSS high-dimensional ambiguity resolution using lattice theory. Journal of Geodesy, 86, 123–136.
Jazaeri, S., Amiri-Simkooei, A.R. and Sharifi, M.A. (2014). On lattice reduction algorithms for solving weighted integer least squares problems: comparative study. GPS Solutions, 18, 105–114.
Li, B. and Teunissen, P.J.G. (2011). High Dimensional Integer Ambiguity Resolution: A First Comparison between LAMBDA and Bernese. Journal of Navigation, 64, S192–S210.
Li, L., Li, Z., Yuan, H., Wang, L. and Hou, Y. (2015). Integrity monitoring-based ratio test for GNSS integer ambiguity validation. GPS Solutions.
Odijk, D. and Teunissen, P.J.G. (2008). ADOP in closed form for a hierarchy of multi-frequency single-baseline GNSS models. Journal of Geodesy, 82, 473–492.
Strang, G. and Borre, K. (1997). Linear Algebra, Geodesy, and GPS. Wellesley-Cambridge Press.
Teunissen, P.J.G. (1993). Least-Squares Estimation of the Integer GPS Ambiguities. Section IV, Theory and Methodology, IAG General Meeting, Beijing.
Teunissen, P.J.G. (1995). The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation. Journal of Geodesy, 70, 65–82.
Teunissen, P.J.G. (1997). A canonical theory for short GPS baselines. Part IV: Precision versus reliability. Journal of Geodesy, 71, 513–525.
Teunissen, P.J.G. (1998a). On the integer normal distribution of the GPS ambiguities. Artificial Satellites, 33, 49–64.
Teunissen, P.J.G. (1998b). Success probability of integer GPS ambiguity rounding and bootstrapping. Journal of Geodesy, 72, 606–612.
Teunissen, P.J.G. (1999). An optimality property of the integer least-squares estimator. Journal of Geodesy, 73, 587–593.
Teunissen, P.J.G. (2000a). ADOP based upperbounds for the bootstrapped and the least-squares ambiguity success rates. Artificial Satellites, 35, 171–179.
Teunissen, P.J.G. (2000b). The success rate and precision of GPS ambiguities. Journal of Geodesy, 74, 321–326.
Teunissen, P.J.G. (2001). GNSS ambiguity bootstrapping: Theory and application. Proceedings of International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation, 246–254.
Teunissen, P.J.G. (2003). An invariant upperbound for the GNSS bootstrapped ambiguity success rate. Journal of Global Positioning Systems, 2, 13–17.
Teunissen, P.J.G. and Odijk, D. (1997). Ambiguity dilution of precision: definition, properties and application. Proceedings of ION GPS-1997, 891–899.
Thomsen, H.E. (2000). Evaluation of upper and lower bounds on the success probability. Proceedings of ION GPS 2000, 183–188.
Verhagen, S. (2003). On the approximation of the integer least-squares success rate: which lower or upper bound to use. Journal of Global Positioning Systems, 2, 117–124.
Verhagen, S. (2005). The GNSS integer ambiguities: estimation and validation. Ph.D. Thesis, Delft University of Technology.
Verhagen, S., Li, B. and Teunissen, P.J.G. (2013). Ps-LAMBDA: Ambiguity success rate evaluation software for interferometric applications. Computers & Geosciences, 54, 361–376.
Wang, L. (2015). Reliability control in GNSS carrier-phase integer ambiguity resolution. Ph.D. Thesis, Queensland University of Technology.
Xu, P. (2001). Random simulation and GPS decorrelation. Journal of Geodesy, 75, 408–423.
Xu, P., Cannon, M.E. and Lachapelle, G. (1995). Mixed integer programming for the resolution of GPS carrier phase ambiguities. IUGG95 Assembly, 1–12.
Xu, P., Shi, C. and Liu, J. (2012). Integer estimation methods for GPS ambiguity resolution: an applications oriented review and improvement. Survey Review, 44, 59–71.