1. INTRODUCTION
High accuracy positioning with Global Navigation Satellite Systems (GNSS) requires the use of carrier phase measurements. These measurements are used in different ways, including the conventional approach for dynamic positioning referred to as Real Time Kinematic (RTK) and the more recent Precise Point Positioning (PPP) (Wang and Gao, 2006; Laurichesse and Mercier, 2007). The former employs at least two receivers operating simultaneously while the latter uses a single receiver. PPP has the potential advantage over the conventional method of being less expensive, thereby enabling widespread application, particularly in remote and developing parts of the world. Carrier phase measurements can be used directly as observations (e.g. in single frequency PPP) or through the derivation of observables based on the raw measurements (e.g. the double differenced observables in conventional RTK). In both cases, the determination of the correct corresponding integer number of carrier cycles (integer ambiguity) is the key to high accuracy positioning.
There are a number of methods currently used to resolve ambiguities. The basic procedure is to estimate the ambiguities as real-valued (float) quantities (a consequence of the residual errors) and then to determine the integer values through, for example, rounding or executing a search process. The well-known Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method is a combination of least-squares and a transformation to reduce the search space (Teunissen, 1993). However, the techniques currently used to determine the level of confidence (integrity) of the ambiguities (i.e. ambiguity validation) have a number of weaknesses. These approaches usually involve the construction of test statistics, characterisation of their distribution and definition of thresholds. Examples of these tests include the ratio, F-distribution, t-distribution and Chi-square distribution tests. It has been shown that none of these tests has a sound theoretical basis (Verhagen, 2004), and that there is no single method that can be used in all situations. Specifically, the conventional ratio test uses the ratio between the second best and best ambiguity candidates as the test statistic and adopts a fixed threshold. However, the use of a fixed threshold does not capture the major factors that impact the level of confidence (or success rate) associated with the resolved ambiguities. An alternative is to use Monte Carlo simulation as discussed below.
The Monte Carlo simulation approach is adopted in the Integer Aperture (IA) method for ambiguity validation (Verhagen, 2004; Teunissen and Verhagen, 2009a). The IA method defines a region of acceptable ambiguities. Note that the conventional ratio test has been shown theoretically to be an IA estimator because its reciprocal reflects the rate of success/failure of ambiguity resolution (Teunissen and Verhagen, 2004, 2009b). However, instead of using the fixed threshold (as in the conventional ratio test), Monte Carlo simulation is used to determine the success/failure rate for each reciprocal of the conventional ratio at the current epoch. This requires the simulation of a large number of normally distributed independent samples of float ambiguities. However, the assumption of independent normally distributed float ambiguities is difficult to justify. Furthermore, the need for significant computational resources for the online simulation of large samples (>100,000) and computation of the rates precludes the use of IA in real time.
This paper addresses the weaknesses above and proposes a new approach based on the distribution of the conventional ratio and numerical computation to calculate the confidence level of the best set of ambiguity candidates. The algorithm takes information from least-squares ambiguity resolution algorithms including LAMBDA to carry out the ratio (of the ambiguity residuals between second best and best set of candidates) test. The test statistic is the same as that used in the conventional ratio test. However, the distribution to determine the confidence level is described by a doubly non-central F distribution. Furthermore, a numerical algorithm is used for the real time computation of the threshold (i.e. confidence level) based on the new distribution.
It should be noted that the derivation of the confidence level effectively monitors the integrity of the best set of ambiguity candidates. Therefore, the overall approach adopted for integrity monitoring has two main steps: the ambiguity resolution and positioning stages. This two-step integrity monitoring approach has the benefit of providing two levels of protection to the user.
The next section gives the background of traditional ambiguity resolution and validation, and defines the notation used. This is followed by the derivation of the distribution of the conventional ratio and specification of the algorithms for the calculation of the confidence level. The new test is then applied to PPP, and the results and relevant discussions presented before the paper is concluded.
2. AMBIGUITY RESOLUTION AND VALIDATION
2.1. Ambiguity Resolution
Carrier phase ambiguity resolution is the key to high accuracy positioning with GNSS. Reliable ambiguity resolution is a function of several factors, the main ones being type of measurement, residual measurement errors, geometry and algorithm formulation. To have a good chance of reliable ambiguity resolution, the overall error budget in the observation is generally required to be at the half a cycle level with an uncertainty of less than a quarter of a cycle (Sauer, 2003). This requirement can be achieved through the use of:
• Linear combinations of raw measurements to form longer wavelengths to aid the resolution of ambiguities for shorter wavelengths (e.g. the Three-Carrier Ambiguity Resolution – TCAR).
• Single/double differencing to remove common errors and mitigate correlated errors.
• External products to mitigate errors (e.g. satellite orbits and clocks provided by the International GNSS Service [IGS]).
• Other error corrections from dedicated networks (e.g. Un-calibrated Phase Delays – UPD).
• A combination of the above.
After pre-processing, the model used for GNSS positioning is:

y = Aa + Bb + e

where y is the observation vector, a and b are unknown parameter vectors, e is the noise vector, a ∈ Z^n is the vector of integer ambiguities, b ∈ R^p contains the other parameters to be solved (e.g. position), and A and B are the corresponding design matrices.
The estimation of integer values of a is not straightforward. The first step in the Integer Least Squares (ILS) method (Teunissen, 1993) is to use the traditional least squares approach to estimate a as real (float) values, â. Denoting the corresponding solution of b as b̂, the vector of observation residuals can be written as:

ê = y − Aâ − Bb̂
The Sum of Squared Errors (SSE) is given by:

SSE = ê^T G_y^(−1) ê

The matrix G_y is the cofactor matrix of the variance-covariance matrix Q_y.

If ê ∈ N(0, σ²), then

SSE/σ² ~ χ²(m − p, 0)

that is, SSE/σ² follows a central Chi-square distribution, where m is the number of observations and p is the number of unknown parameters.
The next step is to map the ambiguity from an n-dimensional real space to an n-dimensional integer space. There are various methods for the mapping, with the simplest being rounding to the nearest integer. When a number of carrier phase observations are involved, a float vector of ambiguities can be used as the initial vector. The corresponding integer vector can then be obtained by rounding each element of the float vector to its nearest integer. This method has the disadvantage that it does not take into account any of the correlation that may exist between the individual elements of the ambiguity vector. In this sense, this simple method can only be used safely if the model is improved by using additional information such as precise ionospheric corrections (Hernández-Pajares et al., 2010).
Another relatively easy way to obtain an integer ambiguity vector from the float ambiguity vector is to apply a sequential rounding scheme to the elements of the latter. This approach uses rounding to determine the integer ambiguity for the first element of the float vector, which typically has the smallest estimated error, and therefore, the best chance for the correct integer ambiguity. The remaining ambiguities are then sequentially rounded to the nearest integers after taking into account their correlations with the others that have been resolved. This is referred to as integer bootstrapping. The advantage of this method is its simplicity. However, its results depend on the parameterisation of the ambiguities. This implies that results will differ when applying this estimator to either original or decorrelated ambiguities.
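As an illustration, the sketch below implements sequential conditional rounding as described above. It is a minimal example rather than the formulation used in any particular software, and it assumes the float ambiguities have already been ordered with the most precise element first.

```python
import numpy as np

def bootstrap_fix(a_float, Q):
    """Integer bootstrapping: round each float ambiguity after conditioning it
    on the ambiguities that have already been fixed.

    a_float : (n,) float ambiguity vector, ordered with the most precise first
    Q       : (n, n) variance-covariance matrix of a_float
    """
    n = len(a_float)
    a_fix = np.zeros(n, dtype=int)
    for i in range(n):
        if i == 0:
            cond = a_float[0]
        else:
            # conditional mean of the i-th float ambiguity given the fixed ones
            cond = a_float[i] + Q[i, :i] @ np.linalg.solve(
                Q[:i, :i], a_fix[:i] - a_float[:i])
        a_fix[i] = int(np.rint(cond))
    return a_fix
```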
Another approach for estimating an integer ambiguity vector using the integer least squares estimator is by solving the following minimization problem (Teunissen, 1993):

ǎ = arg min_{a ∈ Z^n} (â − a)^T Q_â^(−1) (â − a)
This estimator is optimal in the sense that it gives the highest probability of finding the correct integer vector. The estimator can be interpreted as finding the integer vector that has the shortest distance to the float solution, measured in the metric of the variance-covariance matrix Q â. This method does not have the disadvantages of rounding and bootstrapping. However, it is more complex. Moreover, a solution using standard least-squares algorithms cannot be found because of the integer nature of the solution. A discrete search through the complete space of integers Z n may be required in order to obtain optimal results. The different elements of the float ambiguity vector may be highly correlated. Hence, for methods such as rounding and bootstrapping, the more correlated the ambiguities are, the more likely that these methods will yield non-optimal results.
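For illustration only, the sketch below performs the integer least-squares search by brute force over a small neighbourhood of the rounded float solution and returns the best and second best candidates with their quadratic residuals (the R_1 and R_2 used later for validation). A practical implementation would first decorrelate the ambiguities and use an efficient search such as LAMBDA; the component-wise search radius here is an assumption made to keep the example small.

```python
import itertools
import numpy as np

def ils_search(a_float, Q, radius=1):
    """Brute-force integer least-squares over integer vectors within `radius`
    (component-wise) of round(a_float).

    Returns (best, R1, second_best, R2) where R_i = d^T Q^{-1} d.
    """
    Qinv = np.linalg.inv(Q)
    centre = np.rint(a_float).astype(int)
    candidates = []
    for delta in itertools.product(range(-radius, radius + 1),
                                   repeat=len(a_float)):
        cand = centre + np.array(delta)
        d = a_float - cand
        candidates.append((float(d @ Qinv @ d), cand))
    candidates.sort(key=lambda t: t[0])
    (R1, a_best), (R2, a_second) = candidates[0], candidates[1]
    return a_best, R1, a_second, R2
```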
A brute force search is a time consuming procedure and may not be acceptable for some applications, including those where time is critical or where cost is a concern. A procedure to reduce the search space developed by Teunissen (1993), namely decorrelation of the ambiguities, is widely accepted by the GNSS community. Based on the least squares ambiguity estimation, a transformation of the ambiguities and corresponding variance-covariance matrix into an equivalent but less correlated set of ambiguities is carried out before estimating the ambiguities as integers. The so-called Z transformation is expressed as:

ẑ = Z^T â,  Q_ẑ = Z^T Q_â Z

Z is a matrix whose elements are all integers and whose inverse also has integer elements (i.e. |det Z| = 1). The Z transformation should provide maximum decorrelation of the ambiguities.
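The two-dimensional sketch below illustrates the idea of the Z transformation using repeated integer Gauss transformations. It is a toy example under simplifying assumptions (no reordering of the ambiguities, fixed iteration guard), not the full LAMBDA decorrelation.

```python
import numpy as np

def decorrelate_2d(Q):
    """Toy 2x2 decorrelation by alternating integer Gauss transformations.

    Returns an admissible transformation Z (integer entries, |det Z| = 1)
    and the less correlated covariance Z Q Z^T.
    """
    Q = np.array(Q, dtype=float)
    Z = np.eye(2, dtype=int)
    for _ in range(100):                          # iteration guard
        mu1 = int(np.rint(Q[0, 1] / Q[1, 1]))     # reduce a1 using a2
        mu2 = int(np.rint(Q[0, 1] / Q[0, 0]))     # reduce a2 using a1
        if mu1 == 0 and mu2 == 0:
            break                                 # off-diagonal already small
        if mu1 != 0:
            T = np.array([[1, -mu1], [0, 1]])     # z1 = a1 - mu1 * a2
        else:
            T = np.array([[1, 0], [-mu2, 1]])     # z2 = a2 - mu2 * a1
        Q = T @ Q @ T.T
        Z = T @ Z
    return Z, Q

# usage: z_float = Z @ a_float; after fixing z, recover a with inv(Z) @ z_fixed
```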
The more recent Integer Aperture (IA) estimation method defines the size and shape of the aperture pull-in regions (Teunissen, 2003, 2005). In classical hypothesis testing theory, the size of an acceptance region is determined by the choice of the testing parameters: the false alarm rate and missed detection probability (or detection power). However, in the case of integer ambiguity resolution, the selection of these parameters is not obvious. It is especially important that the probability of incorrect ambiguity fixing is small. Therefore, the concept of IA estimation with a fixed fail rate has been introduced. This means that the size of the aperture space is determined by the condition that the fail rate is either equal to or lower than a fixed value. At the same time, the shape of the aperture pull-in regions should preferably be chosen such that the success rate is still as high as possible.
After the process of optimisation employing the techniques above, the estimate of a as integer values is denoted ǎ and the corresponding solution of b is denoted b̌. The vector of observation residuals can then be written as ê_i = y − Aǎ_i − Bb̌_i, where i is the order of a candidate ambiguity vector. The number of candidates depends on the search space. However, only the best and second best sets of ambiguities are considered for validation. Therefore, 0 < i ⩽ 2. This is explained further in the next section.
The Sum of Squared Errors, SSE_i, can then be expressed as:

SSE_i = ê_i^T G_y^(−1) ê_i

If ê_i ∈ N(0, σ²), then SSE_i/σ² follows a Chi-square distribution.
After the determination of the ambiguity candidates, the ambiguity residuals are determined as:

ε_i = â − ǎ_i

The SSE of the ambiguity residuals, R_i, is expressed as:

R_i = (â − ǎ_i)^T Q_â^(−1) (â − ǎ_i)

and

R_i/σ² ~ χ²(n, δ_i)

where δ_i is the non-centrality parameter of the Chi-square distribution.
2.2. Validation of the Ambiguities
Each integer estimate of the ambiguities contains a degree of uncertainty, for both integer rounding (e.g. TCAR) and least squares methods (e.g. LAMBDA). Therefore, the resolved ambiguities must be validated before they can be used for high accuracy positioning. In general, validation is based on the formation of a test statistic and the determination (or assumption) of its distribution. This distribution should in theory enable the threshold or confidence level to be computed for comparison with the test statistic, in order to determine if the integer ambiguity is acceptable. There are a number of tests based on the SSE of either the observation or ambiguity residuals (the latter being a subset of the former).
2.2.1. Validation Based on Observation Residuals
The ratio test (also referred to as the F-ratio test) constructed from the observation (post-fit) residuals of the second best and best candidates is

SSE_2/SSE_1 ⩾ k

where k is the threshold. The best and second best candidates can also be tested separately as (Chen, 1997):
where χ²_γ(m − p, 0) is the critical value corresponding to the central Chi-square distribution with a level of significance γ (probability in the right-hand tail); χ²_β(m − p, δ) is the critical value corresponding to the non-central Chi-square distribution with a probability β and non-centrality parameter δ.
The difficulty in using this test is the choice of the values of γ and β.
As an alternative, the W-ratio test was proposed by Wang et al. (2000):
where d is the difference between the quadratic forms of the second best and best ambiguity candidates, and var(d) = δ²Q_d.

Q_d is the variance co-factor of d and δ² is the so-called variance factor. However, this approach makes the incorrect assumption that the use of the a posteriori variance co-factor translates into a W-ratio with a Student's t distribution (Verhagen, 2004).
2.2.2. Validation Based on Ambiguity Residuals
Similar to the F-ratio test, Euler and Schaffrin (1991) proposed a test given by:

R_2/R_1 ⩾ k

where k is the threshold. Notably, a number of constant values of k have been used without a credible theoretical or practical justification. For example, the values of 1·5 and 3 have been used by Wei and Schwarz (1995) and Leick (2003) respectively.
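For reference, the conventional test discussed above amounts to the simple check below, with the threshold k fixed a priori (the default value of 3 is only an example of the constants quoted above, not a recommendation).

```python
def conventional_ratio_test(R1, R2, k=3.0):
    """Fixed-threshold ratio test on the ambiguity residual quadratic forms:
    accept the best candidate set when R2/R1 meets or exceeds k."""
    return (R2 / R1) >= k
```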
Another approach is based on the difference of the quadratic forms R_1 and R_2 (Tiberius and De Jonge, 1995). Firstly, a test on the best candidate (based on R_1) is carried out. When this is passed, the next step is to perform the difference test:

R_2 − R_1 ⩾ k

The use of tests based on either the ratio or the difference involving the best and second best ambiguity vectors enables all cases to be covered, as they represent the smallest values (i.e. deviations). However, the difficulty in using this test lies in the choice of the critical value k, which is determined empirically.
A class of Integer Aperture (IA) estimation and validation methods was proposed by Teunissen (2003, 2005). It is very important to note that IA estimation with a fixed fail rate is a method that involves both integer estimation and validation, and allows for an exact and overall probabilistic evaluation of the solution. With the traditional approaches (e.g. the conventional ratio test applied with a fixed critical value or threshold), an overall probabilistic evaluation of the solution is not possible.
If the float ambiguity lies in one of the acceptance regions, the corresponding resolved integer solution is accepted. The size of the regions depends on the choice of the threshold value μ (i.e. the larger it is, the larger the acceptance region becomes). In the limiting case, μ=1, the acceptance regions are equal to the integer least-squares pull-in regions, and hence, the integer solution is always accepted. In the other limiting case, μ=0, the integer solution is always rejected.
IA validation is based on a fixed success/fail rate. The method uses the reciprocal of expression (19) as the test statistic: its value μ_0 is derived from the measurements at the current epoch, and Monte Carlo simulation is used to generate float ambiguities and the corresponding reciprocals of expression (19), μ_i (Teunissen and Verhagen, 2004). A statistic for the success/fail rate can then be calculated using μ_0 and the μ_i together with other parameters. In order to obtain a good approximation, a Monte Carlo simulation with a large number of samples is required (typically >100,000). This is a major limitation of the method for real time applications.
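The sketch below illustrates why the fixed fail-rate approach is computationally demanding: every simulated float ambiguity vector has to be fixed (here with the brute-force ils_search sketch from Section 2.1) before the aperture parameter can be chosen. The sample size, the grid of candidate aperture values and the use of the all-zero vector as the true ambiguity are assumptions made for the illustration only.

```python
import numpy as np

def ia_aperture_by_simulation(Q, fail_rate=0.001, n_samples=100_000, seed=0):
    """Monte Carlo selection of the aperture parameter mu for a fixed fail rate.

    Float ambiguities are simulated around the true integer vector (taken as
    zero), fixed by integer least squares, and the statistic mu_i = R1/R2 is
    recorded for every wrong fix.  The largest candidate aperture value whose
    simulated failure rate (wrong fix AND accepted) stays at or below
    `fail_rate` is returned.
    """
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    L = np.linalg.cholesky(Q)
    mu_wrong = []
    for _ in range(n_samples):
        a_float = L @ rng.standard_normal(n)          # true ambiguities are zero
        a_best, R1, _, R2 = ils_search(a_float, Q)    # sketch from Section 2.1
        if np.any(a_best != 0):
            mu_wrong.append(R1 / R2)                  # statistic of a wrong fix
    mu_wrong = np.array(mu_wrong)
    candidates = np.linspace(0.0, 1.0, 101)
    ok = [mu for mu in candidates
          if np.sum(mu_wrong <= mu) / n_samples <= fail_rate]
    return max(ok)
```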
From the preceding sections, it is clear that ambiguity validation is an open problem. This is because none of the available integer validation test statistics is based on sound theoretical principles. Furthermore, there is no single test that can be used in all situations (Verhagen, 2004). A common problem with the observation and ambiguity residual based tests above is the choice of the threshold: either an empirically determined value is used, or an assumption is made on the distribution. Although IA estimator based validation is a promising approach, it has some weaknesses in practice.
3. NEW APPROACH TO CARRIER PHASE BASED INTEGRITY MONITORING
In order to detect failures early, the approach to carrier phase based integrity monitoring proposed in this paper involves two steps executed at the ambiguity resolution and positioning stages (the latter comprising the detection function in the measurement domain and the use of the protection levels in the position domain). This section addresses the details of the integrity monitoring at the ambiguity resolution stage. The second stage is based on the Carrier phase RAIM (CRAIM) developed for conventional RTK (Feng et al., 2009), which is easily transferable to other positioning concepts including PPP. This is elaborated in the next section.
The ambiguity validation process can be considered as a failure detection process in the ambiguity domain. Four stages are normally involved in the process: the construction of test statistics, description of the distribution of the test statistics, determination of threshold, and determination of integrity flag. The test statistics formulated are functions of a number of factors including geometry, observations, observation residual errors after pre-processing, and noise. The threshold should reflect the correctness/confidence level (or success rate) of the test in order that decisions are made based on it.
As discussed in the preceding section, ratio based testing is commonly used for ambiguity validation. The test statistic usually takes the form of either expression (19) or its reciprocal, compared against a fixed threshold. Furthermore, the fixed success/fail rate based validation approach employed in the IA method has limitations for real-time applications. Therefore, the remaining major issue is the determination of the threshold or confidence level, which should be based on an accurate description of the distribution of the test statistic rather than on the fixed thresholds used currently.
The approach in this paper uses the same test statistic as the conventional ratio test (expression (19)), but derives a distribution for the test statistic (discussed below) and uses it within a numerical algorithm to compute the confidence level.
In order to determine the confidence level of the ratio test, the distribution of (R_2/R_1) is required. Thus expression (19) can be rewritten as:

R_2/R_1 = ((â − ǎ_2)^T Q_â^(−1) (â − ǎ_2)) / ((â − ǎ_1)^T Q_â^(−1) (â − ǎ_1))

As can be seen from expression (14), both the numerator and denominator follow a non-central χ² distribution. Therefore, if R_1 and R_2 are independent (assumed in this paper) then (R_2/R_1) has a doubly non-central F-distribution (Bulgren, 1971).
Therefore, the confidence level p_c of (R_2/R_1) can be derived from:

where n is the number of ambiguities; δ_1 and δ_2 can be determined using the traditional failure detection scheme with a probability of false alert (PFA) and a probability of missed detection (PMD).
In expression (14), there is one PFA and two probabilities of missed detection (PMD1 and PMD2) for the best and second best SSE, R_i (i = 1, 2). The detection threshold is first set from the PFA using the central Chi-square distribution; δ_1 and δ_2 are then the non-centrality parameters for which the corresponding missed detection probabilities are satisfied.
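Since a closed-form solution for δ_1 and δ_2 is not available (as noted below), one possible numerical route consistent with the traditional detection scheme is sketched here: the threshold is fixed from the PFA on the central Chi-square distribution, and each δ_i is then found by root-finding so that the non-central distribution leaves the required missed detection probability below that threshold. The paper's exact expression is not reproduced above, so this particular pairing of PFA, PMD and δ is an assumption of the sketch.

```python
from scipy.optimize import brentq
from scipy.stats import chi2, ncx2

def delta_from_pfa_pmd(dof, pfa, pmd, delta_max=1e4):
    """Solve for the Chi-square non-centrality delta such that
    P(chi2(dof, delta) <= T) = pmd, where T satisfies P(chi2(dof, 0) > T) = pfa."""
    T = chi2.ppf(1.0 - pfa, dof)                 # detection threshold from the PFA
    f = lambda d: ncx2.cdf(T, dof, d) - pmd      # decreasing function of d
    return brentq(f, 1e-9, delta_max)

# illustrative use: delta_1 = delta_from_pfa_pmd(n, 1e-3, 1e-3)
```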
The analytical solution for δ_1 and δ_2 is not straightforward, requiring the application of numerical methods using series representations. The same approach is used to determine the confidence level in expression (23). The CDF of the general expression (23) can be re-written as (Bulgren, 1971)

P(F ⩽ x) = Σ_{j=0}^{∞} Σ_{k=0}^{∞} P_j(δ_2/2) · P_k(δ_1/2) · I_u(n_2/2 + j, n_1/2 + k)

where
• P_j(δ_2/2) = e^(−δ_2/2)(δ_2/2)^j/j! and P_k(δ_1/2) = e^(−δ_1/2)(δ_1/2)^k/k! are Poisson probabilities, and Γ( ) is the Gamma function;
• I_u(a, b) is the CDF of the Beta distribution with u = n_2x/(n_2x + n_1) and x ⩾ 0, and B(a, b) is the Beta function with a > 0 and b > 0.
For a feasible implementation of the algorithm, the two infinite series are truncated once higher-order terms no longer contribute at the specified accuracy.
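A minimal numerical sketch of such a truncated series is given below, using the Poisson-weighted incomplete Beta representation. The fixed truncation length and the mapping of δ_2 to the numerator and δ_1 to the denominator follow the convention stated above and are assumptions to be checked against the user's own derivation; in the present application both degrees of freedom equal n, so the ordering of n_1 and n_2 does not affect the result.

```python
from scipy.special import betainc
from scipy.stats import poisson

def dnc_f_cdf(x, n1, n2, delta1, delta2, terms=60):
    """Truncated series for the CDF of a doubly non-central F variable
    (numerator dof n2 with non-centrality delta2, denominator dof n1 with
    non-centrality delta1), evaluated at x >= 0."""
    u = n2 * x / (n2 * x + n1)
    cdf = 0.0
    for j in range(terms):
        pj = poisson.pmf(j, delta2 / 2.0)            # numerator Poisson weight
        for k in range(terms):
            pk = poisson.pmf(k, delta1 / 2.0)        # denominator Poisson weight
            cdf += pj * pk * betainc(n2 / 2.0 + j, n1 / 2.0 + k, u)
    return cdf

# one possible use, following the description of expression (23):
# p_c = dnc_f_cdf(R2 / R1, n, n, delta1, delta2)
```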
Following the execution of LAMBDA type algorithms, the method here takes the outputs of carrier phase ambiguity resolution (R_1, R_2 and n) together with the pre-defined PFA and PMDs, and generates, in an online computation, the confidence levels for the candidate ambiguities at each epoch. Given a threshold, the confidence levels can be used to accept or reject the best set of candidate ambiguities. Some users may be more concerned with the confidence level satisfying the requirements than with the confidence level itself. In this case, in order to reduce the processing resources consumed by online computation, it is quite possible to carry out an offline calculation of the threshold of (R_2/R_1) that reflects the required confidence. Therefore, a look-up table can be calculated offline covering all cases of interest, including a range of confidence levels and degrees of freedom. This is especially useful for embedded systems where computational power is low.
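To build such a look-up table, the CDF sketch above can be inverted numerically, for example by bracketing and root-finding as below; the bracket limits and the grid of confidence levels and degrees of freedom are assumptions chosen only to illustrate the idea.

```python
from scipy.optimize import brentq

def ratio_threshold(confidence, n, delta1, delta2):
    """Offline look-up table entry: the (R2/R1) value at which the confidence
    level computed with dnc_f_cdf reaches the required value."""
    g = lambda x: dnc_f_cdf(x, n, n, delta1, delta2) - confidence
    return brentq(g, 1e-6, 1e3)                      # assumed search bracket

# example table (delta1, delta2 obtained as sketched earlier):
# table = {(c, n): ratio_threshold(c, n, delta1, delta2)
#          for c in (0.95, 0.99, 0.999) for n in range(4, 13)}
```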
4. APPLICATION OF NEW APPROACH TO PPP
Crucial to PPP is the requirement to deal with all the error sources including those that are either eliminated or reduced by differenced observables in conventional RTK. Therefore, the PPP user algorithm is designed, in this paper, to include specific error models including Earth tides and wind up. The algorithm is based on sequential least squares that exploits the current and previous data. Figure 1 shows a high level architecture of the PPP user algorithms that employs the two-step integrity monitoring process (ambiguity resolution and positioning stages) highlighted in yellow and green respectively. The first stage employs the new validation algorithms presented in the previous section. The second stage is elaborated in this section as it applies to PPP.
In filtering based carrier phase positioning, the ambiguity vector is treated as an unknown in the state equation. If the ambiguity vector is resolved correctly, the integer ambiguity vector then replaces the float values in the state. The ambiguity resolution and validation can be carried out separately with input from the filter. Therefore, the integrity monitoring method proposed and presented in the previous section is executed at this stage.
For the integrity monitoring at the positioning stage, the conventional RTK based method (CRAIM) can be migrated to PPP. The information needed includes items that can be extracted from the PPP Kalman filter, data given by product providers, and values agreed by users covering the requirements and the other residual errors. The outputs from the integrity monitoring algorithm are the protection levels and the integrity status.
The information that can be extracted from the PPP Kalman filter comprises (Feng et al., 2009):
• The innovation vector r
• The weight matrix W
• The design matrix H
• The measurement noise matrix R
• The gain K
• The covariance matrix P
The information given by product providers comprises clock and orbit corrections for each satellite with the associated residual uncertainty σ_i,SIS, and corrections for the ionospheric error with the associated residual uncertainty σ_i,IONO. If the residual uncertainty information is not available, a value agreed by users which overbounds these residual errors can be used. In addition, σ_i,User is the standard deviation of the residual errors for each measurement after applying appropriate models, and includes residual multipath and receiver noise. The values of σ_i,User which overbound these residuals have to be agreed by users. Furthermore, the values of the parameters of the Required Navigation Performance (RNP) have to be provided by users, especially the integrity and continuity parameters used in this part.
In PPP (Figure 1), the observables used include the ionosphere-free pseudorange (PC), the ionosphere-free carrier phase (LC), the Melbourne-Wübbena combination (LW-Pn), and the geometry-free linear combination (LI). These observables reduce the impact of bias type errors on PPP performance, although the noise levels are amplified. The standard deviation of each observable is derived from error propagation theory (σ_obs = f(σ_obs,User, σ_SIS, σ_obs,IONO)).
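As an illustration of this error propagation, the sketch below combines the amplified user-range noise of the ionosphere-free combination with the signal-in-space and residual ionospheric uncertainties in quadrature. Treating the components as independent is an assumption, and the GPS L1/L2 frequencies are used for the combination coefficients.

```python
import math

F1, F2 = 1575.42e6, 1227.60e6              # GPS L1 and L2 frequencies (Hz)

def sigma_iono_free(sigma_l1, sigma_l2):
    """Noise of the ionosphere-free combination of two measurements."""
    a1 = F1**2 / (F1**2 - F2**2)           # ~ 2.546
    a2 = F2**2 / (F1**2 - F2**2)           # ~ 1.546
    return math.hypot(a1 * sigma_l1, a2 * sigma_l2)

def sigma_obs(sigma_user_l1, sigma_user_l2, sigma_sis, sigma_iono=0.0):
    """sigma_obs = f(sigma_user, sigma_SIS, sigma_IONO), assuming independent
    error components combined in quadrature."""
    return math.sqrt(sigma_iono_free(sigma_user_l1, sigma_user_l2) ** 2
                     + sigma_sis ** 2 + sigma_iono ** 2)
```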
The methods for determining the test statistics, thresholds and protection levels are similar to those used in CRAIM (Feng et al., 2009). The construction of the test statistics, consisting of one full set and a number of subsets, is based on the parameters extracted from the PPP Kalman filter. The test statistics derived from the innovations of a Kalman filter follow a normalized Chi distribution with degrees of freedom equal to the number of measurements used; whether the underlying distribution is central or non-central depends on the absence or presence of a range bias error (Diesel and Luu, 1995; Lee and Laughlin, 1999; Feng et al., 2006). Therefore, the corresponding thresholds can be determined from this distribution for a given probability of false alert. To detect the presence of a failure, the test statistics are compared with the corresponding thresholds. If there is a failure, either an integrity flag is raised or a failure exclusion process is invoked. It should be noted that the innovation sequence contains information obtained from the previous states. Therefore, the response to a failure will be delayed by a few epochs, and the same applies to the positioning accuracy.
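A generic sketch of such an innovation-based test for a single epoch is shown below. It uses the standard innovation covariance HPH^T + R and a central Chi-square threshold set from the probability of false alert; it is not necessarily identical to the CRAIM statistics of Feng et al. (2009).

```python
import numpy as np
from scipy.stats import chi2

def innovation_test(r, H, P, R, pfa=1e-3):
    """Compare the normalised innovation squared against a Chi-square threshold.

    r : innovation vector, H : design matrix, P : predicted state covariance,
    R : measurement noise covariance; degrees of freedom = number of measurements.
    Returns (test statistic, threshold, failure detected).
    """
    S = H @ P @ H.T + R                       # innovation covariance
    t = float(r @ np.linalg.solve(S, r))      # normalised innovation squared
    threshold = chi2.ppf(1.0 - pfa, df=len(r))
    return t, threshold, t > threshold
```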
There are two sets of protection levels determined in the PPP integrity monitoring algorithm. One set is determined using the horizontal and vertical uncertainty in the positioning solution, which can be extracted from the covariance matrix of the PPP Kalman filter. However, the horizontal and vertical uncertainties do not immediately reflect the sudden changes in positioning accuracy that occur when swapping between float and fixed ambiguities, and a few epochs may be required for the covariance matrix to adjust. This needs to be considered in the overall protection level determination. The other set of protection levels is based on the slope concept. Similar to conventional RAIM, this second set of protection levels has little relation to the residuals. Where both sets of protection levels are available, the larger values are adopted. Both the horizontal and vertical protection levels are compared against alert limits specified by the user. In safety critical applications, if the protection level is larger than the alert limit, a warning (alert) should be issued to inform the user of the integrity status. However, in availability critical applications, the alert limit may be relaxed so that the user can continue to use the system without being interrupted by an integrity alert.
5. RESULTS
5.1. Data Simulation
In order to test this approach, simulated GNSS data gathered in the context of the European Space Agency (ESA) funded project on Enhanced PPP (E-PPP), were used. The objective of this project was to assess new approaches to PPP (undifferenced ambiguity fixing, use of precise ionospheric delay corrections, GNSS three frequency measurements, etc.) by analysing such simulated datasets.
The data simulation was carried out at ESA's European Space Research and Technology Centre (ESTEC) European Navigation Laboratory (ENL) in Noordwijk in the Netherlands. The Spirent GNSS signal simulator was used to generate GPS data to quantify the performance of the proposed integrity monitoring algorithms. The data were simulated for seven stations (Figure 2). The data from the NPLD station in Teddington, UK, were used for positioning, and the rest were used to generate the products for PPP.
5.2. Sample Results
Based on the hardware setup, a number of scenarios were simulated. The following abbreviations are used to represent various scenarios:
• G: GPS data used.
• ni: Iono corrections not considered.
• I: Iono corrections used.
• f1: Constrained ambiguities.
• fw: Wide lane ambiguities constrained and L1 ambiguities fixed with LAMBDA.
The combination of the above results in a large number of scenarios. For example, G_I_fw represents a scenario in which ionospheric corrections are applied to GPS data, wide lane ambiguities are constrained and L1 ambiguities are fixed with LAMBDA. The results of the following scenarios are selected to demonstrate the proposed method.
• Comparison, in terms of the correctness of the protection levels (Horizontal Protection Level [HPL] in this case), of the scenario where ionospheric corrections from the CPF are applied with the scenario where ionospheric corrections are not applied.
• Determination of the correctness of the ambiguities resolved for the cases with and without ionospheric correction from CPF.
• Sensitivity of the algorithm to a one cycle bias applied to L1 measurements of one GPS satellite.
The correctness test used to compare the resolved ambiguities with the corresponding true values is based on the difference of their residuals. This is expressed as:

where ǎ is the vector of resolved ambiguities and a_true is the vector of true ambiguities. If the resolved ambiguities are correct, then T_Ambiguity = 0; if any resolved ambiguity is wrong, then T_Ambiguity > 0. The test is used to demonstrate the validity of the proposed algorithm.
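A simple way to implement such a check against the simulated truth is sketched below; the exact form of T_Ambiguity is not reproduced above, so the sum of absolute differences is used here as one plausible realisation with the same zero/non-zero behaviour.

```python
import numpy as np

def ambiguity_correctness(a_fixed, a_true):
    """Zero if and only if every resolved ambiguity equals its true integer value."""
    return float(np.sum(np.abs(np.asarray(a_fixed) - np.asarray(a_true))))
```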
Figure 3 shows the sensitivity of the HPL to ionospheric error corrections from the CPF. The error of the ionospheric corrections ranges from 0·01 to 6·7 cm in absolute value with a mean of 1·4 cm. It can be seen from Figure 3 that the HPLs for the two cases (with and without ionospheric error corrections) are always larger than the corresponding Horizontal Position Errors (HPEs) during the positioning period. This demonstrates that the HPLs overbound the HPEs. Furthermore, as expected, the HPL for the case without ionospheric error corrections is consistently higher than in the case with ionospheric corrections. The sudden reduction of both HPE and HPL for G_I_fw at the 300th epoch was due to the ambiguities being constrained. These findings provide a level of confidence in the algorithm for the computation of the HPL.
Figure 4 and Figure 5 each show the ratio of residuals from the best and second best sets of ambiguities, the number of ambiguities and the confidence level of the best set of ambiguities for the scenarios with and without ionosphere corrections. The key parameter for ambiguity validation, the confidence level, is calculated by using doubly non-central F distributions. The wide lane ambiguities are constrained while the L1 ambiguities are fixed with LAMBDA.
From the results in Figures 4 and 5 above, it can be seen that the impact of the ionospheric error corrections is a higher confidence level in the ambiguities, compared to the case without ionospheric error corrections. It is also noteworthy that the confidence level does not change with the number of ambiguities in a deterministic way. The confidence level decreases in Figure 4, but increases in Figure 5 at about epoch 1200 in response to a change in the number of ambiguities. This is expected since the confidence level is a function of a number of factors.
Figure 6 shows the test (T_Ambiguity) for the comparison of resolved ambiguities and the true values for the G_I_fw scenario. Figure 7 shows the test (T_Ambiguity) for the comparison of resolved ambiguities and the true values for the G_ni_fw scenario.
From the results in Figure 6, it can be seen that the ambiguities resolved are correct for the G_I_fw scenario and the confidence level is quite high. From the results in Figure 7, it can be seen that only a few ambiguities (at around the 1200th epoch) are resolved correctly for the G_ni_fw scenario, albeit with a relatively low confidence level. It is interesting to note from Figures 5 and 7 that, at around the 700th epoch, the ratio threshold of 1·5 (used by Wei and Schwarz, 1995) results in incorrect ambiguities. This corroborates the earlier observation that it is unwise to adopt a fixed threshold.
Figure 8 shows the sensitivity results following the injection of a step type failure, a bias of one wavelength applied to the L1 measurements from PRN3 from the 1st to the 900th epoch of a 3-hour long dataset sampled at 1 Hz. Because the bias exists from the first epoch, the integrity algorithm does not detect it during the first 900 epochs as it is absorbed into the ambiguity. This is expected and confirms the performance of the ambiguity resolution function. However, from the 901st epoch onwards (i.e. in the absence of the bias), there is a change in the ambiguity for PRN3 of one cycle, resulting in a sudden jump in both the full set and subset test statistics (Figure 8). Both are well above their thresholds and trigger the generation of an integrity flag. The integrity algorithm exhibits remarkable sensitivity to this bias through prompt and reliable detection.
Figure 9 captures the sensitivity of the ambiguity validation to a step type error. The ratio (second best divided by the best), the number of satellites used and the confidence level of the ambiguities are given in the figure. It can be seen that the confidence levels of the ambiguities are very high from the start. However, there is a lag in the response when one ambiguity becomes wrong from the 901st epoch.
The results shown in Figures 8 and 9 suggest that the ambiguities need to be re-resolved either at each epoch or immediately when an integrity flag is raised in order to minimise the impact of bias type failure. This was envisaged in the interfacing of the positioning and integrity functions of the PPP software architecture.
6. CONCLUSIONS
This paper has derived the doubly non-central F distribution for the ratio test. This has enabled the calculation of the confidence level for the best set of ambiguity candidates to provide a level of integrity monitoring for ambiguities. The results for PPP with simulated data demonstrate both the power and efficiency of the proposed method in monitoring both the integrity of the ambiguity computation and position solution processes. The technique has the important benefit of facilitating early detection of any potential threat to the position solution, originating in the ambiguity space, while at the same time giving overall protection in the position domain based on the required navigation performance.
Furthermore, due to the fact that the method only requires information from least squares based ambiguity resolution algorithms, it is easily applied to conventional RTK positioning. Future work will consider extensive tests with real data and the quantification of the integrity risk achievable in real environments.
ACKNOWLEDGEMENTS
The authors would like to thank Simon Johns at the European Navigation Laboratory (ENL), ESTEC, ESA for helping to carry out the simulation test during the last week of August 2009. This work was partially funded by ESA in the context of the E-PPP project.