1. INTRODUCTION
Dedicated Short Range Communication (DSRC) is a 75 MHz communication medium between 5·85 GHz and 5·925 GHz. DSRC standards and technology form a platform for vehicle-to-vehicle and vehicle-to-infrastructure communications. This communication platform will enable Vehicular Adhoc NETworks (VANET) through which many Intelligent Transport Systems (ITS) applications can become realities, including safety-related applications, driver assistance, and traffic assignment (traffic control) systems. Safety applications transmit information about unforeseeable (by the driver) and potentially dangerous collisions or events (Boukerche et al., Reference Boukerche, Oliveira, Nakamura and Loureiro2008). Driver assistance systems include speed management for avoiding traffic jams, automated highway entering or coordination of the arrival times at an intersection (Boukerche et al., Reference Boukerche, Oliveira, Nakamura and Loureiro2008). All of the DSRC-based safety applications require continual position estimation, and for the most important applications, such as collision avoidance, the position estimation frequency must be 10 Hz with an accuracy of 50 cm or better (Vander et al., Reference Vander, Shladover and Miller2004). Yet an issue that is often neglected is that the single frequency Global Positioning System (GPS) receivers used for vehicular navigation are position estimation engines with 10 m accuracy at a 1 Hz rate. This means that the positioning requirements of safety applications are not met in urban areas by conventional Global Navigation Satellite Systems (GNSS). In addition to this accuracy issue, due to satellite blockages, GPS and other GNSS systems are, in general, not reliable in the deep urban canyons of cities with high-rise buildings. The likelihood of a position solution outage is high near these buildings, especially in city centres. Besides the satellite visibility problem, the multipath effect (reflection of satellite signals by obstacles such as buildings) can deteriorate position accuracy. In Efatmaneshnik et al. (Reference Efatmaneshnik, Kealy, Lim and Dempster2010a) we showed via Cramer Rao Bound analysis that Cooperative Positioning (CP) is an effective systemic solution to this issue.
Cooperative Network Positioning, or simply CP, is a cost-effective solution to GNSS inadequacy for DSRC applications. It requires no major additional equipment if DSRC and GPS are already installed. CP utilizes range measurements between nodes with unknown positions and/or nodes that know their position with some uncertainty (e.g. from GPS, see Figure 1). CP is an information fusion approach to the collective network localization problem that uses the DSRC channel both as a localization medium (i.e. for ranging between the nodes) and as an exchange medium for the nodes' kinematic information. CP techniques for VANET can provide position information for vehicles not equipped with GPS (Parker and Valaee, Reference Parker and Valaee2006), or vehicles in areas where GPS satellites are blocked, such as the deep urban canyons of city centres (Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Lim and Dempster2010a).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081309-57251-mediumThumb-S0373463311000610_fig1g.jpg?pub-status=live)
Figure 1. CP concept. Arrows indicate measured ranges.
This paper presents three novel improvements to an already established Multi-Dimensional Scaling (MDS) algorithm to make it suitable for vehicular networks:
• First, a novel covariance estimation algorithm for non-classic MDS is proposed.
• Secondly, based on this covariance estimation, a Maximum Likelihood (ML) filter is devised for mobile networks by means of the MDS algorithm.
• Thirdly, a specific fusion of a Map-Matching (MM) with MDS algorithm is presented that significantly lowers the number of iterations compared with the standard non-classic MDS.
Note that, because of the rapidly changing topology of vehicular environments, any localization algorithm for VANET must be computationally affordable.
In Section 1.1, ranging techniques for VANET are introduced, followed by a review of various network localization approaches and a formal description of the network localization problem. In Section 2.1 the non-classic MDS algorithm is formulated. Section 2.2 presents our novel fast MDS covariance estimation. In Section 2.3 the MDSF algorithm is introduced. In Section 3, some benchmarks for the Multi-Dimensional Scaling Filter (MDSF) algorithm are introduced, including an Extended Kalman Filter (EKF) and the Cramer Rao Lower Bound (CRLB). In Section 4 a simulation scenario and comparison results with the EKF are presented. The robustness of both algorithms to node failures resulting from GPS unavailability is discussed in Section 4.3. Section 5 introduces the blend of the MM technique with MDS.
1.1. Ranging For VANET
There are several different methods of radio ranging including Received Signal Strength (RSS), Time of Arrival (TOA), Time Difference of Arrival (TDOA), and Round Trip Time (RTT). Angle of Arrival (AOA) is a technique for determining the bearing of an incoming signal. It is not considered for CP because of the high error levels caused by multipath channels in urban areas, and the extra antenna size and complexity required (Sakagami et al., Reference Sakagami, Aoyama, Kuboi, Shirota and Akeyama1992). For DSRC-based vehicular communication, three main challenges exist: synchronization, limited bandwidth, and a noisy environment. An analysis of ranging and range-rating techniques shows that there are limited feasible choices for radio ranging in a vehicular environment with a suitable accuracy for CP purposes (Alam et al., Reference Alam, Balaei and Dempster2010).
Ranging with RSS is very popular due to its simplicity and lower deployment costs compared to other methods, but it is very inaccurate. RSS accuracy for rough street conditions can be as poor as 100–500 m of error (Alam et al., Reference Alam, Balaei and Dempster2010). In a vehicular network, providing synchronization between nodes may be technically very difficult and expensive (Alam et al., Reference Alam, Balaei and Dempster2010). Avoiding the necessity of synchronization, RTT is a solution for ranging between transmitter and receiver based on round trip signal propagation time. In this method, one node (say node 1) transmits a packet at time t1 to another (say node 2) and waits for the acknowledgement, which is received at time t2; the range is then obtained from the round trip time t2 − t1 (less the known reply delay) multiplied by half the speed of light. A drawback of RTT-based ranging is the need for numerous measurements to achieve better ranging resolution through averaging of the estimates. This results in latency of the distance estimation and more bandwidth occupation. The former leads to ranging error for mobile nodes and the latter decreases the number of nodes which can use a common channel.
Time-based techniques measure signal propagation time between transmitter and receiver. For TOA, synchronization between the transmitter and receiver is necessary. In TDOA, the difference between the arrival times at a receiver of signals from two synchronized transmitters is considered for distance estimation. Also, the difference between the arrival times of a signal at different synchronized receivers from a single source can be used for TDOA. Time-based methods are more complex and expensive than RSS because of the synchronization requirement. Regarding the proposed formulation for the variance of TOA-based ranging in Patwari et al. (Reference Patwari, Ash, Kyperountas, Hero, Moses and Correal2005), Figure 2 shows the best achievable performance for TOA-based ranging in a multipath-free channel in terms of bandwidth and Signal to Noise Ratio (SNR) based on simulation. As can be seen, higher SNR and wider bandwidth result in more accurate ranging. Also, sub-metre ranging accuracy is achievable, provided the nodes are synchronized and there is no multipath. Note that for TDOA, assuming independent estimates of the arrival time of the received signals, the ranging variance will be twice that of the TOA-based method. Thus TOA has a performance advantage over TDOA, but it needs synchronized nodes. The accuracy achieved with TOA is promising. The effect of ranging error on the performance of a CP algorithm is explained further in the next section.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081311-07418-mediumThumb-S0373463311000610_fig2g.jpg?pub-status=live)
Figure 2. Effect of bandwidth and SNR on TOA ranging error based on simulation.
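To make the relationship in Figure 2 concrete, the sketch below evaluates a commonly quoted Cramer-Rao style bound on TOA ranging error for an additive white Gaussian noise, multipath-free channel (in the spirit of Patwari et al., Reference Patwari, Ash, Kyperountas, Hero, Moses and Correal2005). The assumption that the effective (RMS) bandwidth equals the nominal signal bandwidth, and all function names, are illustrative rather than the exact model behind Figure 2.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def toa_range_std(snr_db, bandwidth_hz):
    """Cramer-Rao style lower bound on the STD of a TOA range estimate
    for an AWGN, multipath-free channel (cf. Patwari et al., 2005).

    Assumes the effective (RMS) signal bandwidth equals `bandwidth_hz`;
    real DSRC waveforms will differ, so treat the numbers as indicative.
    """
    snr = 10.0 ** (snr_db / 10.0)
    sigma_tau = 1.0 / (2.0 * np.sqrt(2.0) * np.pi * bandwidth_hz * np.sqrt(snr))
    return C * sigma_tau

def tdoa_range_std(snr_db, bandwidth_hz):
    # With independent arrival-time estimates, TDOA variance is twice TOA's.
    return np.sqrt(2.0) * toa_range_std(snr_db, bandwidth_hz)

if __name__ == "__main__":
    for bw in (1e6, 5e6, 10e6):           # Hz
        for snr_db in (0.0, 10.0, 20.0):  # dB
            print(f"BW={bw/1e6:.0f} MHz, SNR={snr_db:.0f} dB -> "
                  f"sigma_R={toa_range_std(snr_db, bw):.2f} m")
```

Under these assumptions, a 10 MHz bandwidth at 20 dB SNR gives a bound of roughly 0·3 m, consistent with the sub-metre claim above.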
1.2. VANET Positioning
VANET is a form of Mobile Adhoc NETwork (MANET) and thus any localization scheme devised for the latter can be applied to the former. In general the network localization problem has been studied under two interrelated main topics: static network localization, referred to as Wireless Adhoc Network (WAN) localization, and MANET localization. In this paper the localization problem is formulated in a 2-D (two-dimensional) plane to simplify the simulation and the respective calculations, as is common practice in positioning algorithm development (e.g. Chen et al., Reference Chen, Martins, Huang, So and Sezaki2008; Costa et al., Reference Costa, Patwari, Alfred and Hero2006; Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011; Parker and Valaee, Reference Parker and Valaee2007). Note that since all the subsequent equations are symmetric with respect to the x and y directions, a 3-D formulation for real world applications can easily be obtained by extending the 2-D formulation. For a formal description of WAN localization, consider a set of n nodes in a two dimensional space with X=[X1, …, Xn], Xi∈R2, and Xi=[xi, yi] the position coordinates of node i. Let σPi2 be the directional Standard Deviation (STD) of the measured position Xi (i.e. the error STD in one direction, e.g. x or y). Thus, given Xi as the position measurement of node i, we have:
![$${\rm X}_{\rm i} = {\overline{\rm X}}_{\rm i} + \left[ {\matrix{ {{\rm e}_{{\rm x}_{\rm i}}} & {{\rm e}_{{\rm y}_{\rm i}}} \cr}} \right]$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn1.gif?pub-status=live)
Here, exi and eyi are the position measurement errors and X̄i is the ground truth position of node i. Some nodes may not have a priori position information, which means for that node σPi=∞. Conversely, there might be perfect information about some nodes' positions (we call them beacons), which means σPi=0. In this equation the coupling between the position errors in x and y is neglected, leading to a simpler model than one with full covariance information. This uncoupled model is often used to simplify the respective equations and is often adopted when the GPS positions are seen as the output of an estimation engine (Efatmaneshnik et al., 2010b; Parker and Valaee, Reference Parker and Valaee2007).
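As a small illustration of how this uncoupled prior enters the weighting used later by the MDS update, the sketch below encodes a missing prior as infinite variance and a beacon as zero variance; the function name and encoding are our own illustrative conventions, not part of the original formulation.

```python
import numpy as np

def prior_weight(sigma_p):
    """Return the 1/sigma_P^2 weight a node's a priori position receives.

    sigma_p = np.inf -> no a priori position (weight 0)
    sigma_p = 0.0    -> beacon with perfect position (infinite weight,
                        i.e. the node should simply be held at its prior)
    """
    if sigma_p == 0.0:
        return np.inf
    return 1.0 / sigma_p ** 2

# Example: node 0 has a 5 m GPS STD, node 1 has no prior, node 2 is a beacon.
print([prior_weight(s) for s in (5.0, np.inf, 0.0)])
```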
The localization problem in WAN is the estimation of coordinate matrix X given the observed inter-node distances/range between each pair of nodes i, j. In MDS literature, a special arrangement of these measurements into a matrix form is called a dissimilarity matrix, and is defined as Δ=[δi, j] i, j∈{1, …, n}. Δ is a semi-definite matrix with zero diagonal and we assume that it is symmetric:
![$${\rm \delta} _{{\rm i},{\rm j}} = {\rm \delta} _{{\rm j},{\rm i}} = {\rm \overline \delta} _{{\rm i},{\rm j}} + {{\rm e}_{\rm R}_{{\rm i},{\rm j}}} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn2.gif?pub-status=live)
where:
δi,j is the range measurement between a pair of nodes i, j.
δ̄i,j is the ground truth range.
eRi,j is the range measurement error.
Each inter-node range is normally measured twice, once by each of the two nodes. Assuming that all the inter-node ranges are measured with the same error standard deviation σR, the range error becomes eRi,j∼N(0, σR2).
Given the above formulation, the network positioning problem can be tackled by any standard estimation method such as Least Mean Squared Error (LMSE) estimation (Moore et al., Reference Moore, Leonard, Rus and Teller2004), ML estimation (Weiss and Picard, Reference Weiss and Picard2008), or Maximum A Posteriori (MAP) estimation (Costa et al., Reference Costa, Patwari, Alfred and Hero2006). These cost functions can be minimized by a suitable optimization algorithm; the cost function and the optimization algorithm together form the localization algorithm. Monte Carlo Localization (Dellaert et al., Reference Dellaert, Fox, Burgard and Thrun1999), Convex Optimization (Doherty et al., Reference Doherty, Pister and Ghaoui2001), Iterative Multi-lateration (Tay et al., Reference Tay, Chandrasekhar and Seah2006) and MDS (Costa et al., Reference Costa, Patwari, Alfred and Hero2006) are the most popular localization algorithms. Graph drawing is an interesting anchor-free network localization technique that yields a local or relative map of the node positions based on the network possibility problem (Nawaz and Jha, Reference Nawaz and Jha2007), but such techniques are usually centralized and computationally expensive.
For MANET there is additional information related to the mobility of the nodes that can be exploited. For MANET and VANET, a computational algorithm that facilitates tracking and fusion of spatial information across time is required; this is referred to as a filter. Several mobility models that facilitate information fusion across time are discussed in Gustafsson and Gunnarsson (Reference Gustafsson and Gunnarsson2005). These models differ in where they place the motion uncertainty: in the mobility itself (random walk model), in the velocity (velocity sensor model) or in the acceleration (random force model). The latter is usually used for inertial navigation systems. For VANET the velocity model is adequate and the measured node velocities can be used to filter the positions:
![$${\rm X}_{\rm t} {\rm = X}_{{\rm t - 1}} {\rm + V}_{{\rm t - 1}} {\rm \times t}_{\rm s} {\rm + w}_{{\rm t - 1}} {\rm \times t}_{\rm s} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn3.gif?pub-status=live)
where:
Xt and Xt−1 are the position vectors of all nodes in a cluster at times t and t−1.
Vt−1 is the velocity vector of all nodes at time t−1.
ts is the sampling interval.
wt−1 is a diagonal matrix, the elements of which have an uncertainty of the form (wt−1)i,i∼N(0, ξt−12).
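A minimal sketch of one prediction step of Equation (3) follows, assuming 2-D positions, a common sampling interval ts and a single velocity-noise STD ξ for all nodes; variable names are illustrative.

```python
import numpy as np

def predict_positions(x_prev, v_prev, t_s, xi, rng=None):
    """One step of the velocity mobility model in Equation (3).

    x_prev : (n, 2) positions at time t-1
    v_prev : (n, 2) measured velocities at time t-1
    t_s    : sampling interval (s)
    xi     : STD of the velocity measurement noise (m/s)
    """
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(0.0, xi, size=x_prev.shape)   # velocity uncertainty draw
    return x_prev + (v_prev + w) * t_s
```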
In this paper we assume that the vehicles are all equipped with GPS and that velocity measurements are also available for all vehicles. GPS signal carrier phase measurements are often used to derive the vehicle velocity through the classical Doppler equations (Kaplan, Reference Kaplan2005, chapter 2.5). Alternatively, odometry builds an incremental model of the motion using measurements of the elementary wheel rotations (Bonnifait et al., Reference Bonnifait, Bouron, Crubille and Meizel2001; Stephen, Reference Stephen2000). Odometers are already available in the front and rear wheels of vehicles equipped with an Anti-lock Braking System (ABS). The accuracy of the ABS velocity measurements is about 0·1 m/s, but the measurements require digital conversion.
2. MODIFIED MDS ALGORITHM FOR MOBILE NETWORKS
2.1. Non-classic MDS
MDS is a general approach to information visualization for exploring similarities or dissimilarities in data and is often regarded as a powerful graph drawing technique. An MDS algorithm starts with a matrix of item-item dissimilarities and assigns a location to each item in N-dimensional space. MDS has found many applications in chemical modelling, economics, sociology, political science and, especially, mathematical psychology and the behavioural sciences (Cox and Cox, Reference Cox and Cox1994). Classical MDS uses the eigenvectors related to the N largest eigenvalues of the dissimilarity matrix to construct the position graph/map of the nodes. There are many spectral variants of classical MDS for WAN and, since it is a fast technique, it can be applied to MANET as well. It is well known that classical MDS optimizes the node positions in terms of LMSE but is very sensitive to noise in the range measurements (Costa et al., Reference Costa, Patwari, Alfred and Hero2006). Additionally, classical MDS requires all the dissimilarity measurements and has no tolerance for missing information. As such, many variants of MDS have been introduced using weighting schemes that reduce this sensitivity and increase the robustness to missing dissimilarity information (Cox and Cox, Reference Cox and Cox1994). These variants of MDS, known as non-classic MDS, are based on iterative cost minimization schemes. The common theme of all these MDS algorithms is that they have iterative yet analytical solutions. The most famous non-classic MDS is based on the stress cost function (Cox and Cox, Reference Cox and Cox1994), a simplified version of which follows:
![$${\rm S}^{\rm k} = \sum\limits_{{\rm i} = 1}^{\rm n} {\left[ {\displaystyle{{\left\| {\overline {\rm X}_{\rm i} - {\rm X}_{\rm i}^{\rm k}} \right\|^2} \over {{\rm \sigma} _{{\rm P}_{\rm i}} ^2}} + \sum\limits_{{\rm j = 1}}^{\rm n} {\left( {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} - {\rm d}_{{\rm i},{\rm j}}} \over {{\rm \sigma} _{\rm R}^2}}} \right)} ^2} \right]} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn4.gif?pub-status=live)
Here the Xik are the nodes' positions after k iterations, X̄i is the a priori position of node i (e.g. from GPS), the operator ‖ ‖ denotes the second norm, and the operator di,j gives the estimated inter-node distance/range between each pair of nodes i and j, given their estimated positions at iteration k, (Xik, Xjk), where Xik=[xik, yik]; the distance is calculated as:
![$${\rm d}_{{\rm i},{\rm j}} = \sqrt {\left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm j}^{\rm k}} \right)^2 + \left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm j}^{\rm k}} \right)^2}$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn5.gif?pub-status=live)
The stress function is (up to sign and constant terms) the log posterior distribution of the node positions given the dissimilarity observations, log p(X|Δ) (Costa et al., Reference Costa, Patwari, Alfred and Hero2006). Thus minimizing the stress is identical to MAP estimation. Using both range measurements and initial position information, the MAP formulation also best suits the characteristics of VANET, in which nodes can be equipped with GPS providing a priori information about the nodes'/vehicles' locations. Thus MAP estimation is potentially more accurate than both the LMSE and ML network localization schemes (Desai and Tureli, Reference Desai and Tureli2007).
The analytical solution that minimizes S is derived via a majorizing function. By definition, T(X, Y) majorizes F(X) when T(X, Y)⩾F(X) for all X and Y, and T(X, X)=F(X). The main idea is that a complicated function can be minimized iteratively by minimizing its majorizing function (Costa et al., Reference Costa, Patwari, Alfred and Hero2006). Starting at an initial condition X0, the function T(X, X0) is minimized as a function of X. The newly found minimum, X1, can then be used to define a new majorizing function T(X, X1) (Costa et al., Reference Costa, Patwari, Alfred and Hero2006). This process is repeated until convergence, determined by the following condition:
![$${\rm S}^{\rm k} - {S}^{{\rm k + 1}} \lt {\rm \varepsilon} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn6.gif?pub-status=live)
The satisfaction of this criterion means that the decrease in the cost function due to the latest iteration is less than a threshold ε; stated otherwise, it means (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$$\displaystyle{{\partial {\rm S}^{\rm k}} \over {\partial {\rm X}}} = \sum\limits_{{\rm i} = 1}^{\rm n} {\displaystyle{{\partial {\rm S}^{\rm k}} \over {\partial {\rm X}_{\rm i}}} = 0} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn7.gif?pub-status=live)
The majorizing function is gained by rewriting the stress cost function in the following form (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$${\rm S} = {\rm \; \eta} _{\rm \delta} ^2 + {\rm \eta} ^2 - 2{\rm \rho} ({\rm X})$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn8.gif?pub-status=live)
where:
![$$\left\{ {\matrix{ {{\rm \eta} _{\rm \delta} ^2 = & \sum\limits_{{\rm i} = 1}^{\rm n} {\sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm i}}^2} \over {{\rm \sigma} _{\rm R}^2}}}} } \cr {{\rm \eta} ^2 = & \sum\limits_{{\rm i} = 1}^{\rm n} {\left[ {\displaystyle{{\left\| {{\rm X}_{\rm i} - {\rm \hat X}_{\rm i}^{\rm k}} \right\|^2} \over {{\rm \sigma} _{{\rm P}_{\rm i}} ^2}} + \sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm d}_{{\rm i},{\rm i}}^2} \over {{\rm \sigma} _{\rm R}^2}}}} \right]}} \cr {{\rm \rho} \left( {\rm X} \right) = & \sum\limits_{{\rm i} = 1}^{\rm n} {\sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} {\rm d}_{{\rm i},{\rm j}}} \over {{\rm \sigma} _{\rm R}^2}}}}} \cr}} \right.$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn9.gif?pub-status=live)
Because the first two terms of the above are positive, only the last term in the majorizing function can be included (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$${\rm T}\left( {{\rm X},{\rm Y}} \right) = {\rm \eta} _{\rm \delta} ^2 + {\rm \eta} ^2 - 2{\rm \rho} ({\rm X},{\rm Y})$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn10.gif?pub-status=live)
where:
![$${\rm \rho} \left( {{\rm X},{\rm Y}} \right) = \sum\limits_{{\rm i} = 1}^{\rm n} {\sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}}} \over {{\rm \sigma} _{{\rm R}_{}} ^2}}}} \times \displaystyle{{\left( {{\rm X}_{\rm i} - {\rm X}_{\rm j}} \right)^{\rm T} ({\rm Y}_{\rm i} - {\rm Y}_{\rm j} )} \over {{\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)}}$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn11.gif?pub-status=live)
Since the Cauchy-Schwarz inequality states that (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$${\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right) = \displaystyle{{{\rm d}_{{\rm i},{\rm j}} \left( {\rm X} \right){\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)} \over {{\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)}} \geqslant \displaystyle{{\left( {{\rm X}_{\rm i} - {\rm X}_{\rm j}} \right)^{\rm T} ({\rm Y}_{\rm i} - {\rm Y}_{\rm j} )} \over {{\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)}}$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn12.gif?pub-status=live)
Therefore T majorizes Sk or T(X,Y)⩾Sk. Minimizing Sk through a majorizing algorithm is now a simple task of finding the minimum of T (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$$\eqalign {\displaystyle{{\partial {\rm T}\left( {{\rm X},{\rm Y}} \right)} \over {\partial {\rm X}_{\rm i}}} = & {\rm X}_{\rm i} \left( {\displaystyle{{\rm n} \over {{\rm \sigma} _{\rm R}^2}} + \displaystyle{1 \over {{\rm \sigma} _{{\rm P}_{\rm i}} ^2}}} \right) - \sum\limits_{{\rm j = 1}}^{\rm n} {\displaystyle{{{\rm X}_{\rm j}} \over {{\rm \sigma} _{\rm R}^2}} - \displaystyle{{\overline {\rm X}_{\rm j}} \over {{\rm \sigma} _{\rm R}^2}} - \displaystyle{{{\rm Y}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}}} \over {{\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)}} }}\cr & + \displaystyle{1 \over {{\rm \sigma} _{\rm R}^2}} \sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm Y}_{\rm j} {\rm \overline \delta} _{{\rm i},{\rm j}}} \over {{\rm d}_{{\rm i},{\rm j}} \left( {\rm Y} \right)}} = 0}}} $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn13.gif?pub-status=live)
Based on this gradient a simple iterative-analytical solution is derived as (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$$\left\{ {\matrix{ {{\rm x}_{\rm i}^{{\rm k} + 1} = {\rm a}_{{\rm i}_{\rm x}} \left( {\displaystyle{{\overline {\rm X}_{\rm i}} \over {{\rm \sigma} _{\rm P}^2}} + {\rm x}_{}^{\rm k} {\rm b}_{\rm i}^{\rm k}} \right)} \cr {{\rm y}_{\rm i}^{{\rm k} + 1} = {\rm a}_{{\rm i}_{\rm y}} \left( {\displaystyle{{\overline {\rm X}_{\rm i}} \over {{\rm \sigma} _{\rm P}^2}} + {\rm y}_{}^{\rm k} {\rm b}_{\rm i}^{\rm k}} \right)} \cr}} \right.$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn14.gif?pub-status=live)
where xk=[x1k…,xnk], yk=[y1k…,ynk], and (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$$\left\{ {\matrix{ {{\rm a}_{{\rm i}_{\rm x}} = & \left( {\displaystyle{{\rm n} \over {\sigma _{\rm R} ^2}} + \displaystyle{1 \over {\sigma _{{\rm p}_{\rm x}} ^2}}} \right)^{ - 1}} \cr {{\rm a}_{{\rm i}_{\rm y}} = & \left( {\displaystyle{{\rm n} \over {\sigma _{\rm R} ^2}} + \displaystyle{1 \over {\sigma _{{\rm p}_{\rm y}} ^2}}} \right)^{ - 1}} \cr}} \right.$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn15.gif?pub-status=live)
The elements of bik=[b1,…,bn]T are defined by (Costa et al., Reference Costa, Patwari, Alfred and Hero2006):
![$$\left\{ {\matrix{ \eqalign{{\rm b}_{\rm j} = & \displaystyle{1 \over {{\rm \sigma} _{\rm R}^2}} \left( {1 - \displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}}} \over {{\rm d}_{{\rm i},{\rm j}}}}} \right), {\rm j} \ne {\rm i} \cr & {\rm b}_{\rm i} = \displaystyle{1 \over {{\rm \sigma} _{\rm R}^2}} \sum\limits_{{\rm j} = 1}^{\rm n} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}}} \over {{\rm d}_{{\rm i},{\rm j}}}}}} \cr }}\right. $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn16.gif?pub-status=live)
Note that X̄i and Xik+1 are 2×1 matrices, Xk is a 2×n matrix, and bik is n×1, where n is the number of nodes. The solution Xi is updated k times until Equation (6) is satisfied. One major disadvantage of the above MDS approach is that it does not give an estimate of the covariance of the state vector. In the next section we present a novel approach to estimating the covariance of the MDS solution. Note that the covariance estimate is necessary to enable the filtering capability that significantly increases the accuracy for mobile networks.
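The following is a minimal sketch of one majorization iteration, Equations (14)–(16), assuming a fully connected cluster (every pair has a range measurement) and, for simplicity, the same a priori STD in x and y for each node; variable names are illustrative.

```python
import numpy as np

def mds_iteration(X, X_prior, delta, sigma_R, sigma_P):
    """One majorization step of the non-classic MDS (Equations (14)-(16)).

    X       : (n, 2) current position estimates X^k
    X_prior : (n, 2) a priori positions (e.g. GPS)
    delta   : (n, n) symmetric measured ranges (dissimilarity matrix)
    sigma_R : ranging STD (same for all pairs)
    sigma_P : (n,) a priori position STD per node (np.inf encodes "no GPS prior")

    A sketch only: it assumes every pair has a range measurement and adds no
    numerical guards beyond avoiding division by zero on the diagonal.
    """
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # d_{i,j}(X^k)
    np.fill_diagonal(d, 1.0)                                    # diagonal unused
    ratio = delta / d
    np.fill_diagonal(ratio, 0.0)

    X_new = np.empty_like(X)
    for i in range(n):
        a_i = 1.0 / (n / sigma_R**2 + 1.0 / sigma_P[i]**2)      # Equation (15)
        b = (1.0 - ratio[i]) / sigma_R**2                        # b_j, j != i
        b[i] = ratio[i].sum() / sigma_R**2                       # b_i  (Equation (16))
        X_new[i] = a_i * (X_prior[i] / sigma_P[i]**2 + b @ X)    # Equation (14)
    return X_new
```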
2.2. State Covariance Estimation
In the last section we showed a way to minimize the cost function SK over K iterations to arrive at the state estimate X̂K of the column state vector X. A covariance estimation technique was presented in Cheung and So (Reference Cheung and So2005) for a state estimated by means of any cost function such as SK:
![$${\rm \hat P}^{\rm K} = {\rm \; E}\left[ {\left( {{\rm X} - {\rm \hat X}^{\rm K}} \right)\left( {{\rm X} - {\rm \hat X}^{\rm K}} \right)^{\rm T}} \right] = \left. {{\rm E}\left[ {\displaystyle{{\partial ^2 {\rm S}^{\rm K}} \over {\partial {\rm X}\partial {\rm X}^{\rm T}}}} \right]^{ - 1} {\rm E}\left[ {\left( {\displaystyle{{\partial {\rm S}^{\rm K}} \over {\partial {\rm X}}}} \right)\left( {\displaystyle{{\partial {\rm S}^{\rm K}} \over {\partial {\rm X}}}} \right)^{\rm T}} \right]{\rm E}\left[ {\displaystyle{{\partial ^2 {\rm S}^{\rm K}} \over {\partial {\rm X}\partial {\rm X}^{\rm T}}}} \right]^{ - 1}} \right|_{{\rm X} = {\rm \hat X}_{\rm k}} $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn17.gif?pub-status=live)
However, this technique requires calculation of the Hessian and Jacobian matrices of SK relative to the state vector and also the inverse of the Hessian. From a computational complexity point of view this technique is not affordable when considering the fast computation requirements of VANET. Since the non-classic MDS algorithm provides an analytical solution, we can use it to estimate the covariance of the state at every iteration and pass that to the next iteration via a simple Taylor linearization. Considering all the measurements as constants in Equation (14), the state vector at iteration k+1 can be considered as a function of the state at iteration k:
![$${\rm X}^{{\rm k} + 1} = {\rm f}\left( {{\rm X}^{\rm k}} \right)$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn18.gif?pub-status=live)
The function f can then be linearized by Taylor expansion around the estimated state X̂k:
![$${\rm X}^{{\rm k + 1}} = {\rm f}\left( {{\rm \hat X}^{\rm k}} \right) + {\rm A}^{\rm k} \left( {{\rm X}^{\rm k} - {\rm \hat X}^{\rm k}} \right)$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn19.gif?pub-status=live)
where Ak is the Jacobian of the function f:
![$${\rm A}^{\rm k} = \left. {\displaystyle{{\partial {\rm f\,(X}^{\rm k} )} \over {\partial {\rm X}^{\rm k}}}} \right|_{{\rm X}^{\rm k} {\rm = \hat X}^{\rm k}} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn20.gif?pub-status=live)
Organizing Xk as a 2n-element vector [x1k, …, xnk, y1k, …, ynk], Ak is a 2n×2n matrix with the following elements:
![$$\eqalign{& {\rm for\; i} \leqslant {\rm n},{\rm \; \;} \left[ {{\rm A}^{\rm k}} \right]_{{\rm i},{\rm j}} = \left\{ {\matrix{ {\displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \left( {1 + \displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} \left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm j}^{\rm k}} \right)^2} \over {{\rm d} _{{\rm i},{\rm j}}}^3}} \right),{\rm j} \ne {\rm i},{\rm \; j} \leqslant {\rm n}} \cr { - \displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \left( {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm j}^{\rm k}} \right)\left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm j}^{\rm k}} \right)} \over {{\rm d}_{{\rm i},{\rm j}}} ^3}} \right),{\rm j} \ne {\rm i},{\rm \; j} \geqslant {\rm n}} \cr {\matrix{ { - \displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \sum\limits_{{\rm l} \in \left\{ {1 \ldots {\rm n}} \right\} - \left\{ {\rm i} \right\}} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm l}} \left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm l}^{\rm k}} \right)^2} \over {{\rm d}_{{\rm i},{\rm l}}} ^3}, {\rm i} = {\rm j}}} \cr {\displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \mathop \sum \limits_{{\rm l} \in \left\{ {1 \ldots {\rm n}} \right\} - \left\{ {\rm i} \right\}} \displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm l}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm l}^{\rm k}} \right)\left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm l}^{\rm k}} \right)} \over {{\rm d}_{{\rm i},{\rm l}}} ^3}, {\rm j} = {\rm i} + {\rm n}} \cr}} \cr}} \right. $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn21.gif?pub-status=live)
![$$\eqalign{& {\rm for\; i} \geqslant {\rm n},{\rm \; \;} \left[ {{\rm A}^{\rm k}} \right]_{{\rm i},{\rm j}} = \left\{ {\matrix{ { - \displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \left( {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm j}^{\rm k}} \right)\left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm j}^{\rm k}} \right)} \over {{\rm d}_{{\rm i},{\rm j}}} ^3}} \right),{\rm j} \ne {\rm i},{\rm \; j} \leqslant {\rm n}} \cr {\displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \left( {1 + \displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm j}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm j}^{\rm k}} \right)^2} \over {{\rm d}_{{\rm i},{\rm j}}} ^3}} \right),{\rm j} \ne {\rm i},{\rm \; j} \geqslant {\rm n}} \cr {\matrix{ { - \displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \mathop \sum \limits_{{\rm l} \in \left\{ {1 \ldots {\rm n}} \right\} - \left\{ {\rm i} \right\}} \displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm l}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm l}^{\rm k}} \right)\left( {{\rm y}_{\rm i}^{\rm k} - {\rm y}_{\rm l}^{\rm k}} \right)} \over {{\rm d}_{{\rm i},{\rm l}}} ^3}, {\rm i} = {\rm j}} \cr {\displaystyle{{{\rm a}_{\rm i}} \over {{\rm \sigma} _{\rm R}^2}} \sum\limits_{{\rm l} \in \left\{ {1 \ldots {\rm n}} \right\} - \left\{ {\rm i} \right\}} {\displaystyle{{{\rm \overline \delta} _{{\rm i},{\rm l}} \left( {{\rm x}_{\rm i}^{\rm k} - {\rm x}_{\rm l}^{\rm k}} \right)^2} \over {{\rm d}_{{\rm i},{\rm l}}} ^3}, {\rm j} = {\rm i} + {\rm n}}} \cr}} \cr}} \right.} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn21a.gif?pub-status=live)
Since X̂k+1 = f(X̂k), by re-arranging Equation (19) we have:
![$${\rm X}^{{\rm k} + 1} - {\rm \hat X}^{{\rm k} + 1}\! = {\rm A}^{\rm k} {\rm \;} ({\rm X}^{\rm k} - {\rm \hat X}^{\rm k} )$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn22.gif?pub-status=live)
Multiplying each side of this equation by its transpose and taking expected values leads to:
![$${\rm E}\left[ {\left( {{\rm X}^{{\rm k} + 1} - {\rm \hat X}^{{\rm k + 1}}} \right)\left( {{\rm X}^{{\rm k} + 1} - {\rm \hat X}^{{\rm k + 1}}} \right)^{\rm T}} \right] = {\rm A}^{\rm k} {\rm \; E}\left[ {({\rm X}^{\rm k} - {\rm \hat X}^{\rm k} )({\rm X}^{\rm k} - {\rm \hat X}^{\rm k} )^{\rm T}} \right]{\rm A}^{\rm k} ^{\rm T} $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn23.gif?pub-status=live)
Consider Pk as the covariance of the state at iteration k; then from Equation (23), Pk+1 is:
![$${\rm P}^{{\rm k} + 1} = {\rm A}^{\rm k} {\rm P}^{\rm k} {\rm A}^{{\rm k}^{\rm T}} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn24.gif?pub-status=live)
This is a recursive formula and we need to know P1 (i.e. the covariance of the position measurements) in order to estimate the state covariance at the last iteration. The presented technique for covariance estimation is better than that of Cheung and So (Reference Cheung and So2005) for the specific case of non-classic MDS because it needs the calculation of only one Jacobian matrix and no matrix inversion is involved. It is therefore better from a computational complexity point of view and better suited to VANET computations. Figure 3 shows the pseudo-code of the MDS algorithm together with the state covariance estimation.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081312-96827-mediumThumb-S0373463311000610_fig3g.jpg?pub-status=live)
Figure 3. The pseudo-code of the MDS algorithm.
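A minimal sketch of the covariance propagation of Equation (24) is given below. For illustration it builds the Jacobian of Equation (20) by finite differences of the update map f rather than from the closed-form entries of Equation (21); in practice the closed form is cheaper.

```python
import numpy as np

def numerical_jacobian(f, x_hat, eps=1e-6):
    """Finite-difference Jacobian of the update map f at x_hat (Equation (20)).

    Purely illustrative: the closed-form A^k of Equation (21) avoids this cost.
    """
    fx = f(x_hat)
    A = np.empty((fx.size, x_hat.size))
    for j in range(x_hat.size):
        xp = x_hat.copy()
        xp[j] += eps
        A[:, j] = (f(xp) - fx) / eps
    return A

def propagate_covariance(P_k, A_k):
    """Equation (24): P^{k+1} = A^k P^k (A^k)^T."""
    return A_k @ P_k @ A_k.T
```

Starting from P1 equal to the covariance of the position measurements, this propagation is run alongside the position update until the stopping rule of Equation (6) is met.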
2.3. ML Estimation for MDS Filtering
Filters are, in essence, estimation engines that use mobility models to refine the estimate. In simple terms this mobility model gives a second estimate of the state vector Xt. Based on Equation (3) the positions and their covariances can be estimated as:
![$$\left\{ {\matrix{ {{\rm X}_{\rm t}^{\rm M} = {\rm X}_{{\rm t} - 1} + {\rm V}_{{\rm t} - 1} {\rm t}_{\rm s} } \cr {{\rm P}_{\rm t}^{\rm M} = {\rm P}_{{\rm t} - 1} + {\rm diag}\left( {1_{\rm n} {\rm \xi} _{\rm t}^2} \right)} \cr}} \right.$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn25.gif?pub-status=live)
1n is a vector of all ones of size n, Vt−1 is the velocity vector of all nodes at time t−1, and ξt2 is the variance of the velocity measurements.
The estimate derived from the MDS algorithm, X̂tK and P̂tK, and that derived from the mobility model, X̂tM and P̂tM, are next combined by an ML estimation technique, leading to an overall estimate of the state Xt and its covariance Pt. Consider the following simple model:
![$${\rm Y} = {\rm H} \times {\rm X}_{\rm t} + {\rm \kappa} $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn26.gif?pub-status=live)
where:
![$${\rm \; \kappa} \sim {\rm N}\left( {0_{4{\rm n} \times 4{\rm n}}, {\rm R}} \right)$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqnU21.gif?pub-status=live)
![$${\rm and}\!:\; {\rm R} = \left[ {\matrix{ {{\rm \hat P}_{\rm t}^{\rm K}} & {0_{2{\rm n} \times 2{\rm n}}} \cr {0_{2{\rm n} \times 2{\rm n}}} & {{\rm \hat P}_{\rm t}^{\rm M}} \cr}} \right],{\rm and \ Y} = \left[\openup 1pt {\matrix{ {{\rm X}_{\rm t}^{\rm K}} \cr {{\rm X}_{\rm t}^{\rm M}} \cr}} \right].$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqnU22.gif?pub-status=live)
The ML estimation is based on the assumption that the two contributing position estimates (from the MDS and the mobility model) are unbiased and have normal distributions:
![$$\left\{ {\matrix{ {{\rm X}_{\rm t}^{\rm K} \sim {\rm N}({\rm \hat X}_{\rm t}^{\rm K}, {\rm \hat P}_{\rm t}^{\rm K} )} \cr {{\rm X}_{\rm t}^{\rm M} \sim {\rm N} ({\rm \hat X}_{\rm t}^{\rm M}, {\rm \hat P}_{\rm t}^{\rm M} )} \cr}} \right. $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn27.gif?pub-status=live)
Finally, the maximum likelihood estimation of state Xt and its covariance Pt are:
![$$\left\{\eqalign{{\rm X}_{\rm t} = &\ ({\rm H}^{\rm T} {\rm R}^{ - 1} {\rm H})^{ - 1} \; {\rm H}^{\rm T} {\rm R}^{ - 1} Y\; \cr & \quad \quad {\rm P}_{\rm t} = ({\rm H}^{\rm T} {\rm R}^{ - 1} {\rm H})^{ - 1}} \right.$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn28.gif?pub-status=live)
![$${\rm where \ H} = \left[ {\matrix{ {{\rm I}_{2{\rm n} \times 2{\rm n}}} \cr {{\rm I}_{2{\rm n} \times 2{\rm n}}} \cr}} \right].$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqnU23.gif?pub-status=live)
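A minimal sketch of the fusion in Equations (26)–(28) follows: with H = [I; I] and block-diagonal R, the ML estimate reduces to inverse-covariance weighting of the two unbiased estimates (the mobility-model inputs come from Equation (25)); function and variable names are illustrative.

```python
import numpy as np

def mobility_estimate(x_prev, P_prev, v_prev, t_s, xi_t):
    """Equation (25): mobility-model position estimate and its covariance."""
    x_mob = x_prev + v_prev * t_s
    P_mob = P_prev + np.eye(P_prev.shape[0]) * xi_t**2
    return x_mob, P_mob

def ml_fuse(x_mds, P_mds, x_mob, P_mob):
    """Equations (26)-(28): ML fusion of the MDS and mobility-model estimates.

    With H = [I; I] and block-diagonal R this is inverse-covariance weighting.
    """
    W_mds = np.linalg.inv(P_mds)
    W_mob = np.linalg.inv(P_mob)
    P_t = np.linalg.inv(W_mds + W_mob)            # P_t = (H^T R^-1 H)^-1
    x_t = P_t @ (W_mds @ x_mds + W_mob @ x_mob)   # X_t
    return x_t, P_t
```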
The MDS Filter (MDSF) algorithm is depicted in the pseudo-code at Figure 4.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081317-77496-mediumThumb-S0373463311000610_fig4g.jpg?pub-status=live)
Figure 4. Pseudo-code of the MDSF algorithm.
3. BENCHMARKS
In this section an EKF algorithm and the CRLB for CP are introduced. The performance of MDSF is then compared with that of the EKF and with the CRLB benchmark in Section 4.
3.1. EKF for CP
A Kalman Filter (KF) is an optimal, least squares, recursive state estimation process that can be applied to a stochastic dynamic system such as a vehicular network (Chui and Chen, Reference Chui and Chen1987). The KF is a linear, unbiased and minimum error variance recursive estimation algorithm for the state of a dynamic system that uses noisy measurements taken at discrete real-time intervals (Chui and Chen, Reference Chui and Chen1987). The KF process is as follows. Consider the velocity mobility model in Equation (3) as the dynamic state model. The range measurements are then considered in a measurement model:
![$$\matrix{ \eqalign{{\rm X}_{\rm t} = & {\rm X}_{{\rm t} - 1} + {\rm V}_{{\rm t} - 1} {\rm t}_{\rm k} + {\rm w}_{{\rm t} - 1} {\rm t}_{\rm t} \cr & {\rm Z}_{\rm t} = {\rm g}({\rm X}_{\rm t} ) + {\rm \eta}_{\rm t}} \cr } $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn29.gif?pub-status=live)
where:
Xt is a vector containing the position coordinates of all nodes in a cluster.
Zt is the vector of measured ranges at instant t in a column vector.
The process noise wt and the measurement noise ηt are independent zero mean Gaussian white noises with covariance matrices Qt and Rt respectively. Since for CP the relationship between Zt and Xt is not linear, a first-order Taylor linearization is required:
![$${\rm C}_{\rm t} = \left. {\displaystyle{{\partial {\rm g}({\rm X})} \over {\partial {\rm X}}}} \right|_{{\rm X} = {\rm X}_{{\rm t}|{\rm t} - 1}} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn30.gif?pub-status=live)
A KF with this kind of linearization is known as EKF with the following solution:
![$$\eqalign{& {\rm X}_{{\rm t}|{\rm t} - 1} = {\rm A}_{{\rm t} - 1} {\rm X}_{{\rm t} - 1|{\rm t} - 1} \cr & {\rm P}_{{\rm t|t} - 1} = {\rm A}_{{\rm t} - 1} {\rm P}_{{\rm t} - 1|{\rm t} - 1} {\rm A}_{{\rm t} - 1}^{\rm T} + \Gamma _{{\rm t} - 1} {\rm Q}_{{\rm t} - 1} \Gamma _{{\rm t} - 1}^{\rm T} \cr & {\rm G}_{\rm t} = {\rm A}_{{\rm t} - 1} {\rm C}_{\rm t} ^{\rm T} ({\rm C}_{\rm t} {\rm P}_{{\rm t|t} - 1} {\rm C}_{\rm t}^{\rm T} + {\rm R}_{\rm t} )^{ - 1} \cr & {\rm X}_{{\rm t|t}} = {\rm X}_{{\rm t}|{\rm t} - 1} + {\rm G}_{\rm t} ({\rm Z}_{\rm t} - {\rm C}_{\rm t} {\rm X}_{{\rm t}|{\rm t} - 1} ) \cr & {\rm P}_{{\rm t|t}} = (1 - {\rm G}_{\rm t} {\rm C}_{\rm t} ){\rm P}_{{\rm t}|{\rm t} - 1}} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn31.gif?pub-status=live)
At and Γt are identity matrices. The process noise covariance Qt for the CP case becomes a diagonal matrix with the variances of the velocity measurements on its diagonal. If we consider two sources of measurement (the GPS positions and the DSRC ranges) then the measurement vector contains the GPS positions of all vehicles and the measured ranges between pairs of vehicles (Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011). The measurement noise covariance Rt is then a diagonal matrix with the variances of the GPS measurements and range measurements on its diagonal (Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011).
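A minimal sketch of one EKF cycle following Equations (29)–(31) is given below, with identity state transition as in the text; the measurement function g and its Jacobian (Equation (30)) are supplied by the caller, the gain is written in the standard form Pt|t−1CtT(CtPt|t−1CtT+Rt)−1, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def ekf_step(x_prev, P_prev, v_prev, t_s, z, g, jac_g, Q, R):
    """One EKF predict/update cycle for the CP model of Equation (29),
    with identity state transition matrices (A = Gamma = I, as in the text).

    x_prev, P_prev : previous state (2n,) and covariance (2n, 2n)
    v_prev         : measured velocities stacked like the state (2n,)
    z              : measurement vector (GPS positions and DSRC ranges)
    g, jac_g       : measurement function and its Jacobian (Equation (30))
    Q, R           : process and measurement noise covariances
    """
    # Predict with the velocity mobility model of Equation (3).
    x_pred = x_prev + v_prev * t_s
    P_pred = P_prev + Q

    # Update; standard-form Kalman gain.
    C = jac_g(x_pred)
    S = C @ P_pred @ C.T + R
    G = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + G @ (z - g(x_pred))
    P_new = (np.eye(P_pred.shape[0]) - G @ C) @ P_pred
    return x_new, P_new
```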
3.2. Cramer Rao Lower Bound (CRLB)
The CRLB sets the lower bound on the variance of any unbiased estimator. Assuming that ΣP is the covariance matrix of the a priori position information X (e.g. provided by GPS), a CRLB for CP can be introduced as in Efatmaneshnik et al. (Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011):
![$${\rm C} = \left( {\sum\limits_{\rm p}^{ - 1} { + {\rm F(X|}\delta {\rm )}}} \right)^{ - 1} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn32.gif?pub-status=live)
where F(X|δ) is the Fisher information matrix, which is a function of the ground truth positions and the ranging STD (Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011). The recursive CRLB for mobile networks (Efatmaneshnik et al., Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011) becomes:
![$${\rm C}_{{\rm t} + 1} = \left( {({\rm C}_{\rm t} + {\rm T}_{\rm t} \Sigma _{\rm v} )^{ - 1} + \sum\limits_{\rm P}^{ - 1} { + {\rm F(X|}\delta {\rm )}}} \right)^{ - 1} $$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn33.gif?pub-status=live)
where:
Ct+1 and Ct are the CRLB matrices at instants t+1 and t.
Tt is the scalar elapsed time.
Σv is the diagonal covariance matrix of the measured velocity of vehicles.
From Equation (33) it is clear that a noisier velocity measurement, and thus a larger σv2, leads to a smaller contribution from the previous instant's position information; if it is too large, filtering has no effect on the accuracy. An indicator of CP error for each node over a period of time (T) can be readily obtained based on the Circular Error Probability (CEP) for each instant, EtCRLB, and as an average over the simulation run time, ECRLB:
![$$\matrix{ \eqalign{{\rm E}_{\rm t}^{{\rm CRLB}} = & \sqrt {\left( {{\rm \sigma} _{{\rm x}_{\rm i}} ^{\rm t} ^{^2} + {\rm \sigma} _{{\rm y}_{\rm i}} ^{\rm t} ^{^2}} \right)} \cr {\rm E}^{{\rm CRLB}} = & \mathop \sum \limits_{{\rm t} = 1}^{\rm T} \displaystyle{{{\rm E}_{\rm t}^{\rm c}} \over {\rm T}}} \cr }}$$](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn34.gif?pub-status=live)
where σxit2 and σyit2 respectively are the estimated error variances of node i at time t in the x and y directions.
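A sketch of Equations (32)–(34) follows. The Fisher information is built in the standard form for Gaussian range measurements between all pairs; the exact F(X|δ) of the paper follows Efatmaneshnik et al. (Reference Efatmaneshnik, Kealy, Tabatabaei Balaei and Dempster2011) and may differ in detail, so the construction and the per-node block ordering below are illustrative assumptions.

```python
import numpy as np

def range_fim(X_true, sigma_R):
    """Standard Fisher information for Gaussian range measurements between
    all pairs of nodes (an illustrative stand-in for F(X|delta)).

    X_true : (n, 2) ground truth positions
    State ordering assumed here: [x1, y1, ..., xn, yn].
    """
    n = X_true.shape[0]
    F = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            u = X_true[i] - X_true[j]
            d2 = float(u @ u)
            blk = np.outer(u, u) / (sigma_R**2 * d2)
            F[2*i:2*i+2, 2*i:2*i+2] += blk
            F[2*i:2*i+2, 2*j:2*j+2] -= blk
    return F

def recursive_crlb(C_prev, F, Sigma_P_inv, Sigma_v, T_t):
    """Equation (33): C_{t+1} = ((C_t + T_t Sigma_v)^-1 + Sigma_P^-1 + F)^-1."""
    return np.linalg.inv(np.linalg.inv(C_prev + T_t * Sigma_v) + Sigma_P_inv + F)

def cep_error(C, i):
    """Equation (34): per-node error indicator from the CRLB matrix C."""
    return np.sqrt(C[2*i, 2*i] + C[2*i + 1, 2*i + 1])
```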
4. SIMULATION AND RESULTS
In this section, first a road traffic network simulation is presented, and then the performances of MDSF and EKF are compared with each other and with the CRLB. Also, the effect of GPS outages on CP performance is discussed via the simulation.
4.1. Road Traffic Simulation Setup
A simulation study was set up to demonstrate the performance of MDSF as a CP algorithm. A road traffic network was simulated by Netlogo™, an agent-based and Java-based programming platform. Figure 5 is a snapshot of the traffic network. The length of the horizontal road was 2 km. From this simulation the ground truth values of the vehicles’ positions were obtained. The rest of the calculations were performed in the MATLAB™ environment. Four traffic conditions were generated and controlled by the total number of vehicles in the road traffic network simulation. The considered traffic rates were 300 vehicles/hour in the main streets for sparse/fast traffic, 600 vehicles/hour, 900 vehicles/hour and 1200 vehicles/hour for heavy/slow traffic. In all four cases a target vehicle travelled a similar path of about 2 km. The simulation time in each case depended on the traffic conditions (e.g. for sparse traffic it took 350 s to travel the same route as the heavy traffic in about 550 s). The DSRC range was set at 250 m. Table 1 shows the average numbers of neighbouring vehicles over a simulation run for each of the traffic conditions for the particular DSRC range.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_fig5g.jpeg?pub-status=live)
Figure 5. Netlogo agent based software was used for the traffic simulation.
Table 1. The characteristic of different traffic conditions.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_tab1.gif?pub-status=live)
4.2. Performance Results
Based on the CRLB performance metrics in Equation (34), Figure 6a shows the expected error of CP for heavy traffic conditions. The errors are averaged over the simulation run time. The ranging STD, σR, varied between 1 m and 10 m as shown on the curves; the STD of the velocity measurement varied from 0·1 m/s to 2·1 m/s. It can be seen that the CP error has high sensitivity to the velocity error. Better than 1 m accuracies are typical when the velocity measurement has an STD error of about 0·1 m/s. Note that this velocity error margin can be met with ABS based odometers (Bonnifait et al., Reference Bonnifait, Bouron, Crubille and Meizel2001; Stephen, Reference Stephen2000). Figure 6b shows the typical CP error in all types of traffic condition when the ranging STD is 10 m. This figure is obtained from Equation (33) and shows that the velocity measurement accuracy contributes more to the CP accuracy than the traffic density.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081426-80202-mediumThumb-S0373463311000610_fig6g.jpg?pub-status=live)
Figure 6. (a) Error bounds based on the CRLB for heavy traffic conditions with different ranging STD. (b) Average CRLB-based error over the simulation run in different traffic conditions.
Two performance indicators can be used to demonstrate the performances of MDSF and EKF, a time series and an average performance over the simulation run time for the target vehicle (node i):
![$$\eqalign{{\rm E}_{\rm t} = & \sqrt {{\rm \sigma} _{{\rm x}_{\rm i}} ^{\rm t} ^{^2} + {\rm \sigma} _{{\rm y}_{\rm i}} ^{\rm t} ^{^2}} \cr {\rm E} = & \mathop \sum \limits_{{\rm t} = 1}^{\rm T} \displaystyle{{{\rm E}_{\rm T}} \over {\rm T}}} $](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_eqn36.gif?pub-status=live)
where:
Xit and X̄it respectively are the estimated coordinate vector and the ground truth coordinates of the target vehicle at time t.
T is the simulation run time that varied for different traffic conditions.
In order to compare the produced error with the CRLB the simulation was repeated 500 times in MATLAB™. Then E and Et were averaged over the number of simulation runs. This is necessary because CRLB sets the asymptotic bound for minimum variance estimation.
Figure 7a shows the superior performance of MDSF compared with the EKF algorithm, together with the CRLB, for the heavy traffic condition. On average the MDSF performs about 0·2 m better than the EKF in heavy traffic, which is a significant improvement. Figure 7b shows the performance of MDSF, EKF and the CP CRLB for all traffic conditions when averaged over time. The MDSF leads to lower CP error than the EKF in all traffic conditions.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081531-73049-mediumThumb-S0373463311000610_fig7g.jpg?pub-status=live)
Figure 7. (a) The performance comparison of EKF and MDSF with CRLB for the heavy traffic condition. (b) The performance comparison of the EKF and MDSF with CRLB for all types of traffic.
4.3. Effect of GPS Dropout
GPS outages are frequent in Central Business District (CBD) areas, where deep urban canyons mask the line of sight to the GPS satellites to the extent that fewer than four satellites may remain visible, making any position estimate impossible. Within urban and other difficult environments, the position estimate can also be variably biased due to the multipath effect: a reflection of the satellite signal is received, and the delay caused by the reflection creates a bias in the position. A set of GPS data collected on 25/11/2009 in the Sydney CBD with a typical Ublox GPS receiver showed a maximum error of 90 m from the centre line of the road and several outages in the form of masked areas (Figure 8). Most of the masked areas are segments of road less than 25 m long and only a few are more than 35 m long. The total distance of the travelled path shown on the map is 2·7 km. Most of the outages cover very short lengths of the streets, and the longer the masked area, the lower its likelihood. The total number of outages longer than 15 m is 25, which means there are roughly 10 outages of those lengths in every kilometre of the CBD area. Based on these observations, the reliability and integrity of GNSS positioning is questionable for DSRC safety applications, necessitating the utilization of CP techniques.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081537-79243-mediumThumb-S0373463311000610_fig8g.jpg?pub-status=live)
Figure 8. The trajectory of the collected GPS data in Sydney CBD area.
A simulation was set up to demonstrate the robustness of the CP system and the MDSF algorithm to GPS outages. In this simulation, ten outages of 15 m to 35 m length were randomly allocated in every kilometre of the simulated road network. If a vehicle entered an outage zone, its position error deviation was set to infinity (or a very large number), which effectively means the absence of position information in those zones. Figure 9a compares the performance of MDSF and EKF with the CRLB in the presence of GPS outages for the heavy traffic rate. In this figure, when the dashed line has a value of zero it means an outage is taking place for the target vehicle. Note that the outage is applied to any neighbouring vehicle that is in the outage zone. Also note that the average number of neighbouring vehicles that are in an outage zone with the target vehicle at any time depends on the traffic rate. For the heavy traffic rate, on average five vehicles entered the outage zone with the target vehicle. The figure shows that both MDSF and EKF are robust and stable in the presence of outages. It is also clear that the effect of outages is not significant, because the CRLB curve is very smooth and shows no sharp spikes due to unavailability of the position of the target vehicle. Thus CP is shown to be an effective system for bridging GPS outages. Figure 9b shows the increase in the average CRLB-based CP error relative to the no-outage case in Figure 6b. From this figure it is evident that, even for sparse traffic, the GPS outages do not have a significant effect on the CP error (less than 10 cm in the worst case).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_fig9g.jpeg?pub-status=live)
Figure 9. (a) Performance of MDSF and EKF compared with each other and with the CRLB in the presence of outages for the heavy traffic rate. (b) The increase in CP error for ranging error σR=10 m due to 10 outages of 15 m to 35 m in every kilometre of the road for different traffic rates.
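A small sketch of how such outages can be injected follows: vehicles inside an outage zone simply have their a priori position STD set to a very large number; the zone representation (intervals along the road axis) is an illustrative assumption.

```python
import numpy as np

LARGE_STD = 1.0e6   # effectively "infinite" position STD: no GPS information

def apply_outages(sigma_P, road_coords, outage_zones):
    """Set the a priori position STD to a very large number for every vehicle
    inside an outage zone (zones given as (start_m, end_m) intervals along the
    road axis). Illustrative of how outages were injected in the simulation.
    """
    sigma_out = np.asarray(sigma_P, dtype=float).copy()
    for i, s in enumerate(road_coords):   # distance of vehicle i along the road
        if any(start <= s <= end for start, end in outage_zones):
            sigma_out[i] = LARGE_STD
    return sigma_out
```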
5. FAST CONVERGENCE WITH TIGHT MAP FUSION
Many in-car navigation systems offer CD-ROM map databases from which directional information is provided to the driver (Kealy et al., Reference Kealy, Scott-Young and Collier2004). These databases possess intelligent information that can be incorporated either as constraints on the MDSF solution or integrated directly as measurements within the filter. Based on the assumption that the vehicle is travelling on a road, which is typically the case, a simple constraint can be imposed on the location solution to immediately improve the visual accuracy of the computed position of the vehicle. This is called the snap-to-centre-line MM rule: the state estimate of MDSF at time t, Xt, can be corrected by bringing the solutions of all nodes to the nearest point on the centre line of the road. Note that the existence of multiple lanes negatively affects the effectiveness of this technique.
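A minimal sketch of the snap-to-centre-line rule follows, projecting a position estimate onto the nearest point of a polyline centre line; real map databases would additionally need road-segment selection and lane handling, so this is illustrative only.

```python
import numpy as np

def snap_to_centreline(point, centreline):
    """Project `point` onto the nearest point of the polyline `centreline`
    ((m, 2) array of vertices), i.e. the snap-to-centre-line MM rule.
    """
    best, best_d2 = None, np.inf
    for a, b in zip(centreline[:-1], centreline[1:]):
        ab = b - a
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        proj = a + t * ab                      # closest point on this segment
        d2 = np.sum((point - proj) ** 2)
        if d2 < best_d2:
            best, best_d2 = proj, d2
    return best
```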
The improvement in performance from using the MM technique is quite obvious. However, the sharp reduction of the MDSF computational complexity and convergence time with a special blend of the MM technique is non-intuitive. The computational complexity of the non-classic MDS is O(n×L), where L is the total number of iterations required until the stopping rule is satisfied. To control the number of iterations, one can either select it directly in the algorithm or change the stopping criterion; a smaller stopping criterion leads to a higher number of iterations. Another way to reduce the number of iterations, and thus the computational complexity, is to apply the MM technique at the end of each iteration to Xk, as shown in Figure 10. This means that the output of each iteration is corrected by MM; thus MM is used K times, where K is the total number of iterations at each time instant. This tight MM fusion with MDSF slightly increases the computational burden per iteration and has little or no positive effect on the accuracy relative to applying the MM rule only at the end of all K iterations. However, the tight MM fusion greatly reduces the total number of iterations (K), bringing the overall complexity close to O(n). The reason is that each iteration of the MDS algorithm improves the estimate of the previous iteration; the improvement of the state estimate at iteration k+1, for example, can obviously be made more efficient by applying MM at iteration k. In this way the efficiency of the algorithm increases rapidly with each iteration, which means that the computational complexity becomes almost independent of the number of iterations, or, equivalently, that the number of iterations is almost constant. Because the state covariance is not used in the state estimation process, the covariance need not be updated with the MM rule in every iteration and can be updated only once at the end of the last iteration k.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081310-90158-mediumThumb-S0373463311000610_fig10g.jpg?pub-status=live)
Figure 10. The pseudo-code of the MDSF with tight MM algorithm.
The results in Table 2 and Figure 11 demonstrate the iteration improvement and the frequency of iteration numbers for the heavy traffic simulation. Table 2 shows the mean and STD of the number of MDS iterations over the simulation run, before and after MM fusion, for the heavy traffic condition (which has the highest number of nodes) and for ε=0·001. The STD for the MDSF algorithm is very low and the mean number of iterations is improved by a factor of 20. Note the low variance of the number of iterations for MDSF with MM relative to the very high variability when MM information is not fused. This means that the computational complexity of MDSF fused with MM is almost independent of the number of iterations. It should be noted that this performance improvement is for roads with a single lane in each direction. Obviously, for roads with more than a single lane the iteration reduction cannot be as efficient as for the single lane roads used in our road network simulation model.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160626081307-37364-mediumThumb-S0373463311000610_fig11g.jpg?pub-status=live)
Figure 11. (a) The frequency of iteration numbers with similar stopping criteria for MDSF. (b) The frequency of iteration numbers with similar stopping criteria for MDSF with MM.
Table 2. The Mean and STD of the number of MDS iterations.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20151022042325733-0238:S0373463311000610_tab2.gif?pub-status=live)
6. CONCLUSIONS
The emergence of infrastructure such as DSRC allows us to establish ad hoc vehicular networks in which range measurements between the moving vehicles can be quantified and included as part of the integrated positioning solution. This new information source, together with GNSS, forms a platform for robust position estimation that can meet the strict performance requirements of a range of road safety systems and services. Cooperative Positioning (CP) is an important application of Dedicated Short Range Communication (DSRC) that has a crucial role in the reliability of safety-related applications, because it can provide consistent sub-metre positioning accuracy. CP was analysed with the Cramer Rao Lower Bound (CRLB) and shown to be competent and reliable for position information in vehicular environments. CP is a positioning solution at no extra cost and with no major implications for DSRC system developers. CP can potentially lead to reliable and consistent 1-metre level positioning accuracy. There are, of course, challenges with ranging. Received Signal Strength (RSS) delivers very poor range estimates, especially in the rough vehicular environment, leaving Time of Arrival (TOA) as the only viable option for CP ranging. TOA needs synchronization of the DSRC boards, which is an ongoing research topic.
In addition to effective ranging, CP requires an efficient positioning algorithm. A modification of non-classic Multi-Dimensional Scaling (MDS) for positioning vehicles in a cooperative manner was presented. The modifications bring two improvements to the non-classic MDS to suit vehicular networks: filtering capability for moving nodes and fast convergence, both of which are important in the topologically fast-changing vehicular environment.
Map information fusion and filtering capabilities were introduced. A novel approach to state covariance estimation was presented, which makes filtering of the node positions over time possible and leads to better and smoother position solutions.
It was shown via simulation that a special blend of map information and the iterative MDS algorithm leads to low computational complexity. The MDSF algorithm with Map-Matching (MM) has almost the same computational complexity as an Extended Kalman Filter (EKF) but better performance. A Netlogo™ road network simulation platform was used to demonstrate the performance of MDSF. Four different traffic conditions were considered, varying between slow moving (heavy) and fast (sparse). The simulation demonstrated a better performance for MDSF relative to the EKF of about 20 cm on average in all traffic conditions.
Typical GPS outage patterns were studied and applied in the simulation model. The robustness of both the MDSF and EKF algorithms, and more generally the robustness of CP assessed via the CRLB, to GPS outages in the presence of high-rise buildings was then examined. Both MDSF and EKF proved able to bridge the GPS outages.