886 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states in X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. A fuel cell can potentially be classified using either some external measured magnetic field or some internal current; however, the current cannot be measured directly, since the fuel cell is inaccessible in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method cannot remain stable as the number of measurement points becomes large.
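For concreteness, here is a minimal numpy sketch of the Fisher linear discriminant named in the last sentence, with a small ridge added to the within-class scatter to counter the kind of instability the abstract discusses; the function name, the `ridge` parameter, and the data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fisher_discriminant(X1, X2, ridge=1e-6):
    """Two-class Fisher linear discriminant (illustrative sketch).

    X1, X2 : (n_i, d) arrays of samples from each class.
    ridge  : Tikhonov-style term added to the within-class scatter,
             since the abstract notes the plain method becomes unstable
             as the number of measurement points grows (assumption).
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # Regularized solve: (Sw + ridge*I) w = m1 - m2
    w = np.linalg.solve(Sw + ridge * np.eye(Sw.shape[0]), m1 - m2)
    threshold = w @ (m1 + m2) / 2.0
    # Classify a new sample x as class 1 if w @ x > threshold, else class 2.
    return w, threshold
```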
Abstract:
Integer carrier phase ambiguity resolution is the key to rapid and high-precision global navigation satellite system (GNSS) positioning and navigation. As important as integer ambiguity estimation is the validation of the solution, because even when an optimal, or close to optimal, integer ambiguity estimator is used, an unacceptable integer solution can still be obtained. This can happen, for example, when the data are degraded by multipath effects, which affect the real-valued float ambiguity solution and lead to an incorrect integer (fixed) ambiguity solution. It is therefore important to use a statistical test with a sound theoretical and probabilistic basis, which has become possible with the Ratio Test Integer Aperture (RTIA) estimator. The properties and underlying concept of this statistical test are briefly described. An experiment was performed using data with and without multipath. Reflective objects were placed around the receiver antenna to induce multipath. A method based on multiresolution analysis via the wavelet transform is used to reduce multipath in the GPS double-difference (DD) observations. The objective of this paper is thus to compare ambiguity resolution and validation in two situations: data with multipath, and data with multipath reduced by wavelets. Additionally, the accuracy of the estimated coordinates is assessed by comparison with ground-truth coordinates, which were estimated using data without multipath effects. The success and failure probabilities of the RTIA were, in general, consistent and demonstrated the efficiency and reliability of this statistical test. After multipath mitigation, ambiguity resolution becomes more reliable and the coordinates more precise. © Springer-Verlag Berlin Heidelberg 2007.
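As a rough illustration of the ratio-test acceptance step on which the RTIA estimator is built, a short Python sketch follows; the candidate integer vectors are assumed to come from an external integer search (e.g. LAMBDA), and the choice of the aperture parameter `mu`, which the fixed-failure-rate approach determines, is not reproduced here.

```python
import numpy as np

def ratio_test_accept(a_float, Q_a, candidates, mu=0.5):
    """Ratio-test acceptance sketch for integer ambiguity validation.

    a_float    : float ambiguity vector (n,)
    Q_a        : its variance-covariance matrix (n, n)
    candidates : integer candidate vectors, at least the best and
                 second-best from an integer search (assumption)
    mu         : aperture parameter in (0, 1]; illustrative default
    """
    Qinv = np.linalg.inv(Q_a)

    def dist(z):
        d = a_float - np.asarray(z, dtype=float)
        return d @ Qinv @ d

    d_best, d_second = sorted(dist(z) for z in candidates)[:2]
    # Accept the fixed (best) solution only if it is sufficiently
    # better than the runner-up in the metric of Q_a.
    return d_best <= mu * d_second
```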
Abstract:
This Ph.D. thesis focuses on iterative regularization methods for regularizing linear and nonlinear ill-posed problems. Regarding linear problems, three new stopping rules for the Conjugate Gradient method applied to the normal equations are proposed and tested in many numerical simulations, including some tomographic image reconstruction problems. Regarding nonlinear problems, convergence and convergence rate results are provided for a Newton-type method with a modified version of the Landweber iteration as an inner iteration, in a Banach space setting.
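Below is a minimal sketch of the CGLS iteration (Conjugate Gradients applied to the normal equations) that such stopping rules target, stopped here by the classical discrepancy principle rather than by the thesis's three new rules, which are not reproduced; the names and the `tau` parameter are illustrative.

```python
import numpy as np

def cgls(A, b, noise_level, tau=1.01, max_iter=200):
    """CG on the normal equations A.T @ A x = A.T @ b (CGLS sketch),
    stopped by the discrepancy principle ||A x - b|| <= tau * noise_level."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) <= tau * noise_level:   # discrepancy principle
            break
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```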
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than those of the horizontal components, and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is improved even further. Several geoscience applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to the effective recovery of tropospheric slant path delays in order to establish 4-D troposphere tomography.
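The core of the regularized estimation step can be sketched as an ordinary Tikhonov-type adjustment; the construction of the regularization matrix from the geometry-dependent covariance of the position parameters, which is the paper's actual contribution, is only indicated in the docstring and not implemented here, and all names are illustrative.

```python
import numpy as np

def regularized_baseline_solution(A, P, y, R):
    """Regularized least-squares estimate x = (A.T P A + R)^{-1} A.T P y.

    A : design matrix (DD observations vs. position + RZTD parameters)
    P : observation weight matrix
    y : observed-minus-computed vector
    R : regularization matrix; the paper derives its scale adaptively
        from the covariance of the position parameters estimated without
        the RZTD parameter, which is not reproduced in this sketch.
    """
    N = A.T @ P @ A
    return np.linalg.solve(N + R, A.T @ P @ y)
```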
Abstract:
We treat two related moving boundary problems. The first is the ill-posed Stefan problem for melting a superheated solid in one Cartesian coordinate. Mathematically, this is the same problem as that for freezing a supercooled liquid, with applications to crystal growth. By applying a front-fixing technique with finite differences, we reproduce existing numerical results in the literature, concentrating on solutions that break down in finite time. This sort of finite-time blow-up is characterised by the speed of the moving boundary becoming unbounded in the blow-up limit. The second problem, which is an extension of the first, is proposed to simulate aspects of a particular two-phase Stefan problem with surface tension. We study this novel moving boundary problem numerically, and provide results that support the hypothesis that it exhibits a similar type of finite-time blow-up as the more complicated two-phase problem. The results are unusual in the sense that it appears the addition of surface tension transforms a well-posed problem into an ill-posed one.
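For orientation, a dimensionless one-phase Stefan system of the kind referred to above can be written schematically as follows; the exact sign and scaling conventions of the thesis are not reproduced, so this should be read as a generic statement of the problem class together with the front-fixing change of variables mentioned in the abstract.

```latex
% Schematic one-phase Stefan problem on the moving domain 0 < x < s(t)
u_t = u_{xx}, \quad 0 < x < s(t), \qquad
u\bigl(s(t),t\bigr) = 0, \qquad
\dot{s}(t) = -\,u_x\bigl(s(t),t\bigr).
% Front fixing: \xi = x / s(t) maps the moving domain to the fixed
% interval 0 < \xi < 1 before finite-difference discretization.
```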
Abstract:
Carrier phase ambiguity resolution over long baselines is challenging in BDS data processing. This is partially due to the variations of the hardware biases in BDS code signals and their dependence on elevation angle. We present an assessment of satellite-induced code bias variations in BDS triple-frequency signals and ambiguity resolution procedures involving both geometry-free and geometry-based models. First, since the elevation of a GEO satellite remains unchanged, we propose to model the single-differenced fractional cycle bias with widespread ground stations. Second, the effects of code bias variations induced by GEO, IGSO and MEO satellites on ambiguity resolution of extra-wide-lane, wide-lane and narrow-lane combinations are analyzed. Third, together with the IGSO and MEO code bias variation models, the effects of code bias variations on ambiguity resolution are examined using 30 days of data collected in 2014 over baselines ranging from 500 to 2600 km. The results suggest that although the effect of code bias variations on the extra-wide-lane integer solution is almost negligible owing to its long wavelength, the wide-lane integer solutions are rather sensitive to the code bias variations. Wide-lane ambiguity resolution success rates are evidently improved when code bias variations are corrected. However, the improvement in narrow-lane ambiguity resolution is not obvious, since it relies on the geometry-based model and the code bias variations have only an indirect impact on the narrow-lane ambiguity solutions.
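For reference, the standard wide-lane wavelength and Melbourne-Wübbena combination, written here for a generic frequency pair (the specific BDS B1/B2/B3 pairings used in the paper are not spelled out), make explicit why code biases in the pseudoranges propagate into wide-lane ambiguity estimates while the longer extra-wide-lane wavelength dilutes their effect:

```latex
\lambda_{\mathrm{WL}} = \frac{c}{f_i - f_j},
\qquad
\hat{N}_{\mathrm{WL}} =
  \frac{1}{\lambda_{\mathrm{WL}}}
  \left(
    \frac{f_i\,\Phi_i - f_j\,\Phi_j}{f_i - f_j}
    \;-\;
    \frac{f_i\,P_i + f_j\,P_j}{f_i + f_j}
  \right),
```

where $\Phi_i$ and $P_i$ are the carrier-phase and code observations (in metres) on frequency $f_i$; any bias in $P_i$ or $P_j$ enters the estimated ambiguity scaled by $1/\lambda_{\mathrm{WL}}$.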
Abstract:
A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasi-reversibility method of Lattès and Lions, are appraised from this point of view.
In the former, Tikhonov provides a useful means of incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This is effected, and the general "T-method" is the result.
A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.
The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.
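For readers who want the construction being appraised in compact form, the classical Tikhonov regularized solution can be written as below; the generalized constraints that define the "T-method" replace the role played here by the operator L and are not reproduced.

```latex
x_{\alpha} = \arg\min_{x}\ \|Ax - y\|^{2} + \alpha\,\|Lx\|^{2}
           = \bigl(A^{*}A + \alpha\,L^{*}L\bigr)^{-1} A^{*} y,
\qquad \alpha > 0 .
```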
Abstract:
In general, an inverse problem consists of finding an element x in a suitable vector space, given a vector y that measures it in some sense. When the problem is discretized, it usually reduces to solving an equation system f(x) = y, where f : U ⊆ R^m → R^n represents the forward map on a domain U of the appropriate R^m. As a general rule, this leads to an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist of determining unknowns by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of Tikhonov's regularization parameter for a poorly conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. The more specific focus of this dissertation is the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. A difference operator was chosen as the regularizer of the problem. The contribution this dissertation attempts to make consists mainly in the discussion of the numerical simulations carried out, as presented in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject: partly because it is based on numerical experiments with no new mathematical results attached, and partly because the experiments involve a single operator. On the other hand, the simulations yielded some observations that seemed interesting in light of the literature of the area. In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve, and about the tendency of the optimal parameters observed with the L-curve method to cluster in a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
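A compact sketch of how the GCV function and the L-curve coordinates are evaluated for Tikhonov regularization is given below; it uses the SVD with the identity as regularizer for brevity, whereas the dissertation's difference-operator regularizer would call for the GSVD, and all function and variable names are illustrative.

```python
import numpy as np

def gcv_and_lcurve(A, y, alphas):
    """Evaluate GCV values and L-curve points over a grid of Tikhonov
    parameters, for the regularizer L = I (identity); with a difference
    operator, as in the dissertation, the GSVD would be needed instead."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    m = A.shape[0]
    results = []
    for a in alphas:
        f = s**2 / (s**2 + a)           # filter factors (a plays the role of lambda^2 in some texts)
        x = Vt.T @ (f * beta / s)       # Tikhonov solution for this parameter
        res = np.linalg.norm(A @ x - y)
        gcv = res**2 / (m - f.sum())**2
        results.append((a, gcv, res, np.linalg.norm(x)))
    # Pick the alpha minimizing the GCV value, or locate the corner of the
    # L-curve (log residual norm vs. log solution norm).
    return results
```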
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are known, these inverse problems are concerned with recovering the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of the Tikhonov regularization method, the so-called Hierarchical Reconstruction (HR) method. The first introduction of the HR method can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum from the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
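A schematic of the hierarchical iteration described above, under the assumptions of a dyadically refined scale schedule and a user-supplied solver `solve_subproblem` for each Tikhonov-type subproblem; neither assumption is taken from the paper.

```python
import numpy as np

def hierarchical_reconstruction(A, y, solve_subproblem, lam0=1.0, levels=6):
    """Hierarchical Reconstruction (HR) sketch.

    At level k the subproblem u_k = argmin_u ||A u - r_{k-1}||^2 + lam_k J(u)
    is solved for the current residual r_{k-1}; the scale lam_k is refined
    (here halved) at every level, and the hierarchical sum of the u_k
    approximates the unknown. `solve_subproblem(A, r, lam)` stands for the
    solver of one regularized subproblem (assumption of this sketch).
    """
    x = np.zeros(A.shape[1])
    r = y.copy()
    lam = lam0
    for _ in range(levels):
        u = solve_subproblem(A, r, lam)   # one regularized subproblem
        x += u                            # accumulate the hierarchical sum
        r -= A @ u                        # residual passed to the next, finer scale
        lam *= 0.5
    return x
```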
Abstract:
Eye-tracking was used to examine how younger and older adults use syntactic and semantic information to disambiguate noun/verb (NV) homographs (e.g., park). We find that young adults exhibit inflated first fixations to NV-homographs when only syntactic cues are available for disambiguation (i.e., in syntactic prose). This effect is eliminated with the addition of disambiguating semantic information. Older adults (60+) as a group fail to show the first fixation effect in syntactic prose; they instead reread NV homographs longer. This pattern mirrors that in prior event-related potential work (Lee & Federmeier, 2009, 2011), which reported a sustained frontal negativity to NV-homographs in syntactic prose for young adults, which was eliminated by semantic constraints. The frontal negativity was not observed in older adults as a group, although older adults with high verbal fluency showed the young-like pattern. Analyses of individual differences in eye-tracking patterns revealed a similar effect of verbal fluency in both young and older adults: high verbal fluency groups of both ages show larger first fixation effects, while low verbal fluency groups show larger downstream costs (rereading and/or refixating NV homographs). Jointly, the eye-tracking and ERP data suggest that effortful meaning selection recruits frontal brain areas important for suppressing contextually inappropriate meanings, which also slows eye movements. Efficacy of fronto-temporal circuitry, as captured by verbal fluency, predicts the success of engaging these mechanisms in both young and older adults. Failure to recruit these processes requires compensatory rereading or leads to comprehension failures (Lee & Federmeier, in press).
Abstract:
A comparison is made of the performance of a weather Doppler radar with a staggered pulse repetition time and a radar with a random (but known) phase. As a standard for this comparison, the specifications of the forthcoming next generation weather radar (NEXRAD) are used. A statistical analysis of the spectral moment estimates for the staggered scheme is developed, and a theoretical expression for the signal-to-noise ratio due to recohering-filtering-recohering for the random phase radar is obtained. Algorithms for assignment of correct ranges to pertinent spectral moments for both techniques are presented.
Abstract:
The smooth DMS-FEM, recently proposed by the authors, is extended and applied to the geometrically nonlinear and ill-posed problem of a deformed and wrinkled/slack membrane. A key feature of this work is that the three-dimensional nonlinear elasticity equations corresponding to linear momentum balance, without any dimensional reduction and the associated approximations, directly serve as the membrane governing equations. Domain discretization is performed with triangular prism elements, and the higher-order (C1 or more) interelement continuity of the shape functions ensures that errors arising from possible jumps in the first derivatives of conventional C0 shape functions do not propagate as the ill-conditioned tangent stiffness matrices are iteratively inverted. The present scheme employs no regularization and exhibits little sensitivity to h-refinement. Although the numerically computed deformed membrane profiles do show some sensitivity to initial imperfections (nonplanarity) in the membrane profile needed to initiate transverse deformations, the overall patterns of the wrinkles and the deformed shapes appear to be less so. Finally, the deformed profiles, computed through the DMS-FEM-based weak formulation, are compared with those obtained through an experiment on an ultrathin Kapton membrane, wherein wrinkles form because of the applied boundary displacement conditions. Comparisons with a reported experiment on a rectangular membrane are also provided. These exercises lend credence to the feasibility of the DMS-FEM-based numerical route to computing post-wrinkled membrane shapes. Copyright (c) 2012 John Wiley & Sons, Ltd.
Abstract:
Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise, and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed. The iterative nature of the maximum likelihood technique, however, precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The easy availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. http://dx.doi.org/10.1063/1.4754604
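As one concrete instance of the iterative maximum-likelihood reconstruction whose per-iteration cost a CUDA engine of this kind targets, a CPU-side Richardson-Lucy sketch is given below; the paper's actual reconstruction model and its GPU kernels are not reproduced, and the choice of Richardson-Lucy here is an assumption made for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, iterations=50):
    """Richardson-Lucy maximum-likelihood deconvolution (CPU sketch).

    A common ML reconstruction for fluorescence data under Poisson noise;
    each iteration costs two large convolutions, which is the kind of
    per-iteration work a GPU implementation accelerates.
    """
    estimate = np.full(data.shape, data.mean(), dtype=float)
    psf_flipped = np.flip(psf)                      # adjoint of the blur
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```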