911 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance:

100.00%

Publisher:

Abstract:

Fredholm integral equations of the first kind are the mathematical model common to several electromagnetic, optical and acoustical inverse scattering problems. In most of these problems the solution must be positive in order to be physically plausible. We consider ill-posed deconvolution problems and investigate several linear regularization algorithms which provide positive approximate solutions, at least in the absence of errors in the data.
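
As a rough illustration of the kind of linear regularization referred to here (not the specific algorithms studied in the paper), the sketch below applies Tikhonov regularization to a discretized one-dimensional deconvolution problem; the kernel, grid size and regularization parameter are arbitrary assumptions.

```python
import numpy as np

# Hypothetical discretized 1-D deconvolution g = K f (a first-kind Fredholm
# equation) with a Gaussian blurring kernel; all sizes/values are assumptions.
n = 200
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.02 ** 2))
K /= K.sum(axis=1, keepdims=True)                # row-normalized convolution matrix

f_true = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # a positive test object
g = K @ f_true                                        # error-free data

# Tikhonov regularization: minimize ||K f - g||^2 + lam * ||f||^2
# (a linear regularization; lam is an arbitrary choice).
lam = 1e-8
f_reg = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

# With error-free data the reconstruction stays close to the positive object;
# with noisy data, positivity is no longer guaranteed by a linear method.
print("min of reconstruction:", f_reg.min())
print("max abs error:", np.abs(f_reg - f_true).max())
```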

Relevance:

100.00%

Publisher:

Abstract:

The construction industry is notorious for generating disputes, both in Ireland and abroad. This paper examines mediation in the Irish construction industry as a means of conflict and dispute resolution. It aims to identify success factors for the competencies and processes required of mediators and the other parties operating in the construction industry. The methodology comprises a thorough review of the literature, followed by detailed interviews with industry experts to elicit the core competencies required; qualitative analysis using mind-mapping software supported the interpretation. The findings suggest that facilitative mediation is best suited to the Irish construction industry. Thirteen success factors were identified as key skills for a mediator and seventeen for a successful mediation process. The mediator skills span behavioural, technical and intellectual skills, while the process factors can be split into actions of the mediator and of the other parties to the dispute. The results are similar to those identified in other countries and provide a useful reference point for the development of the global construction industry. By following the findings of this report, mediators and parties in dispute can improve their processes and achieve more successful mediation outcomes as a means of resolving conflicts and disputes.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media 8 (1998), 315-414. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
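
For orientation, the following sketch mimics the classical least squares method mentioned in the abstract: Rayleigh expansion coefficients are fitted so that the total field vanishes on a sound-soft periodic surface, and the condition number of the resulting linear system is printed as the approximation space grows. The wavenumber, period, surface profile and truncation orders are illustrative assumptions, not values from the paper.

```python
import numpy as np

k = 10.0                          # wavenumber (assumed)
L = 2 * np.pi                     # surface period (assumed)
theta = np.deg2rad(30.0)          # angle of incidence (assumed)
alpha0, beta0 = k * np.sin(theta), k * np.cos(theta)
profile = lambda x: 0.1 * np.cos(x)   # example periodic surface y = f(x)

for N in (4, 8, 16, 24):
    n = np.arange(-N, N + 1)
    alpha = alpha0 + 2 * np.pi * n / L
    beta = np.sqrt((k ** 2 - alpha ** 2).astype(complex))      # Im(beta) >= 0

    x = np.linspace(0.0, L, 8 * len(n), endpoint=False)        # points on the surface
    y = profile(x)
    A = np.exp(1j * (np.outer(x, alpha) + np.outer(y, beta)))  # upgoing Rayleigh modes
    b = -np.exp(1j * (alpha0 * x - beta0 * y))                  # minus the incident wave

    # least squares fit of the Rayleigh coefficients to the boundary condition
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # the condition number typically grows with N, reflecting the ill-posedness
    print(N, "cond(A) =", np.linalg.cond(A))
```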

Relevance:

100.00%

Publisher:

Abstract:

In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a derivative-free line search, and to obtain adequate coverage of points on the FPS boundary it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple two-parameter sine and exponential model and on an agro-forestry simulation model.
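
To make the set-membership notions concrete, here is a minimal sketch of a feasible parameter set and a crude box pavement for a hypothetical two-parameter sine-plus-exponential model; the model, error bound and grid are assumptions for illustration, not the paper's actual SVM procedure.

```python
import numpy as np

# Hypothetical two-parameter model y = sin(a * t) + exp(-b * t), observed with
# bounded noise |e| <= eps (the set-membership assumption).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 20)
a_true, b_true, eps = 1.5, 0.8, 0.05
y = np.sin(a_true * t) + np.exp(-b_true * t) + rng.uniform(-eps, eps, t.size)

def feasible(a, b):
    """A parameter pair belongs to the FPS iff every residual is within eps."""
    return np.all(np.abs(y - (np.sin(a * t) + np.exp(-b * t))) <= eps)

# Crude box pavement: test a grid of boxes (here just their centers) and keep
# the feasible ones; cells where feasibility changes hint at the FPS boundary,
# which the SVM regression step would then approximate.
A, B = np.meshgrid(np.linspace(1.0, 2.0, 80), np.linspace(0.4, 1.2, 80))
mask = np.array([[feasible(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)])
print("feasible cells:", mask.sum(), "of", mask.size)
```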

Relevance:

100.00%

Publisher:

Abstract:

Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and quality control for manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities of data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we restrict our attention to particular linear systems, in which the instability can be analyzed explicitly. We investigate three-dimensional variational assimilation and four-dimensional variational assimilation. A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and will occur, including an example from dynamic magnetic tomography.
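
As a toy illustration of this instability (my own construction, not an example from the paper), the sketch below applies a single linear 3D-Var analysis step, x_a = x_b + BH^T(HBH^T + R)^{-1}(y - Hx_b), with a strongly smoothing observation operator H; as the assumed observation error covariance R shrinks relative to the actual data noise, the near-inversion of H amplifies that noise in the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = U @ np.diag(0.5 ** np.arange(n)) @ V.T      # smoothing operator with rapidly decaying spectrum

x_true = np.sin(np.linspace(0, 2 * np.pi, n))
x_b = np.zeros(n)                               # background state (assumed)
B = np.eye(n)                                   # background error covariance (assumed)
y = H @ x_true + 1e-3 * rng.standard_normal(n)  # observations with a fixed noise level

for r in (1e-2, 1e-4, 1e-8):                    # assumed observation error variance in the scheme
    R = r * np.eye(n)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # 3D-Var gain, single analysis step
    x_a = x_b + K @ (y - H @ x_b)
    # the smaller R is relative to the actual data noise, the more aggressively H is
    # (near-)inverted, so the analysis error need not shrink and can grow strongly
    print(r, np.linalg.norm(x_a - x_true))
```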

Relevance:

100.00%

Publisher:

Abstract:

Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of the "best" version of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature in the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information on the occurrence of the common aerosol components in the retrievals. The third experiment assessed the impact of using a common nadir cloud mask for the AATSR and MERIS algorithms, in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps, and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed an assessment of the sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some improvement, in part significant. In particular, using common aerosol components, and partly also the a priori aerosol-type climatology, is beneficial. On the other hand, the use of an AATSR-based common cloud mask brought a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. These observations are largely consistent across all five analyses (global land, global coastal, three regional), which is plausible since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine mode, sea salt and dust).

Relevance:

100.00%

Publisher:

Abstract:

Wavelets are being extensively used in geodetic applications. In this paper, multi-resolution analysis (MRA) using wavelets is applied to pseudorange and carrier phase GPS double differences (DDs) in order to reduce multipath effects. Wavelets have already been applied to GPS carrier phase DDs, but some questions remain: how good can the results be, and are all multipath effects reduced? These questions are discussed in this paper. The wavelet transform is used to decompose the DD signals, splitting them into lower resolution components. After the decomposition, wavelet shrinkage is performed by thresholding to eliminate the components related to multipath effects, and the DD observation is then reconstructed. This new DD signal is used to perform the baseline processing. The daily multipath repeatability was verified. With the proposed approach, the reliability of the ambiguity resolution and the accuracy of the results improved compared with the standard procedure. Furthermore, the method proved computationally efficient: at a practical level, no difference in processing time was noticed between runs with and without the proposed method. However, only the high-frequency multipath was eliminated.
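
The decomposition-shrinkage-reconstruction cycle described above can be sketched with the PyWavelets library as follows; the synthetic double-difference series, wavelet family, decomposition level and threshold are arbitrary choices, not those of the paper.

```python
import numpy as np
import pywt

# Toy double-difference residual series (made up): a slow, geometry-like trend
# plus an oscillatory multipath-like disturbance plus measurement noise.
rng = np.random.default_rng(2)
t = np.arange(4096.0)                              # epochs at 1 Hz
dd = 0.002 * np.sin(2 * np.pi * t / 2000.0) \
     + 0.005 * np.sin(2 * np.pi * t / 15.0) \
     + 0.002 * rng.standard_normal(t.size)

# 1) multi-resolution decomposition into approximation + detail coefficients
coeffs = pywt.wavedec(dd, "db4", level=6)

# 2) wavelet shrinkage: soft-threshold the detail coefficients, which carry
#    the high-frequency (noise- and multipath-like) part of the signal
thr = 0.01
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

# 3) reconstruct the cleaned double difference for the baseline processing
dd_clean = pywt.waverec(coeffs, "db4")[: dd.size]
print("std before/after shrinkage:", dd.std(), dd_clean.std())
```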

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Nursing (professional master's degree) - FMB

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Electrical impedance tomography (EIT) is an imaging technique that attempts to reconstruct the impedance distribution inside an object from the impedances measured between electrodes placed on the object's surface. The EIT reconstruction problem can be approached as a nonlinear, nonconvex optimization problem in which one tries to maximize the match between a simulated impedance problem and the observed data. This optimization problem is often ill-posed and not well suited to methods that evaluate derivatives of the objective function. It may be approached by simulated annealing (SA), but at a large computational cost, because evaluating the objective function requires a full simulation of the impedance problem at each iteration. A variation of SA is proposed in which the objective function is evaluated only partially, while ensuring bounds on the behavior of the modified algorithm.
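
A minimal sketch of the general idea of evaluating the objective only partially inside a simulated annealing loop; the objective here is a plain sum of residual terms standing in for the expensive EIT forward simulations, and the schedule, rescaling and acceptance rule are generic choices, not the specific variation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in objective: a sum of m independent "measurement" terms.  In EIT each
# term would require (part of) a forward impedance simulation.
m, n = 64, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

def partial_cost(x, idx):
    """Evaluate only the terms in idx instead of the full objective."""
    r = A[idx] @ x - b[idx]
    return np.dot(r, r) * (m / len(idx))        # rescale to estimate the full cost

x = rng.standard_normal(n)
T = 1.0
for it in range(20000):
    idx = rng.choice(m, size=8, replace=False)  # small random subset of terms
    x_new = x + 0.05 * rng.standard_normal(n)   # random perturbation of the state
    delta = partial_cost(x_new, idx) - partial_cost(x, idx)
    if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis acceptance
        x = x_new
    T *= 0.9995                                 # slow geometric cooling

print("final (full) cost:", np.linalg.norm(A @ x - b) ** 2)
```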

Relevance:

100.00%

Publisher:

Abstract:

[EN] The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived; this equation relates the optical flow to the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions; in the presence of large displacements, it fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. To tackle this nonlinear formulation, we linearize it and solve the method iteratively in each scale. Here there are two common approaches: one computes the motion increment in the iterations, while the one we follow computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
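
For reference, a compact single-scale sketch of the basic Horn and Schunck iteration (no pyramid; the smoothness weight and iteration count are arbitrary), using the standard Jacobi-type update of the Euler-Lagrange equations:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Single-scale Horn-Schunck: minimize (Ix*u + Iy*v + It)^2 plus
    alpha^2 times the smoothness of the flow field (u, v)."""
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)     # spatial derivatives
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)     # temporal derivative

    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0   # neighbourhood average
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        # closed-form Jacobi update of the Euler-Lagrange equations
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```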

Relevance:

100.00%

Publisher:

Abstract:

[EN] We analyze the discontinuity preserving problem in TV-L1 optical flow methods. These methods typically create rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. We therefore study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes to solve the problem emerge. One of these consists in separating the pure TV process from the mitigating strategy; this has been used in previous work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison of the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions, which can be achieved by using strong regularizations and high penalizations at image contours.
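
For reference, a common way of writing the idea of inhibiting diffusion at strong image gradients is a weighted TV term; the particular decreasing function g below is an illustrative choice, not necessarily the one analyzed in the paper.

```latex
% Illustrative weighted TV-L1 optical flow energy: the decreasing function g(.)
% lowers the smoothing (diffusion) where the image gradient is large.
E(u_1,u_2) \;=\; \lambda \int_\Omega \bigl| I_1(\mathbf{x}+\mathbf{u}(\mathbf{x})) - I_0(\mathbf{x}) \bigr|\, d\mathbf{x}
\;+\; \int_\Omega g\bigl(|\nabla I_0|\bigr)\,\bigl(|\nabla u_1| + |\nabla u_2|\bigr)\, d\mathbf{x},
\qquad g(s) = e^{-\beta\, s^{\kappa}}.
```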

Relevance:

100.00%

Publisher:

Abstract:

Electrical impedance tomography (EIT) is intended to serve as a low-cost and side-effect-free tomographic method in medical diagnostics, e.g. in mammography. With EIT, cancerous tissue can be distinguished from healthy tissue, since it exhibits a significantly increased conductivity, so EIT can serve as a complement to the classical diagnostic procedures. For example, in young women with denser fatty tissue the identification of a breast carcinoma by X-ray tomography is not always possible. The goal of this work was to develop a prototype for impedance tomography and to test possible applications. The tomograph was built in collaboration with Dr. K. H. Georgi. It allows alternating currents to be injected via low-impedance electrodes on the body surface, whose potentials can be prescribed programmably; additional high-impedance electrodes are used for potential measurements. To bridge the skin resistance, alternating current frequencies of 20-100 kHz are used. By measuring current and potential on different electrodes, the problem of the only imprecisely known skin resistance can be circumvented. In principle, the Mainz EIT system can perform 100 measurements per second. On the basis of data acquired with the Mainz EIT, different reconstruction algorithms were to be tested and further developed. In the past, various reconstruction algorithms have been considered for the mathematically ill-posed EIT problem. They rely essentially on two strategies: linearization and iterative solution of the problem, and region detection methods. I modified the iterative methods so that conductivity increases and conductivity decreases can be treated on an equal footing. For the modified algorithm, two different reconstruction algorithms were implemented and tested with synthetic data: on the one hand, reconstruction via the approximate inverse, and on the other, a reconstruction based on a discretization. Specifically for the reconstruction by discretization, a method was developed that allows additional information to be taken into account, which improves the reconstruction; the region detection algorithm can supply this additional information. In this work, a more recent region detection method was modified so that reconstruction also became possible for separate current and voltage electrodes. With difference data, excellent reconstructions can be achieved. For medical applications, however, absolute measurements are necessary, i.e. without a reference (empty) measurement. The expected effect of a conductivity inhomogeneity is very small and, being the difference of two large numbers, very difficult to determine. The developed algorithms also cope well with absolute data.
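
As a very rough sketch of the linearization strategy mentioned above, a single regularized (Tikhonov/Gauss-Newton type) update step could look as follows; the Jacobian and data are random placeholders for a real EIT forward model, and this is not the specific algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_pix = 208, 400                      # boundary measurements vs. conductivity pixels (assumed)
J = rng.standard_normal((n_meas, n_pix))      # placeholder sensitivity (Jacobian) matrix
sigma0 = np.ones(n_pix)                       # background conductivity
v_meas = rng.standard_normal(n_meas)          # placeholder measured boundary voltages
v_sim = rng.standard_normal(n_meas)           # placeholder simulated voltages at sigma0

lam = 1e-2                                    # regularization parameter (assumed)
# one linearized, regularized update; the signed correction allows both
# conductivity increases and decreases, and in practice this step is iterated
delta = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ (v_meas - v_sim))
sigma1 = sigma0 + delta
```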

Relevance:

100.00%

Publisher:

Abstract:

We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered by this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model decays polynomially. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both a theoretical and a simulation-based point of view. A major consequence of our work is that detecting qualitative features of a density in a deconvolution problem is feasible, even though the minimax rates for pointwise estimation are very slow.