938 results for inverse probability weights
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in inverse solutions of ECG: the L-curve, generalized cross-validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions at low noise levels; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low to medium noise situations.
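As a rough illustration of the zero-order Tikhonov scheme and the GCV choice of regularization parameter described above, here is a minimal sketch (not the paper's code; the transfer matrix `A` and body-surface data `b` below are synthetic stand-ins for the torso-model quantities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the forward transfer matrix (body surface <- epicardium)
# and noisy body-surface measurements; in the paper A comes from the torso model.
A = rng.standard_normal((60, 40)) / np.arange(1, 41)   # mildly ill-conditioned
b = A @ rng.standard_normal(40) + 0.01 * rng.standard_normal(60)

def tikhonov_svd(A, b, lam):
    """Zero-order Tikhonov solution min ||Ax-b||^2 + lam^2 ||x||^2, via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
    return Vt.T @ (f * (U.T @ b) / s)

def gcv_score(A, b, lam):
    """Generalized cross-validation function G(lam), to be minimised over lam."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    f = s**2 / (s**2 + lam**2)
    resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return resid2 / (len(b) - np.sum(f)) ** 2

lams = np.logspace(-6, 1, 200)
lam_gcv = min(lams, key=lambda l: gcv_score(A, b, l))   # grid search on G(lam)
x_hat = tikhonov_svd(A, b, lam_gcv)
print(f"GCV-selected lambda: {lam_gcv:.2e}")
```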
Abstract:
The first part of this thesis deals with the interaction between a detention basin and the underlying aquifer: the construction of a flood-control reservoir on the Baganza stream, upstream of the city of Parma, is currently at the design stage. The aim of the intervention is to reduce the risk of flooding by temporarily storing, in an artificial reservoir, the most dangerous part of the flood volume, which would subsequently be released at discharges that can easily be conveyed through the urban reach of the stream. The aquifer was first investigated and monitored, allowing its lithostratigraphic characterization. The stratigraphy can be summarized as a sequence of gravelly-sandy layers interspersed with clay lenses of variable thickness and continuity, distinguishing two different aquifers (one phreatic and one confined). The present study considers only the shallow aquifer, which was modelled numerically by finite differences using the software MODFLOW_2005. The aim of this work is to represent the aquifer system both in present conditions (in the absence of any structure) and in design conditions. Calibration was carried out under steady-state conditions using the piezometric levels collected at the observation points during spring 2013. The hydraulic conductivity values were estimated by means of a Bayesian geostatistical approach. The code used for the estimation is bgaPEST, free software for the solution of highly parameterized inverse problems, developed on the basis of the PEST software protocols. The inverse methodology estimates the hydraulic conductivity field by combining observations of the system state (piezometric levels in this case) with prior information on the structure of the unknown parameters. The inverse procedure requires the computation of the sensitivity of each observation to each estimated parameter; this was evaluated efficiently through an adjoint-state formulation of the forward code, MODFLOW_2005_Adjoint. The results of the methodology are consistent with the alluvial nature of the investigated aquifer and with the information collected at the observation points. The calibrated model can therefore be used to support the design and management of the flood-control structure. The second part of this thesis deals with the analysis of the loads induced by preferential flow paths caused by piping phenomena within levee embankments. Such preferential paths can be due to the presence of burrows dug by wild animals. This study was inspired by the collapse of the levee of the Secchia River (Modena), which occurred in January 2014 during a flood event in which the water level never reached the levee crest. The scientific commission, whose final report provides the data used for this study, attributed the collapse, most probably, to the presence of animal burrows. In order to analyse the behaviour of the embankment both in intact conditions and in conditions modified by the existence of a tunnel crossing the levee body, a 3D numerical model of the levee was built with the well-known software Femwater and Feflow.
The models describe seepage within the embankment considering the soil in both its saturated and unsaturated portions, using the finite-element technique. The burrow was represented by elements with high permeability and porosity, whose values were varied in order to assess their influence on flows and water contents. To assess whether the analysed situations lead to the onset of erosion, safety factor values were computed. The safety factor was evaluated in different ways, including the approach recently proposed by Richards and Reddy (2014), which refers to the critical kinetic energy criterion. Finally, the model of Bonelli (2007) was used to compute the erosion time and the time remaining to the collapse of the embankment.
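For orientation, the following toy sketch shows the kind of finite-difference, steady-state head computation that MODFLOW_2005 performs; the grid, the homogeneous conductivity and the boundary heads here are illustrative assumptions only, not values from the Baganza model:

```python
import numpy as np

# Minimal sketch: steady-state 2D confined flow on a regular grid, i.e. the
# finite-difference water balance MODFLOW_2005 solves. Assumptions: homogeneous
# conductivity, fixed-head west/east boundaries, no-flow north/south.
nx, ny, n_iter = 50, 30, 5000
h = np.zeros((ny, nx))
h[:, 0], h[:, -1] = 100.0, 95.0        # fixed-head boundary conditions [m]

for _ in range(n_iter):                 # Jacobi iteration on the Laplace equation
    h[1:-1, 1:-1] = 0.25 * (h[1:-1, :-2] + h[1:-1, 2:]
                            + h[:-2, 1:-1] + h[2:, 1:-1])
    h[0, 1:-1], h[-1, 1:-1] = h[1, 1:-1], h[-2, 1:-1]   # no-flow: mirror rows

print(h[ny // 2, ::10])                 # head profile along the middle row
```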
Abstract:
Aquifers are a vital water resource whose quality characteristics must be safeguarded or, if damaged, restored. The extent and complexity of aquifer contamination are related to the characteristics of the porous medium, the influence of boundary conditions, and the biological, chemical and physical processes involved. Since the 1990s, research efforts have grown rapidly in the search for efficient ways to estimate aquifer hydraulic parameters and, from these, to recover the position of a contaminant source and its release history. To simplify and understand the influence of these various factors on aquifer phenomena, researchers commonly use numerical and controlled experiments. This work presents some of these methods, applying and comparing them on data collected during laboratory, field and numerical tests. The work is structured in four parts, which present the results and conclusions for each specific objective.
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce two novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we apply two novel techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
Abstract:
Most conventional techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three related techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
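One standard way to handle the periodic target variables these papers address is to build the conditional density from circular (von Mises) kernels rather than Gaussians. The sketch below is a generic illustration of that idea, not the papers' exact parameterisation; it evaluates such a mixture for a bimodal wind-direction density:

```python
import numpy as np
from scipy.special import i0   # modified Bessel function of order zero

def von_mises_mixture_pdf(theta, pi, mu, kappa):
    """Density of a mixture of von Mises components on the circle.
    theta: angles [rad]; pi, mu, kappa: per-component weight, mean, concentration."""
    theta = np.asarray(theta)[:, None]
    comp = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))
    return comp @ pi

# Toy bimodal wind-direction density with modes ~180 degrees apart.
angles = np.linspace(-np.pi, np.pi, 8)
pdf = von_mises_mixture_pdf(angles, pi=np.array([0.6, 0.4]),
                            mu=np.array([0.0, np.pi]), kappa=np.array([4.0, 4.0]))
print(pdf)
```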
Abstract:
Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
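A minimal numerical sketch of the Mixture Density Network idea, assuming a one-hidden-layer network, a one-dimensional target and three Gaussian kernels (the toy data and random initial weights are my own; training is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def mdn_forward(x, W1, b1, W2, b2, M):
    """One-hidden-layer MLP whose outputs parameterise an M-kernel Gaussian mixture."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2                               # shape (N, 3M)
    z_pi, mu, z_s = np.split(z, 3, axis=1)
    pi = np.exp(z_pi - z_pi.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)           # mixing coefficients via softmax
    sigma = np.exp(z_s)                           # positive widths via exp
    return pi, mu, sigma

def mdn_nll(t, pi, mu, sigma):
    """Negative log-likelihood of targets t under the conditional mixture."""
    p = pi * np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(p.sum(axis=1) + 1e-12).mean()

# Toy use: N inputs, H hidden units, M kernels; t is a noisy function of x.
N, H, M = 200, 20, 3
x = rng.uniform(-1, 1, (N, 1))
t = x + 0.3 * np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal((N, 1))
W1, b1 = 0.5 * rng.standard_normal((1, H)), np.zeros(H)
W2, b2 = 0.5 * rng.standard_normal((H, 3 * M)), np.zeros(3 * M)
pi, mu, sigma = mdn_forward(x, W1, b1, W2, b2, M)
print("initial NLL:", mdn_nll(t, pi, mu, sigma))
```

Gradient-based training of (W1, b1, W2, b2) against `mdn_nll` then fits the full conditional distribution rather than just its mean.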
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
Abstract:
Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model whose parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the `early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
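The evidence framework mentioned above replaces early stopping by re-estimating the weight-decay coefficient from the data. A generic sketch of the standard MacKay-style update (an illustration of the idea, not the thesis' exact procedure, and with made-up numbers) is:

```python
import numpy as np

def update_alpha(weights, hessian_eigs, alpha):
    """One MacKay-style re-estimation of the weight-decay coefficient alpha.
    gamma counts the effective number of well-determined parameters;
    hessian_eigs are eigenvalues of the data-term Hessian at the current minimum."""
    gamma = np.sum(hessian_eigs / (hessian_eigs + alpha))
    return gamma / np.sum(weights ** 2)

# Illustrative numbers only; in practice one retrains between updates.
w, eigs, alpha = np.array([0.8, -0.3, 0.1]), np.array([50.0, 5.0, 0.2]), 1.0
for _ in range(5):
    alpha = update_alpha(w, eigs, alpha)
print(f"converged alpha ~ {alpha:.3f}")
```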
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of radar backscatter generated by small ripples on the ocean surface induced by instantaneous local winds. Operational methods that extract wind vectors from satellite scatterometer data are based on the local inversion of a forward model, mapping scatterometer observations to wind vectors, by the minimisation of a cost function in the scatterometer measurement space.

This report uses mixture density networks, a principled method for modelling conditional probability density functions, to model the joint probability distribution of the wind vectors given the satellite scatterometer measurements in a single cell (the `inverse' problem). The complexity of the mapping and the structure of the conditional probability density function are investigated by varying the number of units in the hidden layer of the multi-layer perceptron and the number of kernels in the Gaussian mixture model of the mixture density network respectively. The optimal model for networks trained per trace has twenty hidden units and four kernels. Further investigation shows that models trained with incidence angle as an input give results comparable to those of models trained by trace. A hybrid mixture density network that incorporates geophysical knowledge of the problem confirms other results that the conditional probability distribution is dominantly bimodal.

The wind retrieval results improve on previous work at Aston, but do not match other neural network techniques that use spatial information in the inputs, which is to be expected given the ambiguity of the inverse problem. Current work uses the local inverse model for autonomous ambiguity removal in a principled Bayesian framework. Future directions in which these models may be improved are given.
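A toy sketch of the operational "local inversion" described in the first paragraph: minimise a squared-misfit cost in measurement space over the wind vector. The forward model below is a made-up stand-in (operationally this would be a C-band model function such as CMOD4):

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(wind):
    """Hypothetical stand-in for the geophysical model function mapping a wind
    vector (u, v) to a sigma-0 triplet for one cell; not a real C-band model."""
    u, v = wind
    speed, direction = np.hypot(u, v), np.arctan2(v, u)
    return speed ** 0.6 * np.array([1.0 + 0.3 * np.cos(2 * direction),
                                    1.0 + 0.3 * np.cos(2 * (direction - 0.7)),
                                    1.0 + 0.3 * np.cos(2 * (direction + 0.7))])

def retrieve(sigma0_obs, wind0):
    """Local inversion: minimise the squared misfit in measurement space."""
    cost = lambda w: np.sum((forward_model(w) - sigma0_obs) ** 2)
    return minimize(cost, wind0, method="Nelder-Mead").x

sigma0 = forward_model(np.array([6.0, 3.0]))          # synthetic observation
print(retrieve(sigma0, wind0=np.array([5.0, 0.0])))   # one of the ambiguous solutions
```

The cos(2·direction) terms make the cost nearly symmetric under a 180-degree flip of the wind direction, which reproduces the ambiguity the report discusses.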
Abstract:
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in inverse plant models. In order to obtain a complete description of the inverse model, a more general multicomponent distribution must be modeled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, modeled here by a mixture density network. Importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
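A minimal sketch of the sampling idea: draw candidate control actions from the mixture model of the inverse plant and score them through the plant. The toy plant y = u^2 has a multi-valued inverse, so a single Gaussian would average the two branches where a two-kernel mixture succeeds (all names and values here are illustrative, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixture(pi, mu, sigma, n):
    """Draw n samples from a 1-D Gaussian mixture (the MDN's conditional model
    of the inverse plant)."""
    k = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(mu[k], sigma[k])

def sampling_search(pi, mu, sigma, plant, target, n=500):
    """Sample candidate controls from the mixture and keep the one whose
    simulated plant output is closest to the target (a sketch of the idea)."""
    u = sample_mixture(pi, mu, sigma, n)
    return u[np.argmin((plant(u) - target) ** 2)]

# Toy multi-valued inverse: plant y = u^2 has two valid controls for y = 4.
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([0.3, 0.3])
print(sampling_search(pi, mu, sigma, plant=lambda u: u ** 2, target=4.0))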
Abstract:
This paper extends the minimax disparity approach for determining ordered weighted averaging (OWA) operator weights based on linear programming. It introduces the minimax disparity between any distinct pair of weights and uses the duality of linear programming to prove the feasibility of the extended OWA operator weights model. The paper finishes with an open problem. © 2006 Elsevier Ltd. All rights reserved.
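A sketch of the extended minimax-disparity LP as described in the abstract, with the disparity bounded over all distinct pairs of weights and the usual orness and normalisation constraints (any formulation detail beyond the abstract is my assumption):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_disparity_owa(n, alpha):
    """OWA weights minimising the maximum disparity |w_i - w_j| over all
    distinct pairs, subject to orness(W) = alpha.
    Variables: [w_1, ..., w_n, delta]; objective: minimise delta."""
    c = np.zeros(n + 1); c[-1] = 1.0
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(i + 1, n):
            for sgn in (+1.0, -1.0):     # w_i - w_j <= delta and w_j - w_i <= delta
                row = np.zeros(n + 1)
                row[i], row[j], row[-1] = sgn, -sgn, -1.0
                A_ub.append(row); b_ub.append(0.0)
    A_eq = np.array([np.r_[np.ones(n), 0.0],                         # sum w_i = 1
                     np.r_[(n - 1 - np.arange(n)) / (n - 1), 0.0]])  # orness = alpha
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0, alpha],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n]

print(minimax_disparity_owa(n=5, alpha=0.7))
```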
Abstract:
In the last two decades there have been substantial developments in the mathematical theory of inverse optimization problems, and their applications have expanded greatly. In parallel, time series analysis and forecasting have become increasingly important in fields of research such as data mining, economics, business, engineering, medicine, politics, and many others. Despite the wide use of linear programming in forecasting models, not a single application of inverse optimization has been reported in the forecasting literature for the case where time series data are available. The goal of this paper is therefore to introduce inverse optimization into the forecasting field, and to provide a streamlined approach to time series analysis and forecasting using inverse linear programming. An application demonstrates the use of the inverse forecasting approach developed in this study. © 2007 Elsevier Ltd. All rights reserved.
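As a generic illustration of the inverse linear programming the paper builds on (an Ahuja-Orlin style sketch, not the paper's forecasting formulation): given an observed solution x_obs, find the smallest L1 adjustment of the nominal cost vector that makes x_obs optimal, imposing optimality through LP duality:

```python
import numpy as np
from scipy.optimize import linprog

def inverse_lp(c0, A, b, x_obs):
    """Find the smallest L1 change to c0 making x_obs optimal for
        min c^T x  s.t.  A x >= b, x >= 0.
    Optimality is enforced via duality: A^T y <= c, y >= 0, b^T y = c^T x_obs.
    Decision variables z = [c, y, e_plus, e_minus], with c = c0 + e_plus - e_minus."""
    n, m = len(c0), len(b)
    obj = np.r_[np.zeros(n + m), np.ones(2 * n)]          # minimise sum(e+ + e-)
    # c - e+ + e- = c0
    A_eq = np.c_[np.eye(n), np.zeros((n, m)), -np.eye(n), np.eye(n)]
    b_eq = np.array(c0, dtype=float)
    # b^T y - x_obs^T c = 0  (strong duality row)
    dual_row = np.r_[-np.asarray(x_obs, float), np.asarray(b, float), np.zeros(2 * n)]
    A_eq = np.vstack([A_eq, dual_row]); b_eq = np.r_[b_eq, 0.0]
    # A^T y <= c  ->  -c + A^T y <= 0  (dual feasibility)
    A_ub = np.c_[-np.eye(n), np.asarray(A, float).T, np.zeros((n, 2 * n))]
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)] * (m + 2 * n))
    return res.x[:n]                                      # adjusted cost vector

# Toy use: under c0 = (1, 2) the point x_obs = (0, 1) is not optimal for
# min c^T x s.t. x1 + x2 >= 1, x >= 0; the inverse problem shifts the cost minimally.
print(inverse_lp(c0=[1.0, 2.0], A=[[1.0, 1.0]], b=[1.0], x_obs=[0.0, 1.0]))
```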