993 results for Inverse methods


Relevance: 30.00%

Publisher:

Abstract:

This thesis presents an in-depth analysis of how two direct methods, Lucas-Kanade and Inverse Compositional, can be applied to RGB-D images, and evaluates their capability and accuracy in a series of synthetic experiments. These experiments simulate RGB images, depth (D) images and RGB-D images so that the behaviour of the methods in each combination can be assessed. Moreover, the methods are analyzed without any additional technique that modifies the original algorithm or aids it in its search for a global optimum, unlike most of the articles found in the literature. The goal is to understand when and why these methods converge or diverge, so that in the future the knowledge extracted from the results presented here can effectively help a potential implementer. After reading this thesis, the implementer should be able to decide which algorithm best fits a particular task, and should also know which problems have to be addressed in each algorithm so that an appropriate correction can be implemented using additional techniques. These additional techniques are outside the scope of this thesis; however, they are reviewed from the literature.
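
To make the distinction between the two direct methods concrete, here is a minimal sketch of inverse compositional alignment restricted to a pure 2-D translation, written in Python with NumPy/SciPy. The function name, the translation-only warp and the convergence settings are illustrative assumptions, not details taken from the thesis; the point is only that gradients, steepest-descent images and the Hessian are precomputed on the template, which is what separates the inverse compositional scheme from forward Lucas-Kanade.

```python
# Minimal inverse-compositional alignment for a pure 2-D translation (illustrative).
import numpy as np
from scipy.ndimage import shift as warp_translate

def inverse_compositional_translation(template, image, n_iters=50, tol=1e-4):
    """Estimate p = (dy, dx) such that image(x + p) approximates template(x)."""
    # Precomputation on the template only -- the defining trait of the inverse
    # compositional scheme (forward Lucas-Kanade redoes this every iteration).
    gy, gx = np.gradient(template.astype(float))
    sd = np.stack([gy.ravel(), gx.ravel()], axis=1)   # steepest-descent images
    H_inv = np.linalg.inv(sd.T @ sd)                  # 2x2 Gauss-Newton Hessian, inverted once

    p = np.zeros(2)
    for _ in range(n_iters):
        warped = warp_translate(image.astype(float), -p, order=1)  # image sampled at x + p
        error = (warped - template).ravel()
        dp = H_inv @ (sd.T @ error)                   # incremental warp parameters
        p -= dp                                       # compositional update with inverted warp
        if np.linalg.norm(dp) < tol:
            break
    return p
```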

Relevance: 30.00%

Publisher:

Abstract:

In recent years VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sampling period, the set of endogenous variables, and deterministic terms). For VAR models we use the Granger causality test to verify the ability of one variable to forecast another; in the case of cointegration we use VECM models to jointly estimate the long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decomposition to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time-series data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence for the money endogeneity hypothesis by evaluating the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced by chapter 1, which outlines the context, motivation and purpose of this research, while the structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results highlight a causal relationship running from the deviation between nominal GDP growth and target GDP growth to changes in the three-month market interest rate. The same analysis does not appear to confirm the existence of a significant reverse causal relationship from changes in the market interest rate to the deviation between nominal and target GDP growth rates. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and, more generally, casts doubt on the applicability of the Taylor rule and of all conventional monetary policy rules to the case at hand. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more specifically the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research challenge the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex goal, namely the regulation of the financial system, with particular reference to the relations between creditors and debtors and the relative solvency of economic units. Chapter 3 analyses the supply of loans, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate money endogeneity in the short and long run with a study of the United States during the two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, the effects of financial innovation on the lending channel are considered, using the loan series adjusted for securitization, in order to verify whether the US banking system is encouraged to seek cheaper sources of funding, such as securitization, in the event of restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and evaluate the effects of the money supply by analysing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and highlighting that economic agents are motivated to increase securitization as a pre-emptive hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the old-age index and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data extracted from the OECD and Eurostat databases. The impulse response functions and the variance decomposition highlight a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely linked to ageing may be the main driver of health expenditure in the short-to-medium term. Good healthcare management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of the elderly may be necessary to lower the per capita demand for health and social services.
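
As a rough illustration of the reduced-form VAR workflow described above (lag selection, Granger causality in both directions, impulse responses and variance decomposition), a sketch using the statsmodels library might look as follows. The file name and the column names gdp_gap and rate_change are placeholders for the Euro-area series used in the thesis, not the actual data set.

```python
# Sketch of a reduced-form VAR analysis with statsmodels (file/column names are placeholders).
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

# Quarterly Euro-area data: deviation of nominal GDP growth from target and the
# change in the three-month market rate (hypothetical column names).
df = pd.read_csv("euro_area_quarterly.csv", index_col=0, parse_dates=True)
endog = df[["gdp_gap", "rate_change"]]

results = VAR(endog).fit(maxlags=8, ic="aic")          # lag length chosen by AIC
lag = max(results.k_ar, 1)
print(results.summary())

# Granger causality in both directions (the test asks whether the second column
# helps predict the first).
grangercausalitytests(endog[["rate_change", "gdp_gap"]], maxlag=lag)   # gdp_gap -> rate_change?
grangercausalitytests(endog[["gdp_gap", "rate_change"]], maxlag=lag)   # rate_change -> gdp_gap?

# Impulse responses and forecast-error variance decomposition over 12 quarters.
irf = results.irf(12)
fevd = results.fevd(12)
print(fevd.summary())
```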

Relevance: 30.00%

Publisher:

Abstract:

Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class, and from these to determine the matrix parameters. This approach is known as a "demographic inverse problem" and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies to lower the population to values before 2008, when there was no significant interaction with bathers. These strategies were direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected the prey of all medusa sizes. Our model fit the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or reduction of prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
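
A stripped-down version of the demographic inverse problem can be posed as a constrained least-squares fit of the projection matrix to the census time series. The sketch below, in Python, recovers a non-negative stage-structured matrix row by row with non-negative least squares; it is a simplified stand-in for the full quadratic-programming formulation (which would also impose survival-column constraints), and all names and the synthetic three-stage example are illustrative.

```python
# Sketch: recover a non-negative stage-structured projection matrix A from census
# time series, assuming n_{t+1} = A n_t (a simplified form of the inverse problem).
import numpy as np
from scipy.optimize import nnls

def estimate_projection_matrix(counts):
    """counts: array of shape (T, k), densities of k size classes at T census dates."""
    X = counts[:-1]                      # predictors: n_t
    Y = counts[1:]                       # responses:  n_{t+1}
    k = counts.shape[1]
    A = np.zeros((k, k))
    for i in range(k):                   # each row of A is a non-negative least-squares fit
        A[i], _ = nnls(X, Y[:, i])
    return A

# Synthetic three-stage example (illustrative numbers, not field data).
rng = np.random.default_rng(0)
true_A = np.array([[0.0, 0.0, 5.0],      # fecundity of the largest class
                   [0.3, 0.4, 0.0],      # growth / stasis
                   [0.0, 0.2, 0.6]])
n = np.array([100.0, 20.0, 5.0])
series = [n]
for _ in range(30):
    n = true_A @ n * rng.lognormal(0.0, 0.05, 3)      # projection with small noise
    series.append(n)
print(np.round(estimate_projection_matrix(np.array(series)), 2))
```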

Relevance: 30.00%

Publisher:

Abstract:

In this paper, numerical simulations are used in an attempt to find optimal source profiles for high frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for the determination of the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second method is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first optimization method is more flexible as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
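
For the second, regularization-based approach, the core computation reduces to a penalized least-squares solve once the sensitivity matrix is available. The following sketch shows a zero-order Tikhonov solve for complex rung drives against a uniform target field; the matrix, its dimensions and the sweep of regularization weights are placeholders, not the FDTD-derived sensitivities used in the paper.

```python
# Zero-order Tikhonov solve for multi-element RF drives against a uniform target field.
# The sensitivity matrix here is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_rungs = 2000, 16
S = rng.standard_normal((n_voxels, n_rungs)) + 1j * rng.standard_normal((n_voxels, n_rungs))
b_target = np.ones(n_voxels, dtype=complex)            # homogeneous target field

def tikhonov_drives(S, b, lam):
    """Solve min_w ||S w - b||^2 + lam ||w||^2."""
    return np.linalg.solve(S.conj().T @ S + lam * np.eye(S.shape[1]), S.conj().T @ b)

for lam in (1e-3, 1e-1, 1e1):                          # sweep the regularization weight
    w = tikhonov_drives(S, b_target, lam)
    b = S @ w
    print(f"lambda={lam:g}  drive power={np.linalg.norm(w)**2:.3f}  "
          f"field inhomogeneity={np.std(np.abs(b)) / np.mean(np.abs(b)):.3f}")
```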

Relevance: 30.00%

Publisher:

Abstract:

Calculating the potentials on the heart’s epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate the noise. The performance of the methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise level is low; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low-to-medium noise situations.
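
As an illustration of one of the three parameter-choice methods, the sketch below selects the zero-order Tikhonov parameter by minimizing the GCV function computed from the SVD of the transfer matrix, then solves the regularized problem. The toy matrix and noise level are placeholders and not the sphere or heart-torso models of the paper.

```python
# Choosing the zero-order Tikhonov parameter by generalized cross validation (GCV)
# from the SVD of the transfer matrix; the toy problem below is illustrative only.
import numpy as np

def gcv_lambda(A, b, lambdas):
    """Return the lambda in `lambdas` minimizing the GCV function for
    min_x ||A x - b||^2 + lam ||x||^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                         # Tikhonov filter factors
        residual2 = np.sum(((1 - f) * beta)**2) + (b @ b - beta @ beta)
        g = residual2 / (m - np.sum(f))**2              # GCV function G(lam)
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 100)) @ np.diag(0.95**np.arange(100))   # ill-conditioned map
x_true = rng.standard_normal(100)
b = A @ x_true + 0.01 * rng.standard_normal(300)
lam = gcv_lambda(A, b, np.logspace(-8, 2, 60))
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(100), A.T @ b)
print(f"selected lambda = {lam:.2e}, relative error = "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```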

Relevance: 30.00%

Publisher:

Abstract:

Aquifers are a vital water resource whose quality must be safeguarded or, if damaged, restored. The extent and complexity of aquifer contamination are related to the characteristics of the porous medium, the influence of boundary conditions, and the biological, chemical and physical processes involved. Since the 1990s, scientific effort has grown rapidly to find efficient ways of estimating the hydraulic parameters of aquifers and, from these, recovering the contaminant source position and its release history. To simplify and understand the influence of these various factors on aquifer phenomena, researchers commonly use numerical and controlled experiments. This work presents some of these methods, applying and comparing them on data collected during laboratory, field and numerical tests. The work is structured in four parts, which present the results and conclusions for the specific objectives.

Relevance: 30.00%

Publisher:

Abstract:

The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
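
A minimal sketch of this Bayesian set-up, reduced to its bare ingredients, is given below: a Gaussian-process prior over a latent field, a nonlinear observation model, and a random-walk Metropolis sampler exploring the resulting posterior. The toy forward model, kernel length-scale and sampler settings are assumptions for illustration only; the paper's scatterometer likelihood, multimodal-sampler enhancements and sparse sequential approximation are not reproduced here.

```python
# Toy Gaussian-process prior + nonlinear observation model, explored with
# random-walk Metropolis; everything here is a stand-in for the paper's set-up.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)                           # observation locations
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-6 * np.eye(30)
K_inv = np.linalg.inv(K)
Lc = np.linalg.cholesky(K)

def forward(f):                                         # toy nonlinear observation model
    return f**2 + 0.3 * np.sin(4.0 * f)

f_true = Lc @ rng.standard_normal(30)
sigma = 0.05
y = forward(f_true) + sigma * rng.standard_normal(30)

def log_post(f):                                        # GP prior + Gaussian likelihood
    return -0.5 * f @ K_inv @ f - 0.5 * np.sum((y - forward(f))**2) / sigma**2

f, lp, samples = np.zeros(30), log_post(np.zeros(30)), []
for it in range(20000):
    prop = f + 0.02 * (Lc @ rng.standard_normal(30))    # prior-shaped random-walk step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept/reject
        f, lp = prop, lp_prop
    if it % 20 == 0:
        samples.append(f.copy())
post_mean = np.mean(samples, axis=0)
print("RMS error of posterior mean:", np.sqrt(np.mean((post_mean - f_true)**2)))
```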

Relevance: 30.00%

Publisher:

Abstract:

The shape of a plane acoustical sound-soft obstacle is detected from knowledge of the far field pattern for one time-harmonic incident field. Two methods based on solving a system of integral equations for the incoming wave and the far field pattern are investigated. Properties of the integral operators required in order to apply regularization, i.e. injectivity and denseness of the range, are proved.

Relevance: 30.00%

Publisher:

Abstract:

Background: Previous experimental models suggest that vitamin E may ameliorate periodontitis. However, epidemiologic studies show inconsistent evidence in supporting this plausible association. Objective: We aimed to investigate the association between serum α-tocopherol (αT) and γ-tocopherol (γT) and periodontitis in a large cross-sectional US population. Methods: This study included 4708 participants in the 1999–2001 NHANES. Serum tocopherols were measured by HPLC and values were adjusted by total cholesterol (TC). Periodontal status was assessed by mean clinical attachment loss (CAL) and probing pocket depth (PPD). Total periodontitis (TPD) was defined as the sum of mild, moderate, and severe periodontitis. All measurements were performed by NHANES. Results: Means ± SDs of serum αT:TC ratio from low to high quartiles were 4.0 ± 0.4, 4.8 ± 0.2, 5.7 ± 0.4, and 9.1 ± 2.7 μmol/mmol. In multivariate regression models, αT:TC quartiles were inversely associated with mean CAL (P-trend = 0.06), mean PPD (P-trend < 0.001), and TPD (P-trend < 0.001) overall. Adjusted mean differences (95% CIs) between the first and fourth quartile of αT:TC were 0.12 mm (0.03, 0.20; P-difference = 0.005) for mean CAL and 0.12 mm (0.06, 0.17; P < 0.001) for mean PPD, whereas corresponding OR for TPD was 1.65 (95% CI: 1.26, 2.16; P-difference = 0.001). In a dose-response analysis, a clear inverse association between αT:TC and mean CAL, mean PPD, and TPD was observed among participants with relatively low αT:TC. No differences were seen in participants with higher αT:TC ratios. Participants with γT:TC ratio in the interquartile range showed a significantly lower mean PPD than those in the highest quartile. Conclusions: A nonlinear inverse association was observed between serum αT and severity of periodontitis, which was restricted to adults with normal but relatively low αT status. These findings warrant further confirmation in longitudinal or intervention settings.

Relevance: 30.00%

Publisher:

Abstract:

In linear communication channels, spectral components (modes) defined by the Fourier transform of the signal propagate without interactions with each other. In certain nonlinear channels, such as the one modelled by the classical nonlinear Schrödinger equation, there are nonlinear modes (the nonlinear signal spectrum) that also propagate without interacting with each other and without the corresponding nonlinear cross talk, effectively, in a linear manner. Here, we describe in a constructive way how to introduce such nonlinear modes for a given input signal. We investigate the performance of the nonlinear inverse synthesis (NIS) method, in which the information is encoded directly onto the continuous part of the nonlinear signal spectrum. This transmission technique, combined with appropriate distributed Raman amplification, can provide effective eigenvalue division multiplexing with high spectral efficiency, thanks to highly suppressed channel cross talk. The proposed NIS approach can be integrated with any modulation format. Here, we demonstrate numerically the feasibility of merging the NIS technique in a burst mode with high-spectral-efficiency methods, such as orthogonal frequency division multiplexing and Nyquist pulse shaping with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement of up to 4.5 dB, which is comparable to results achievable with multi-step per span digital back propagation.
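
For readers unfamiliar with the continuous nonlinear spectrum, the sketch below computes it for a given input signal using a generic piecewise-constant transfer-matrix discretization of the Zakharov-Shabat scattering problem (the reflection coefficient b/a). This is a textbook-style illustration under a common normalization, not the specific NFT/NIS implementation of the paper.

```python
# Continuous nonlinear spectrum (Zakharov-Shabat reflection coefficient b/a) of a
# signal q(t), via a generic piecewise-constant transfer-matrix method (illustrative).
import numpy as np

def continuous_spectrum(q, t, xi):
    """q: complex samples of the signal; t: uniform time grid; xi: spectral points."""
    h = t[1] - t[0]
    t_end = t[0] + len(q) * h
    spec = np.empty(len(xi), dtype=complex)
    for m, x in enumerate(xi):
        v = np.array([np.exp(-1j * x * t[0]), 0.0], dtype=complex)   # left boundary condition
        for qn in q:
            k = np.sqrt(x**2 + np.abs(qn)**2)
            s = h * np.sinc(k * h / np.pi)              # sin(k h)/k, well defined at k = 0
            T = np.cos(k * h) * np.eye(2) + s * np.array([[-1j * x, qn],
                                                          [-np.conj(qn), 1j * x]])
            v = T @ v
        a = v[0] * np.exp(1j * x * t_end)               # scattering coefficients a, b
        b = v[1] * np.exp(-1j * x * t_end)
        spec[m] = b / a                                 # continuous nonlinear spectrum r(xi)
    return spec

# Example: a weak sech pulse (below the soliton threshold, continuous spectrum only).
t = np.linspace(-20.0, 20.0, 2000)
q = 0.3 / np.cosh(t) + 0j
r = continuous_spectrum(q, t, np.linspace(-3.0, 3.0, 101))
print("peak |r(xi)|:", np.abs(r).max())
```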

Relevance: 30.00%

Publisher:

Abstract:

The evaluation, from experimental data, of physical quantities that enter the electromagnetic Maxwell equations is described as an inverse optical problem. The functional relations between the dependent and independent variables are of transcendental character, and numerical procedures for evaluating the unknowns are widely used. Here we discuss a direct approach to the solution, illustrated by a specific example: the determination of thin-film optical constants from spectrophotometric data. A new algorithm is proposed for parameter evaluation that needs neither an initial guess of the unknowns nor iterative procedures. We thus overcome the intrinsic deficiency of minimization techniques such as gradient search methods, Simplex methods, etc. The price is a need for more computing power, but our algorithm is easily implemented on structures such as grid clusters. We show the advantages of this approach and its potential for generalization to other inverse optical problems.

Relevance: 30.00%

Publisher:

Abstract:

Adaptive critic methods have common roots as generalizations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, nonlinear and nonstationary environments. In this study, a novel probabilistic dual heuristic programming (DHP) based adaptive critic controller is proposed. In contrast to current approaches, the proposed probabilistic DHP adaptive critic method takes the uncertainties of the forward model and the inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterized by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function, which satisfies the Bellman equation, in a linear quadratic control problem. The target value of the critic network is then calculated and shown to be equal to the analytically derived correct value.
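
The linear quadratic validation step can be reproduced in a few lines, since in that setting the Bellman equation has the closed-form solution V(x) = x'Px with P given by the discrete algebraic Riccati equation, and a DHP critic's target is the costate dV/dx = 2Px. The sketch below checks this consistency for arbitrary toy system matrices, which are assumptions and not those used in the study.

```python
# Closed-form Bellman check for a discrete-time LQ problem: V(x) = x' P x with P
# from the Riccati equation; the DHP critic target is the costate 2 P x.
# The system matrices are arbitrary toy values.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])

P = solve_discrete_are(A, B, Q, R)                      # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)       # optimal feedback u = -K x

def V(x):                                               # analytic cost-to-go
    return float(x @ P @ x)

x = np.array([1.0, -2.0])
u = -K @ x
x_next = A @ x + B @ u
bellman_rhs = x @ Q @ x + u @ R @ u + V(x_next)         # stage cost + V(x')
print(f"V(x) = {V(x):.6f}, Bellman right-hand side = {bellman_rhs:.6f}")
print("DHP critic target (costate dV/dx):", 2 * P @ x)
```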

Relevance: 30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 65H10.

Relevance: 30.00%

Publisher:

Abstract:

One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface doesn't directly indicate the location of the active neuronal assemblies. This is the expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measures available. A given electric potential distribution recorded at the scalp can be explained by the activity of infinite different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths with known geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. The head models vary from the computationally simpler spherical models (three or four concentric spheres) to the realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions of the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided spatial distribution is uniform. The second factor is related to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
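
To make the forward/inverse relationship concrete, the following toy sketch builds a random lead-field matrix as a stand-in for a head model, generates scalp potentials from a few focal sources (the forward problem), and recovers a regularized minimum-norm source estimate, one of the simplest distributed inverse solutions. All sizes, the random lead field and the regularization value are illustrative assumptions.

```python
# Toy forward/inverse pair: random lead field L, forward projection of focal sources,
# and a regularized minimum-norm inverse estimate (all values illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_electrodes, n_sources = 128, 5000
L = rng.standard_normal((n_electrodes, n_sources)) / np.sqrt(n_sources)   # stand-in lead field

j_true = np.zeros(n_sources)
j_true[rng.choice(n_sources, 3, replace=False)] = 1.0      # three focal sources
v = L @ j_true + 0.01 * rng.standard_normal(n_electrodes)  # scalp potentials (forward problem)

lam = 0.1                                               # regularization, tied to the noise level
# Minimum-norm estimate: j_hat = L' (L L' + lam I)^(-1) v
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), v)
print("strongest estimated sources:", np.sort(np.argsort(np.abs(j_hat))[-3:]))
print("true source indices:       ", np.sort(np.flatnonzero(j_true)))
```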

Relevance: 30.00%

Publisher:

Abstract:

Nonlinear Fourier transform (NFT) and eigenvalue communication with the use of the nonlinear signal spectrum (both discrete and continuous) have recently been discussed as promising transmission methods to combat fiber nonlinearity impairments. In this paper, for the first time, we demonstrate the generation, detection and transmission performance over transoceanic distances of a 10 Gbaud nonlinear inverse synthesis (NIS) based signal (4 Gb/s line rate), in which the transmitted information is encoded directly onto the continuous part of the signal's nonlinear spectrum. By applying effective digital signal processing techniques, a reach of 7344 km was achieved with a bit-error rate (BER) of 2.1×10⁻², below the 20% FEC threshold. This represents an improvement by a factor of ~12 in the data capacity × distance product compared with other previously demonstrated NFT-based systems, showing a significant advance in the active research area of NFT-based communication systems.