21 results for non-linear dynamic system and DDoS

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, the algorithm can represent any winding distribution. Compared to a FEA, it reduces computing time and allows parameter values to be changed directly in a user interface, without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space vector method, which also allows the machine to be analyzed during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking into account all the possible interactions between stator and rotor mmf harmonic components; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is used in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model in which one end of the motor shaft is constrained, to simulate the presence of a mechanical bearing, while the other end is free, supported only by the radial forces developed by the interaction between the magnetic fields, realizing a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
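
As a pointer to the physics behind the radial-force calculation, the following minimal sketch (not the thesis's model; the field amplitudes, pole-pair number and machine dimensions are made-up values) integrates the Maxwell stress tensor over a sampled airgap flux-density profile, keeping only the dominant radial-field term. The interaction between a p-pole-pair torque harmonic and a p+1 levitation harmonic is what yields the net single-sided pull exploited in bearingless machines.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def net_radial_force(b_r, radius, stack_length):
    # Net radial force on the rotor from the Maxwell stress tensor,
    # keeping only the radial-field term sigma = B_r^2 / (2 mu0).
    theta = np.linspace(0.0, 2.0 * np.pi, b_r.size, endpoint=False)
    sigma = b_r**2 / (2.0 * MU0)
    dA = radius * stack_length * (2.0 * np.pi / b_r.size)
    return (np.sum(sigma * np.cos(theta)) * dA,
            np.sum(sigma * np.sin(theta)) * dA)

# Airgap field: main p-pole-pair harmonic plus a small (p+1) levitation
# harmonic; their interaction produces the net single-sided pull.
p = 2
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
b = 0.9 * np.cos(p * theta) + 0.05 * np.cos((p + 1) * theta)
print(net_radial_force(b, radius=0.05, stack_length=0.1))
```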

Relevance:

100.00%

Publisher:

Abstract:

Understanding the complex dynamics of beam-halo formation and evolution in circular particle accelerators is crucial for the design of current and future rings, particularly those utilizing superconducting magnets, such as the CERN Large Hadron Collider (LHC), its luminosity upgrade HL-LHC, and the proposed Future Circular Hadron Collider (FCC-hh). A recent diffusive framework, which describes the evolution of the beam distribution by means of a Fokker-Planck equation whose diffusion coefficient is derived from the Nekhoroshev theorem, has been proposed to describe the long-term behaviour of beam dynamics and particle losses. In this thesis, we discuss the theoretical foundations of this framework and propose an original measurement protocol, based on collimator scans, for measuring the Nekhoroshev-like diffusion coefficient from beam-loss data. The available LHC collimator-scan data, unfortunately collected without the proposed measurement protocol, have been successfully analysed using the proposed framework. The approach is also applied to datasets from detailed measurements of the impact of so-called long-range beam-beam compensators on beam losses, also at the LHC. Furthermore, dynamic indicators have been studied as a tool for exploring the phase-space properties of realistic accelerator lattices in single-particle tracking simulations. By first examining the performance of known and new indicators in detecting the chaotic character of initial conditions for a modulated Hénon map, and then applying this knowledge to realistic accelerator lattices, we try to identify a connection between the presence of chaotic regions in phase space and Nekhoroshev-like diffusive behaviour, providing new tools to the accelerator physics community.
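
For intuition about the diffusive framework, here is a minimal finite-difference sketch of the Fokker-Planck equation drho/dt = d/dI ( D(I) drho/dI ) in the action variable I, with a Nekhoroshev-like coefficient of the form D(I) ~ exp(-2 (I*/I)^(1/(2 kappa))) and an absorbing boundary standing in for a collimator. All parameter values are hypothetical; the thesis works with the actual LHC data and protocols.

```python
import numpy as np

def nekhoroshev_D(I, I_star=1.0, kappa=0.33, eps=1e-12):
    # Nekhoroshev-like diffusion coefficient, exponentially suppressed
    # below I_star (I_star and kappa are hypothetical values).
    return np.exp(-2.0 * (I_star / (I + eps)) ** (1.0 / (2.0 * kappa)))

n, I_max, dt, steps = 200, 2.0, 5e-5, 20000
I = np.linspace(0.0, I_max, n)
dI = I[1] - I[0]
rho = np.exp(-I / 0.2)          # initial beam distribution in action
rho[-1] = 0.0                   # absorbing boundary = collimator
m0 = rho.sum() * dI
D_half = nekhoroshev_D(0.5 * (I[:-1] + I[1:]))  # D at cell interfaces

lost = 0.0
for _ in range(steps):
    flux = -D_half * np.diff(rho) / dI    # Fick's law, J = -D drho/dI
    rho[1:-1] -= dt * np.diff(flux) / dI  # drho/dt = -dJ/dI
    rho[-1] = 0.0
    lost += flux[-1] * dt                 # beam loss through the collimator
print("lost fraction:", lost / m0)
```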

Relevance:

100.00%

Publisher:

Abstract:

Restoring correct implant kinematics and providing good ligament balance and patellar tracking are mandatory to improve the clinical and functional outcome after a Total Knee Replacement. Surgical navigation systems are a reliable and accurate tool to help the surgeon achieve these goals. The aim of the present study was to use a navigation system with an intra-operative surgical protocol to evaluate and determine an optimal implant kinematics during a Total Knee Replacement.

Relevance:

100.00%

Publisher:

Abstract:

In this Thesis a series of numerical models for the evaluation of the seasonal performance of reversible air-to-water heat pump systems coupled to residential and non-residential buildings is presented. Exploiting the energy saving potential linked to the adoption of heat pumps is a hard task for designers, because their energy performance is influenced by several factors, such as the variability of the external climate, the modulation capacity of the heat pump, the system control strategy and the configuration of the hydronic loop. The aim of this work is to study all these aspects in detail. In the first part of this Thesis a series of models which use a temperature-class approach for the prediction of the seasonal performance of reversible air-source heat pumps is presented. An innovative methodology for the calculation of the seasonal performance of an air-to-water heat pump is proposed as an extension of the procedure reported in the European standard EN 14825. This methodology can be applied not only to single-stage air-to-water heat pumps (On-off HPs) but also to multi-stage (MSHPs) and inverter-driven units (IDHPs). In the second part, dynamic simulation is used with the aim of optimizing the control of the heat pump and of the HVAC plant. A series of dynamic models, developed by means of TRNSYS, is presented to study the behavior of On-off HPs, MSHPs and IDHPs. The main goal of these dynamic simulations is to show the influence on the seasonal performance of the system of the heat pump control strategy and of the layout of the hydronic loop used to couple the heat pump to the emitters. A particular focus is given to the modeling of the energy losses linked to on-off cycling.
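
To fix ideas on the temperature-class (bin) approach, the toy calculation below accumulates building heat demand and heat pump electricity use over outdoor-temperature bins to form a seasonal COP, in the spirit of EN 14825. The bin hours, loads and COP values are illustrative, not values from the standard or the thesis.

```python
# Outdoor-temperature bins: (T_out [degC], hours, building load [kW], COP).
# All numbers are illustrative.
bins = [
    (-7, 50, 8.0, 2.0),
    (-2, 300, 6.0, 2.5),
    (2, 800, 4.5, 3.0),
    (7, 1200, 3.0, 3.6),
    (12, 900, 1.5, 4.2),
]

heat = sum(hours * load for _, hours, load, _ in bins)          # kWh heat
elec = sum(hours * load / cop for _, hours, load, cop in bins)  # kWh input
print(f"seasonal heat demand: {heat:.0f} kWh, seasonal COP: {heat / elec:.2f}")
```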

Relevance:

100.00%

Publisher:

Abstract:

Abstract. This thesis presents a discussion of a few specific topics regarding the low-velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low-velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations with a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of the delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions and on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most of the contributions available on this subject, the results of compression-after-impact tests on thin laminates are described, in which global specimen buckling was not prevented. Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases.
On the other hand, a compression preload exhibits its most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of obtaining a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted. Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results only as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and in distinguishing between what can be modelled without taking material degradation into account and what requires an appropriate simulation of damage.

Relevance:

100.00%

Publisher:

Abstract:

This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades thanks to technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination, and the theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed; in particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes, information about consumers' reservation prices is incomplete, and the production technology is characterized by size economies. The model provides insights on the sizes of the products found in the market. Four equilibrium regions are identified, depending on the intensity of size economies relative to consumers' valuation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not the internet should be regulated; one of the main questions posed is whether the telecom companies owning the networks constituting the internet should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.

Relevance:

100.00%

Publisher:

Abstract:

In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several factors: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard, being generalizations of MILP problems, which are NP-hard themselves; in general, MINLPs are also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different approaches one can use to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs together with the most common methods used to solve them; these techniques play a fundamental role in the resolution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these methods has advantages and disadvantages; thus the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
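
As a toy illustration of why non-convexity matters (not one of the methods developed in the thesis), the sketch below minimizes a small non-convex MINLP by enumerating the integer variable and attacking each continuous subproblem with a multistart local solver: a single local solve can get trapped in the wrong well.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, y):
    # Non-convex in the continuous x (double well), coupled to integer y.
    return (x**2 - 1.0)**2 + 0.5 * (x - y)**2 + 0.3 * y**2

best = None
for y in range(-2, 3):                       # enumerate the integer variable
    for x0 in (-2.0, 0.0, 2.0):              # multistart against non-convexity
        res = minimize(lambda x: f(x[0], y), x0=[x0], method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, res.x[0], y)

print("min f = %.4f at x = %.4f, y = %d" % best)
```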

Relevance:

100.00%

Publisher:

Abstract:

The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties: it defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on household net wealth. The review stresses the fact that a large part of the literature explains households' debt holdings as a function, among other variables, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between the assets and liabilities held by Italian households, with estimation on pooled cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial-conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models is that they allow true state dependence, via the lagged variable, and unobserved heterogeneity, via the specification of individual effects, to be accounted for simultaneously. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using the information on net wealth provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress; here, households in financial difficulties are identified as those holding amounts of net wealth lower than the first quartile of the net wealth distribution. Estimation is conducted with four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. The results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous ones, over-estimate such persistence.
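
A minimal sketch of the dynamic-probit idea on simulated data (the SHIW variables are replaced by made-up ones): the lagged distress indicator enters the index function, and ignoring the household-specific effect, as the pooled estimator does, lets the lag coefficient absorb unobserved heterogeneity, which is the over-estimation of persistence discussed above.

```python
import numpy as np
import statsmodels.api as sm

# Simulated panel standing in for the SHIW: d_it = 1 marks financial
# distress, x_it a covariate, u_i an unobserved household effect.
rng = np.random.default_rng(0)
n, T = 2000, 5
x = rng.normal(size=(n, T))
u = rng.normal(size=n)
d = np.zeros((n, T), dtype=int)
for t in range(1, T):
    latent = -1.0 + 0.8 * d[:, t - 1] - 0.5 * x[:, t] + 0.5 * u
    d[:, t] = (latent + rng.normal(size=n) > 0).astype(int)

# Pooled dynamic probit: P(d_it = 1) = Phi(a + rho * d_i,t-1 + b * x_it).
# Ignoring u_i lets the lag pick up unobserved heterogeneity, so the
# estimated persistence overstates true state dependence.
y = d[:, 1:].ravel()
X = sm.add_constant(np.column_stack([d[:, :-1].ravel(), x[:, 1:].ravel()]))
print(sm.Probit(y, X).fit(disp=0).params)
```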

Relevance:

100.00%

Publisher:

Abstract:

The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed P- and S-wave displacement spectra to estimate the spectral parameters, i.e. the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded by the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green function approach is a very useful tool to study the seismic source properties, because Empirical Green Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
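
A minimal sketch of the frequency-domain step, assuming a standard Brune-type spectral shape (omega-square model) with a lumped attenuation term t*; the data are synthetic and the parameter values hypothetical. The fit returns the low-frequency plateau Omega0 (proportional to seismic moment) and the corner frequency fc (related to source size).

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc, t_star):
    # Omega(f) = Omega0 * exp(-pi f t*) / (1 + (f/fc)^2)
    return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 2, 200)                        # 0.1 to 100 Hz
rng = np.random.default_rng(1)
obs = brune(f, 1e-6, 5.0, 0.02) * rng.lognormal(sigma=0.1, size=f.size)

popt, _ = curve_fit(brune, f, obs, p0=[1e-7, 1.0, 0.01])
print("Omega0 = %.2e, fc = %.2f Hz, t* = %.3f s" % tuple(popt))
```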

Relevance:

100.00%

Publisher:

Abstract:

The Thermodynamic Bethe Ansatz (TBA) analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low-energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities has made it possible to conjecture a perturbed-CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
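
For a flavour of the numerics, the sketch below iterates a single-node TBA equation, eps(th) = m r cosh(th) - (phi * L)/(2 pi) with L = log(1 + e^(-eps)), and evaluates the effective central charge c_eff(r) = (3 r / pi^2) Int cosh(th) L(th) dth. With the kernel phi set to zero (a free fermion, not the extended-CP^N system of the thesis), c_eff should approach 1/2 in the UV limit r -> 0.

```python
import numpy as np

th = np.linspace(-30.0, 30.0, 4001)
dth = th[1] - th[0]

def c_eff(r, kernel=None, iters=200):
    eps = r * np.cosh(th)
    for _ in range(iters):
        L = np.log1p(np.exp(-np.clip(eps, -50.0, 700.0)))
        conv = 0.0
        if kernel is not None:
            # (phi * L)(th) on the grid, for a non-trivial scattering kernel.
            conv = dth * np.convolve(kernel, L, mode="same") / (2.0 * np.pi)
        eps = r * np.cosh(th) - conv
    L = np.log1p(np.exp(-np.clip(eps, -50.0, 700.0)))
    return 3.0 * r / np.pi**2 * np.sum(np.cosh(th) * L) * dth

print(c_eff(1e-4))  # approaches the free-fermion value c = 1/2 as r -> 0
```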

Relevance:

100.00%

Publisher:

Abstract:

This thesis is a compilation of six papers that the author has written together with Alberto Lanconelli (chapters 3, 5 and 8) and Hyun-Jung Kim (chapter 7). The logical thread that links all these chapters together is the interest in analyzing and approximating the solutions of certain stochastic differential equations using the so-called Wick product as the basic tool. In the first chapter we present arguably the most important achievement of this thesis, namely the generalization to multiple dimensions of a Wick-Wong-Zakai approximation theorem proposed by Hu and Oksendal. By exploiting the relationship between the Wick product and the Malliavin derivative, we propose an original reduction method which allows us to approximate semi-linear systems of stochastic differential equations of the Itô type. Furthermore, in chapter 4 we present a non-trivial extension of the aforementioned results to the case in which the system of stochastic differential equations is driven by a multi-dimensional fractional Brownian motion with Hurst parameter bigger than 1/2. In chapter 5 we employ our approach and present a "short time" approximation for the solution of the Zakai equation from non-linear filtering theory, and provide an estimate of the speed of convergence. In chapters 6 and 7 we study some properties of the unique mild solution of the stochastic heat equation driven by spatial white noise of the Wick-Skorohod type. In particular, by means of our reduction method we obtain an alternative derivation of the Feynman-Kac representation of the solution, we find its optimal Hölder regularity in time and space, and we present a Feynman-Kac-type closed form for its spatial derivative. Chapter 8 treats a somewhat different topic: we investigate some probabilistic aspects of the unique global strong solution of a two-dimensional system of semi-linear stochastic differential equations describing a predator-prey model perturbed by Gaussian noise.
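
As background for the fBm-driven results, a minimal sketch of exact fBm simulation on a grid via Cholesky factorization of the covariance Cov(B_H(s), B_H(t)) = (s^2H + t^2H - |t - s|^2H)/2; the grid size and Hurst parameter are arbitrary choices here.

```python
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    # Exact (Cholesky) simulation of fractional Brownian motion B_H.
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)            # t = 0 handled separately
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

path = fbm(n=500, H=0.7)                    # H > 1/2: long-range dependence
print(path[:5])
```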

Relevance:

100.00%

Publisher:

Abstract:

Catechol (1,2-dihydroxybenzene) is a privileged structural motif among natural antioxidants like flavonoids, owing to its reactivity with alkylperoxyl radicals, which is due to the stability of the semiquinone radical. The relevance and mechanism of this non-conventional antioxidant chemistry in heterogeneous biomimetic systems (aqueous micelles and unilamellar liposomes) are explored for the first time in Chapter 1. The results show antioxidant behaviour that surpasses that of nature's premier antioxidant α-tocopherol and relies on the cross-dismutation of alkylperoxyl and hydroperoxyl radicals at the water-lipid interface, with regeneration of the catechol function from the oxidized quinone. The design and synthesis of new biomimetic catechol-type antioxidants by conjugation of thiols (e.g. cysteine) with quinones highlighted an unusual 1,6-type regioselectivity, which had been previously reported but never fully rationalized. Owing to its importance both in nature and in the development of new antioxidants, we investigated it in detail in Chapter 2. We were able to prove the onset of a radical-chain mechanism, mediated by thiyl and thiosemiquinone radicals, at the basis of the "anomalous nucleophilic addition" of thiols to ortho-quinones, which paves the way to a better understanding of the chemistry of such systems. The oxidation of catechols to the corresponding quinones is also a key reaction in the biosynthesis of melanins, mediated by the enzyme Tyrosinase.

Relevance:

100.00%

Publisher:

Abstract:

Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies to solve these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (the forward operator) and of prior knowledge about the solution, but fail to incorporate knowledge directly from data. On the other hand, the more recent learned approaches can easily learn the intricate statistics of images from a large set of data, but do not have a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two different reconstruction strategies for the solution of highly non-linear ill-posed inverse problems. Mathematical formulations and numerical approaches for image IPs, including linear as well as strongly non-linear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of non-linear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
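
A minimal sketch of the regularized Gauss-Newton iteration that the EIT approach unrolls, on a toy nonlinear forward operator: in the thesis the regularizer is learned by a data-adaptive network, while here a simple Tikhonov gradient stands in, and F, jac and the data are stand-ins as well.

```python
import numpy as np

def F(x):                        # toy nonlinear forward operator
    return np.tanh(A @ x)

def jac(x):                      # its Jacobian: diag(1 - tanh^2) @ A
    return (1.0 - np.tanh(A @ x) ** 2)[:, None] * A

def reg_grad(x, lam=1e-2):       # plug-in regularizer gradient (Tikhonov);
    return lam * x               # a learned network would replace this

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20)) / np.sqrt(20)
x_true = rng.normal(size=20)
y = F(x_true) + 0.01 * rng.normal(size=40)

x = np.zeros(20)
for _ in range(10):              # unrolled, fixed number of GN iterations
    J = jac(x)
    g = J.T @ (F(x) - y) + reg_grad(x)
    H = J.T @ J + 1e-2 * np.eye(20)     # damped Gauss-Newton system
    x -= np.linalg.solve(H, g)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```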

Relevance:

100.00%

Publisher:

Abstract:

This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, the fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. We then introduce the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. We then focus on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained by using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type; the ggBm is of course non-Markovian. Throughout the work we remark that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t), all of which provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm is defined canonically in the so-called grey noise space; however, we are able to provide a characterization of it that is independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular it generalizes Brownian motion and fractional Brownian motion as well. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation can be interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion; the time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t). We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
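
A minimal sketch of the subordination construction Y(t) = X(l(t)): a positive stable(alpha) variable is generated with Kanter's representation, cumulated into a stable subordinator S(tau), inverted by first passage to get l(t), and used as a random clock for a Brownian motion X. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, dtau = 0.7, 200_000, 1e-4

# Kanter's representation of a positive stable(alpha) random variable
# (Laplace transform exp(-lambda^alpha)), for 0 < alpha < 1.
V = rng.uniform(1e-9, np.pi, n)
E = rng.exponential(1.0, n)
xi = (np.sin(alpha * V) / np.sin(V) ** (1.0 / alpha)
      * (np.sin((1.0 - alpha) * V) / E) ** ((1.0 - alpha) / alpha))

S = np.cumsum(dtau ** (1.0 / alpha) * xi)   # stable subordinator S(tau)
X = np.cumsum(np.sqrt(dtau) * rng.standard_normal(n))  # Brownian motion

t_obs = np.linspace(0.01, 0.5 * S[-1], 50)  # observation times
idx = np.searchsorted(S, t_obs)             # l(t) = idx * dtau (first passage)
Y = X[idx]                                  # subordinated path Y(t) = X(l(t))
print(Y[:5])
```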

Relevance:

100.00%

Publisher:

Abstract:

The Italian radio telescopes are currently undergoing a major upgrade in response to the growing demand for deep radio observations, such as surveys of large sky areas or observations of vast samples of compact radio sources. The optimal employment of the Italian antennas, originally constructed mainly for VLBI activities and provided with a control system (FS - Field System) not tailored to single-dish observations, required important modifications, in particular of the guiding software and of the data acquisition system. The production of a completely new control system called ESCS (Enhanced Single-dish Control System) for the Medicina dish started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry; ESCS is also planned to be installed at the Noto site. A substantial part of this thesis work consisted in designing and developing subsystems within ESCS, in order to provide the software with tools to carry out large maps, spanning from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of standard single-dish output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed, while waiting for the SRT to be completed, on the Medicina antenna: an 18-26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is the only K-band multi-feed receiver currently available worldwide. The commissioning of the overall hardware/software system constituted a considerable section of the thesis work. Tests were carried out to verify the system stability and its capabilities, down to sensitivity levels which had never been reached at Medicina with the previous observing techniques and hardware devices. The aim was also to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to cover large areas of the sky quickly (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project aims at a full-northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed for ESCS itself and for the multi-feed/backend system. The KNoWS group, which I am part of, supported the commissioning activities, also providing map-making and source-extraction tools in order to complete the necessary data reduction pipeline and assess the general scientific capabilities of the system. The K-band observations, carried out in several sessions over the December 2008 - March 2010 period, were accompanied by the realisation of a 5 GHz test survey during the summertime, which is not suitable for high-frequency observations. This activity was conceived to check the new analogue backend separately from the multi-feed receiver, and to simultaneously produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar-cap survey to complete PMN-GB6 and provide an all-sky coverage at 5 GHz).
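
As a sketch of the map-making step downstream of the On-The-Fly scans, the toy code below grids simulated time-ordered samples (x, y, T) onto a regular map by Gaussian-kernel convolution; the scan pattern, kernel width and noise level are made-up, and the actual ESCS/KNoWS pipeline is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake OTF raster: 50 constant-speed scan lines across a 1-deg field
# containing one Gaussian source, plus radiometer noise.
x = np.tile(np.linspace(0.0, 1.0, 200), 50)
y = np.repeat(np.linspace(0.0, 1.0, 50), 200)
T = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / (2 * 0.02**2))
T += 0.05 * rng.standard_normal(T.size)

npix, fwhm = 64, 0.03                         # map grid and kernel [deg]
sig = fwhm / 2.3548
gx, gy = np.meshgrid(np.linspace(0, 1, npix), np.linspace(0, 1, npix))
num = np.zeros((npix, npix))
den = np.zeros((npix, npix))
for xi, yi, ti in zip(x, y, T):               # convolve samples onto grid
    w = np.exp(-((gx - xi)**2 + (gy - yi)**2) / (2 * sig**2))
    num += w * ti
    den += w
mapped = num / np.maximum(den, 1e-12)         # weighted-average map
print(mapped.shape, round(float(mapped.max()), 2))
```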