835 results for Idealized model for theory development
Abstract:
This thesis deals with inflation theory, focusing on the Jarrow & Yildirim model, which is widely used today for pricing inflation derivatives. After recalling the main results on short-rate and forward-rate models, the dynamics of the main market components are derived. The most important inflation-indexed derivatives are then explained (zero-coupon swap, year-on-year swap, cap and floor), and their pricing procedures are shown step by step. Calibration is explained and performed both with a common method and with a heuristic, non-standard one. The model is also enriched with credit risk, which makes it possible to account for a possible default of the counterparty of a contract. In this context, the general pricing method is derived, with the introduction of defaultable zero-coupon bonds, and the Monte Carlo method is treated in detail and used to price a concrete example of a contract. Appendices: A: martingale measures, Girsanov's theorem and the change of numeraire. B: some aspects of the theory of Stochastic Differential Equations; in particular, the solution of linear SDEs, and the Feynman-Kac theorem, which shows the connection between SDEs and Partial Differential Equations. C: some useful results about the normal distribution.
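The zero-coupon inflation swap mentioned above admits a well-known model-independent valuation: its fair fixed rate is the break-even rate implied by the nominal and real discount bonds. A minimal sketch (the discount factors below are illustrative assumptions, not figures from the thesis):

```python
import math

def zciis_fair_rate(P_nominal, P_real, T):
    """Fair fixed rate K of a zero-coupon inflation-indexed swap of maturity T.
    The fixed leg pays (1+K)**T - 1, the inflation leg pays I(T)/I(0) - 1, and
    model-independent pricing gives (1+K)**T * P_nominal = P_real."""
    return (P_real / P_nominal) ** (1.0 / T) - 1.0

# Illustrative inputs: 3% nominal and 1% real continuously compounded rates, T = 5y
K = zciis_fair_rate(math.exp(-0.03 * 5), math.exp(-0.01 * 5), T=5)
```

With these inputs K equals exp(0.02) - 1, i.e. roughly the 2% break-even spread between the two curves.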
Abstract:
In this PhD thesis the crashworthiness topic is studied with the perspective of developing a small-scale experimental test able to characterize a material in terms of energy absorption. The material properties obtained are then used to validate a numerical model of the experimental test itself. Consequently, the numerical model, calibrated on the specific material, can be extended to more complex structures and used to simulate their energy absorption behavior. The experimental activity started at the University of Washington in Seattle, WA (USA) and continued at the Second Faculty of Engineering, University of Bologna, Forlì (Italy), where the numerical model for the simulation of the experimental test was implemented and optimized.
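The energy-absorption characterization described above reduces, at its core, to integrating a measured force-displacement curve and normalizing by specimen mass. A minimal sketch on made-up data (an idealized constant crush load, not the thesis's actual test results):

```python
import numpy as np

def specific_energy_absorption(force_N, displacement_m, mass_kg):
    """Absorbed energy = area under the force-displacement curve (trapezoidal
    rule), normalized by crushed mass to give SEA in J/kg."""
    energy_J = np.sum(0.5 * (force_N[1:] + force_N[:-1]) * np.diff(displacement_m))
    return energy_J / mass_kg

# Illustrative: constant 1 kN crush load over 50 mm, 0.1 kg specimen
d = np.linspace(0.0, 0.05, 101)
f = np.full_like(d, 1000.0)
sea = specific_energy_absorption(f, d, 0.1)  # -> 500.0 J/kg
```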
Abstract:
Since the development of quantum mechanics it has been natural to analyze the connection between the classical and quantum mechanical descriptions of physical systems. In particular, one should expect that, in some sense, when quantum mechanical effects become negligible the system behaves as dictated by classical mechanics. One famous relation between the classical and quantum theories is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (that depend on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges, in the classical limit, to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to suitable products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
Abstract:
The study evaluated farmers' communities' approach to the Slow Food vision, their perception of Slow Food's role in supporting their activity, and their appreciation of, and expectations from, participating in the Mother Earth event. The Unified Theory of Acceptance and Use of Technology (UTAUT) model was adopted in an agro-food sector context. A survey was conducted, collecting 120 questionnaires from farmers attending Mother Earth in Turin in 2010. The descriptive statistical analysis showed that both Slow Food membership and participation in the Mother Earth meeting were much appreciated for the support provided to the farmers' business and the contribution to a more sustainable and fair development. A positive social, environmental and psychological impact on farmers also emerged. Results also showed an interesting perspective on the possible universality of the Slow Food and Mother Earth values. Farmers declared that Slow Food supports them by preserving biodiversity, orienting them toward the use of local resources, and reducing chemical inputs. Many farmers mentioned language/culture and administrative/bureaucratic issues as obstacles to becoming a member of the movement and to participating in the event. Participation in Mother Earth offers an opportunity to exchange information with other farmers' communities and to take part in seminars and debates helpful for business development. The absolute majority of positive answers associated with the farmers' willingness to relate to Slow Food and to participate in the next Mother Earth editions negatively influenced the UTAUT model results. A factor analysis showed that the variables associated with the UTAUT constructs Performance Expectancy and Effort Expectancy were consistent, able to explain the construct variability, and reliably measured. Their inclusion in a simpler Technology Acceptance Model could be considered in future research.
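The reliability check mentioned for the Performance Expectancy and Effort Expectancy items is conventionally done with Cronbach's alpha; a minimal sketch on made-up Likert-scale scores (the formula is standard, the data are not from the survey):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Perfectly consistent items give alpha = 1
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha = cronbach_alpha(np.column_stack([x, x, x]))  # -> 1.0
```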
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution to direct or inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different applicative contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
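The core of a non-intrusive PCE like the one implemented in the MATLAB code mentioned above can be sketched in a few lines: project the model output onto probabilists' Hermite polynomials of a standard normal input, then read the mean and variance off the coefficients. This is a generic illustration, not the thesis code:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(model, degree, n_quad=20):
    """Spectral projection: c_k = E[model(X) He_k(X)] / k! for X ~ N(0,1),
    computed with Gauss-Hermite quadrature (probabilists' weight)."""
    x, w = hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)            # normalize weights to the N(0,1) density
    coeffs = np.empty(degree + 1)
    for k in range(degree + 1):
        basis = np.zeros(degree + 1)
        basis[k] = 1.0
        He_k = hermeval(x, basis)           # He_k evaluated at the quadrature nodes
        coeffs[k] = np.sum(w * model(x) * He_k) / math.factorial(k)  # E[He_k^2] = k!
    return coeffs

# Example: Y = X**2 expands as He_2(X) + 1, so mean = 1 and variance = 2! * 1 = 2
c = pce_coefficients(lambda x: x**2, degree=4)
mean = c[0]
variance = sum(math.factorial(k) * c[k]**2 for k in range(1, 5))
```

The mean and variance follow directly from orthogonality of the basis, which is what makes PCE attractive for global sensitivity analysis.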
Abstract:
Rett syndrome (RTT) is a severe neurodevelopmental disorder characterized by cognitive disability that appears in the first months/years of life. Recently, mutations in the X-linked cyclin-dependent kinase-like 5 (CDKL5) gene have been detected in RTT patients characterized by early-onset seizures. CDKL5 is highly expressed in the brain from early postnatal stages to adulthood, suggesting the importance of this kinase for proper brain maturation and function. However, the role(s) of CDKL5 in brain development and the molecular mechanisms whereby CDKL5 exerts its effects are still largely unknown. In order to characterize the role of CDKL5 in brain development, we created a mouse line carrying a targeted conditional knockout allele of Cdkl5. A first behavioral characterization shows that Cdkl5 knockout mice recapitulate several of the clinical features described in CDKL5 patients and are a useful tool for investigating phenotypic and functional aspects of Cdkl5 loss. We used the Cdkl5 knockout mouse model to dissect the role of CDKL5 in hippocampal development and to establish the mechanism(s) underlying its actions. We found that Cdkl5 knockout mice showed increased precursor cell proliferation in the hippocampal dentate gyrus. Interestingly, this region was also characterized by an increased rate of apoptotic cell death, which reduced the final neuron number in spite of the increased proliferation. Moreover, loss of Cdkl5 led to decreased dendritic development of newly generated granule cells. Finally, we identified Akt/GSK3-beta signaling as a target of Cdkl5 in the regulation of neuronal precursor proliferation, survival and maturation. Overall, our findings highlight a critical role of CDKL5/AKT/GSK3-beta signaling in the control of neuron proliferation, survival and differentiation, and suggest that CDKL5-related alterations of these processes during brain development underlie the neurological symptoms of the CDKL5 variant of RTT.
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of data analysis, focusing on their optimization in order to exploit the Planck data fully and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package; the problem of the aliasing effect in the generation of low-resolution maps; and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code BolPol, and of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code has instead been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
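For reference, on the full sky the APS estimator that both the QML and pseudo-Cl methods generalize is simply the variance of the harmonic coefficients per multipole. A toy sketch (the inputs are made up, and real pipelines must additionally correct for masking, beam and noise):

```python
import numpy as np

def angular_power_spectrum(alm_by_l):
    """Full-sky estimator: C_l = (1/(2l+1)) * sum_m |a_lm|^2,
    where alm_by_l[l] holds the 2l+1 coefficients for multipole l."""
    return np.array([np.sum(np.abs(a) ** 2) / (2 * l + 1)
                     for l, a in enumerate(alm_by_l)])

# Toy harmonic coefficients for l = 0 and l = 1
cl = angular_power_spectrum([np.array([2.0]),
                             np.array([1.0, 1.0, 1.0])])  # -> [4.0, 1.0]
```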
Abstract:
The dissertation contains five parts: an introduction, three major chapters, and a short conclusion. The first chapter starts from a survey and discussion of the literature on corporate law and financial development. The methods commonly used in these cross-sectional analyses are biased, as legal origins are no longer valid instruments; hence, model uncertainty becomes a salient problem. The Bayesian Model Averaging algorithm is applied to test the robustness of the empirical results in Djankov et al. (2008). The analysis finds that their constructed legal index is not robustly correlated with most of the various stock market outcome variables. The second chapter looks into the effects of minority shareholder protection in the corporate governance regime on entrepreneurs' ex ante incentives to undertake an IPO. Most of the current literature focuses on the beneficial effect of minority shareholder protection on valuation, while overlooking its private costs for the entrepreneur's control. As a result, when minority shareholder protection improves, the entrepreneur trades off the costs of monitoring against the benefits of cheap sources of finance. The theoretical predictions are empirically tested using panel data and the GMM-sys estimator. The third chapter investigates corporate law and corporate governance reform in China. Corporate law in China regards shareholder control as the means to the end of pursuing the interests of stakeholders, which is inefficient. The chapter combines recent developments in theories of the firm, i.e., the team production theory and the property rights theory, to address this problem. The enlightened shareholder value approach, which emphasizes the long-term valuation of the firm, should be adopted as the objective of listed firms. In addition, a move from the mandatory division of power between the shareholder meeting and the board meeting to a default regime is proposed.
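A common way to operationalize Bayesian Model Averaging of the kind applied in the first chapter is to approximate posterior model probabilities with BIC weights, w_i proportional to exp(-ΔBIC_i/2). A generic sketch (the chapter's actual prior and model space are not reproduced here):

```python
import numpy as np

def bma_weights(bic):
    """Approximate posterior model probabilities from BIC values:
    w_i proportional to exp(-(BIC_i - min BIC) / 2)."""
    bic = np.asarray(bic, dtype=float)
    rel = np.exp(-0.5 * (bic - bic.min()))   # shift by the minimum for stability
    return rel / rel.sum()

# Two equally good models share the weight; a clearly worse one gets little
w = bma_weights([100.0, 100.0, 110.0])
```

A coefficient's robustness is then judged by averaging its estimates across models with these weights rather than trusting any single specification.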
Abstract:
Since the publication of Russell and Burch's book in 1959, scientific research has never stopped improving itself with regard to the important issue of animal experimentation. The European Directive 2010/63/EU "On the protection of animals used for scientific purposes" focuses mainly on animal welfare, fixing Russell and Burch's 3Rs principles as the foundations of the document. In particular, the legislator clearly states the responsibility of the scientific community to increase the number of alternative methods to animal experimentation. The swine is considered a species of relevant interest for translational research and medicine due to its biological similarities with humans. The surgical community has, in fact, recognized the swine as an excellent model replicating the human cardiovascular system. Several wild-type and transgenic porcine models have been produced for biomedicine and translational research; among these, the cardiovascular ones are the most represented. The continuous involvement of the porcine animal model in biomedical research, as well as the continuous advances achieved using swine in translational medicine, support the need for alternative methods to animal experimentation involving pigs. The main purpose of the present work was to develop and characterize novel porcine alternative methods for cardiovascular translational biology/medicine. The work was mainly based on two different models: the first consisted of an ex vivo culture of porcine aortic cylinders, and the second of an in vitro culture of porcine aorta-derived progenitor cells. Both models were properly characterized, and the results indicated that they could be useful for the study of vascular biology. Moreover, both models aim to reduce the use of experimental animals and to refine animal-based trials.
In conclusion, the present research aims to be a small, but significant, contribution to the important and necessary field of study of alternative methods to animal experimentation.
Abstract:
CP-ESFR is an integrated European cooperative project on sodium-cooled fast reactors (SFR) carried out under the EURATOM 7th Framework Programme, bringing together the contributions of twenty-five European partners. CP-ESFR has the ambition of contributing to the establishment of a "solid scientific and technical basis for the sodium-cooled fast reactor, in order to accelerate practical developments for the safe management of long-lived radioactive waste, to improve safety performance, resource efficiency and the cost-effectiveness of nuclear energy, and to guarantee a solid and socially acceptable system of protection of the population and the environment against the effects of ionizing radiation." This thesis is a contribution to the development of models and methods, based on the use of system thermal-hydraulic codes, for the safety analysis of Generation IV liquid-metal-cooled reactors. The activity was carried out within the FP7 PELGRIMM project and in synergy with the MSE-ENEA Programme Agreement (PAR-2013). The FP7 PELGRIMM project aims at developing fuels containing minor actinides (1) by studying two different fuel forms, pellet (the subject of this thesis) and spherepac, and (2) by evaluating their impact on the CP-ESFR reactor design. The thesis proposes the development of a system thermal-hydraulic model of the primary and intermediate circuits of the reactor with the RELAP5-3D© code (INL, US). This code, qualified for the licensing of water-cooled nuclear reactors, was used to evaluate how the safety-relevant core parameters (e.g. cladding and fuel centerline temperatures, coolant temperature, etc.) vary when the fuel is used to "burn" minor actinides (long-lived radioactive isotopes contained in nuclear waste). This required a training phase on the code, its models and its capabilities.
Subsequently, the nodalization of the CP-ESFR plant was developed and qualified, and the results obtained were analyzed as the core configuration, the burnup and the type of fuel employed (i.e. different minor-actinide enrichment) were varied. The text is divided into six sections. The first provides an introduction to the technological development of fast reactors, highlights the context in which this thesis was carried out, and defines its objectives and structure. In the second section, the CP-ESFR plant is described, with attention to the core configuration and the primary system. The third section introduces the system thermal-hydraulic code used for the analyses and the model developed to reproduce the plant. Section four describes the tests and verifications performed to evaluate the performance of the model, the qualification of the nodalization, the main models and the most relevant correlations for the simulation, and the core configurations considered for the analysis of the results. The results obtained for the core safety parameters in normal operating conditions and for a selected transient are described in the fifth section. Finally, the conclusions of the activity are reported.
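One of the simplest safety-relevant quantities behind such system analyses is the steady-state coolant temperature rise across the core, given by an energy balance. A back-of-the-envelope sketch with illustrative numbers (not CP-ESFR design data):

```python
def coolant_temperature_rise(power_W, mass_flow_kg_s, cp_J_per_kgK):
    """Steady-state core energy balance: dT = Q / (m_dot * cp)."""
    return power_W / (mass_flow_kg_s * cp_J_per_kgK)

# Illustrative values: 1 MW channel power, 10 kg/s sodium flow,
# cp of liquid sodium taken as ~1250 J/(kg K)
dT = coolant_temperature_rise(1.0e6, 10.0, 1250.0)  # -> 80.0 K
```

A system code like RELAP5-3D resolves this balance locally and transiently, but the hand calculation is a useful sanity check on the nodalization.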
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models have been proposed to translate commanded into actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
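The dynamic constraint idea, commanded air-handling setpoints filtered through a linear second-order lag so the optimizer only sees achievable trajectories, can be sketched with a simple explicit-Euler simulation. The natural frequency and damping below are illustrative, not the values identified from the transient data:

```python
import numpy as np

def achieved_trajectory(commanded, dt, wn, zeta):
    """Filter a commanded signal u through y'' + 2*zeta*wn*y' + wn**2*y = wn**2*u
    (explicit Euler), mimicking a closed-loop actuator that lags its setpoint."""
    y = np.zeros_like(commanded, dtype=float)
    v = 0.0                                 # first derivative of y
    for k in range(1, len(commanded)):
        a = wn**2 * (commanded[k - 1] - y[k - 1]) - 2.0 * zeta * wn * v
        v += a * dt
        y[k] = y[k - 1] + v * dt
    return y

# A unit step command: the achieved value starts at 0 and settles toward 1,
# so a quasi-static optimizer demanding an instant jump would be infeasible
u = np.ones(5000)
y = achieved_trajectory(u, dt=0.001, wn=5.0, zeta=1.0)
```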