959 results for "semi-parametric approach".
Abstract:
GPS observables are subject to several errors. Among them, systematic errors have a great impact because they degrade the accuracy of the resulting positioning. These errors are mainly related to the GPS satellite orbits, multipath and atmospheric effects. Recently, a method has been proposed to mitigate them: the semiparametric model estimated with the penalised least squares technique (PLS). In this method, the errors are modelled as functions varying smoothly in time. The approach amounts to changing the stochastic model, into which the error functions are incorporated, and the results are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimising costs. The performance of the method was analysed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath. In the second experiment, a 102 km baseline was used, and the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively; in the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. These tests also showed a considerable improvement in ambiguity resolution with PLS relative to CLS, with a reduced data collection interval. © Springer-Verlag Berlin Heidelberg 2007.
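For illustration, a minimal penalised least squares sketch is given below, assuming the systematic errors enter as a single smooth signal g penalised by a second-difference roughness term; the design matrix, the smoothing weight lam and the toy data are illustrative, not the configuration used in the experiments described above.

```python
import numpy as np

def penalised_ls(y, A, lam):
    """Penalised least squares for y = A x + g + noise, with g smooth in time.

    y   : (n,) stacked observations (one epoch per row)
    A   : (n, p) design matrix of the parametric part (coordinates, ambiguities)
    lam : smoothing weight; larger values force g to vary more slowly
    """
    n, p = A.shape
    D = np.diff(np.eye(n), 2, axis=0)          # second-difference (roughness) operator
    X = np.hstack([A, np.eye(n)])              # parametric part + identity basis for g
    P = np.zeros((p + n, p + n))
    P[p:, p:] = lam * (D.T @ D)                # penalty acts on g only
    theta = np.linalg.solve(X.T @ X + P, X.T @ y)
    return theta[:p], theta[p:]                # parametric estimates, smooth error signal

# toy usage: one parametric signal plus a smooth multipath-like bias; a rapidly
# varying regressor keeps x and g separable (in GPS this role is played by the
# changing satellite geometry)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)
A = np.cos(40.0 * np.pi * t)[:, None]
y = A @ np.array([0.5]) + 0.02 * np.sin(2.0 * np.pi * t) + 0.005 * rng.standard_normal(300)
x_hat, g_hat = penalised_ls(y, A, lam=50.0)
print(x_hat)
```

Larger values of lam constrain g to vary more slowly, which is how the technique absorbs slowly varying multipath and atmospheric signals instead of attributing them to the coordinates.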
Abstract:
Aims. We construct a theoretical model to predict the number of orphan afterglows (OA) from gamma-ray bursts (GRBs) triggered by primordial metal-free (Pop III) stars that are expected to be observed by the Gaia mission. In particular, we consider primordial metal-free stars that were affected by radiation from other stars (Pop III.2) as a possible target. Methods. We use a semi-analytical approach that includes all relevant feedback effects to construct the cosmic star formation history and its connection with the cumulative number of GRBs. The OA events are generated with a Monte Carlo method, and realistic simulations of Gaia's scanning law are performed to derive the expected observation probability. Results. We show that Gaia can observe up to 2.28 +/- 0.88 off-axis afterglows and 2.78 +/- 1.41 on-axis afterglows during the five-year nominal mission. This implies that a non-negligible fraction (~10%) of the afterglows that may be observed by Gaia could have Pop III stars as progenitors.
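As a rough illustration of the Monte Carlo step, the toy sketch below separates on-axis from orphan events by comparing isotropically drawn viewing angles with an assumed jet opening-angle distribution; the distribution parameters and event count are hypothetical, and no light curves or Gaia scanning-law simulation are included.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grb = 100_000
theta_v = np.arccos(rng.uniform(0.0, 1.0, n_grb))                      # isotropic viewing angles
theta_j = rng.lognormal(mean=np.log(np.radians(5.0)), sigma=0.5, size=n_grb)  # assumed jet half-opening angles

on_axis = theta_v < theta_j      # line of sight inside the jet cone
orphan = ~on_axis                # only the late, de-beamed afterglow may be seen

print(f"on-axis fraction : {on_axis.mean():.4f}")
print(f"orphan fraction  : {orphan.mean():.4f}")
# folding these fractions with the Pop III GRB rate, afterglow light curves and
# the Gaia scanning law would give the expected number of detections
```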
Abstract:
We analyse the discontinuity-preserving problem in TV-L1 optical flow methods. Methods of this type typically create rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem is to inhibit the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey covers several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. We therefore study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes to solve the problem emerge. One of them consists in separating the pure TV process from the mitigating strategy; this has been used in previous work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighbourhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison of the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. A further, perhaps surprising, effect of these approaches is that they can cope with occlusions, which can easily be achieved by using strong regularization and high penalization at image contours.
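A minimal sketch of the kind of decreasing weight discussed above is given below, assuming the common exponential form g(|∇I|) = exp(-α|∇I|^β); the optional local adaptation of α only illustrates the idea behind the semi-automatic variant, and the parameter values are placeholders rather than the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def diffusion_weight(image, alpha=5.0, beta=1.0, adapt_window=None):
    """Per-pixel weight g(|grad I|) = exp(-alpha * |grad I|**beta), in [0, 1]."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    if adapt_window is not None:
        # semi-automatic idea: rescale alpha by the mean gradient in a local window
        local_mean = uniform_filter(grad_mag, size=adapt_window) + 1e-8
        alpha = alpha / local_mean
    return np.exp(-alpha * grad_mag ** beta)

# flat regions keep weights near 1 (full TV smoothing); strong contours get
# weights near 0, so flow discontinuities are not diffused across the edge
weights = diffusion_weight(np.random.rand(64, 64), alpha=5.0, beta=1.0, adapt_window=7)
```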
Abstract:
Wheel-rail contact analysis plays a fundamental role in the multibody modelling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to ensure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. Contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted, and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories allow the shape of the contact patch and the tangential forces, respectively, to be evaluated. The semi-elastic approach also considers the wheel and the rail as rigid bodies; however, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories give the shape of the contact patch and the tangential forces. Both multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. To obtain a complete description of the contact phenomena, local (or differential) contact models are needed: wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach ensures high generality and accuracy but still requires very large computational costs and memory consumption. Because of this, referring to the current state of the art, the integration between multibody and differential modelling is almost absent in the literature, especially in the railway field.
However, this integration is very important, because only differential modelling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modelling is the standard in the study of railway dynamics. In this thesis, some innovative wheel-rail contact models developed during the Ph.D. activity are described. Concerning the global models, two new models belonging to the semi-elastic approach are presented; they satisfy the following specifications: 1) the models have to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) the models have to handle generic railway tracks and generic wheel and rail profiles; 3) the models have to ensure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within multibody models without look-up tables; 5) the models have to ensure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points: both models reduce the dimension of the algebraic problem by means of suitable analytical techniques. This reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software, while the analytical approach ensures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications is presented: 1) the model has to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) the model has to handle generic railway tracks and generic wheel and rail profiles; 3) the model has to ensure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular it has to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within multibody models; 5) the model has to ensure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modelling; 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case, the most innovative aspects of the new local contact model concern the contact modelling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be essential for a good integration between multibody and differential modelling. The contact models were then inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. The model of the whole railway vehicle (multibody model and contact model) was implemented in the Matlab/Simulink environment. The multibody model was implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models use C S-functions; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) was then also implemented in Simpack Rail, a commercial multibody software for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics were carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model allowed an accurate and reliable validation of the new contact models. To conclude this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity. I would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
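As a small illustration of one ingredient of the semi-elastic approach described above, the sketch below evaluates a Hertzian normal-force law from the indentation; the stiffness value is purely illustrative, since in practice it depends on the local profile curvatures and elastic constants.

```python
import numpy as np

def hertz_normal_force(delta, k_hertz=1.0e11):
    """Normal contact force [N] from the indentation delta [m]; zero when separated."""
    delta = np.asarray(delta, dtype=float)
    return np.where(delta > 0.0, k_hertz * np.abs(delta) ** 1.5, 0.0)

# example: 0.1 mm of wheel-rail penetration
print(hertz_normal_force(1.0e-4))   # ~1.0e5 N with the illustrative stiffness above
```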
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties: it defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function, among other factors, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial-conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models is that they allow true state dependence, via the lagged variable, and unobserved heterogeneity, via the specification of individual effects, to be accounted for simultaneously. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using the information on net wealth provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted with four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model.
Results from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, overestimate such persistence.
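A schematic sketch of the pooled dynamic probit step is given below, assuming a long-format panel with hypothetical column names; it only illustrates how the lagged distress indicator enters the regressors, not the thesis code or the random-effects estimators.

```python
import pandas as pd
import statsmodels.api as sm

def pooled_dynamic_probit(df):
    """df: long-format panel with columns 'hh', 'year', 'distress' (0/1), 'income', 'age'."""
    df = df.sort_values(["hh", "year"]).copy()
    df["distress_lag"] = df.groupby("hh")["distress"].shift(1)   # state-dependence term
    df = df.dropna(subset=["distress_lag"])                      # first wave has no lag
    X = sm.add_constant(df[["distress_lag", "income", "age"]])
    return sm.Probit(df["distress"], X).fit(disp=False)

# a large, significant coefficient on 'distress_lag' in this pooled specification
# mixes true state dependence with unobserved household heterogeneity, which is
# why the pooled estimator tends to overstate persistence
```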
Abstract:
In this thesis we investigate some properties of one-dimensional quantum systems. From a theoretical point of view, quantum models in one dimension are particularly interesting because they are strongly interacting, since particles cannot avoid each other in their motion and collisions can never be ignored. Yet integrable models often yield new and non-trivial solutions that could not be found perturbatively. In this dissertation we focus on two important aspects of integrable one-dimensional models: their entanglement properties at equilibrium and their dynamical correlators after a quantum quench. The first part of the thesis is therefore devoted to the study of the entanglement entropy in one-dimensional integrable systems, with a special focus on the XYZ spin-1/2 chain, which, in addition to being integrable, is also an interacting model. We derive its Rényi entropies in the thermodynamic limit, and their behaviour in the different phases and for different values of the mass gap is analysed. In the second part of the thesis we study the dynamics of correlators after a quantum quench, which represent a powerful tool to measure how perturbations and signals propagate through a quantum chain. The emphasis is on the transverse-field Ising chain and the O(3) non-linear sigma model, both studied by means of a semi-classical approach. Moreover, in the last chapter we demonstrate a general result about the dynamics of correlation functions of local observables after a quantum quench in integrable systems: we show that if there are no long-range interactions in the final Hamiltonian, then the dynamics of the model (non-equal-time correlations) is described by the same statistical ensemble that describes its static properties (equal-time correlations).
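For reference, the Rényi entropies studied in the first part are the standard ones, defined from the reduced density matrix $\rho_A$ of a subsystem $A$ as
$$ S_n(\rho_A) = \frac{1}{1-n}\,\log \operatorname{Tr}\rho_A^{\,n}, \qquad S_1(\rho_A) \equiv \lim_{n\to 1} S_n(\rho_A) = -\operatorname{Tr}\,\rho_A \log \rho_A , $$
so the von Neumann entanglement entropy is recovered in the limit $n \to 1$.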
Abstract:
To aid the design of organic semiconductors, we study the charge transport properties of organic liquid crystals, i.e. hexabenzocoronene and a carbazole macrocycle, and single crystals, i.e. rubrene, indolocarbazole and benzothiophene derivatives (BTBT, BBBT). The aim is to find structure-property relationships linking the chemical structure as well as the morphology with the bulk charge carrier mobility of the compounds. To this end, molecular dynamics (MD) simulations are performed, yielding realistic equilibrated morphologies. Partial charges and molecular orbitals are calculated for single molecules in vacuum using quantum chemical methods. The molecular orbitals are then mapped onto the molecular positions and orientations, which allows calculation of the transfer integrals between nearest neighbors using the molecular orbital overlap method. We thus obtain realistic transfer integral distributions and their autocorrelations. In the case of organic crystals, the differences between two descriptions of charge transport, namely semi-classical dynamics (SCD) in the small-polaron limit and kinetic Monte Carlo (KMC) based on Marcus rates, are studied. The liquid crystals are investigated solely in the hopping limit. To simulate the charge dynamics using KMC, the centers of mass of the molecules are mapped onto lattice sites and the transfer integrals are used to compute the hopping rates. In the small-polaron limit, where the electronic wave function is spread over a limited number of neighboring molecules, the Schrödinger equation is solved numerically using a semi-classical approach. The results are compared between compounds and methods and, where available, with experimental data. The carbazole macrocycles form columnar structures arranged on a hexagonal lattice with side chains facing inwards, so the columns can closely approach each other, allowing inter-columnar and thus three-dimensional transport. When only intra-columnar transport is taken into account, the mobility is orders of magnitude lower than in the three-dimensional case. BTBT is a promising material for solution-processed organic field-effect transistors. We are able to show that, on the time scales of charge transport, static disorder due to slow side-chain motions is the main factor determining the mobility. The resulting broad transfer integral distributions modify the connectivity of the system, but sufficiently many fast percolation paths remain for the charges. Rubrene, indolocarbazole and BBBT are examples of crystals without significant static disorder. The high mobility of rubrene is explained by two main features: first, the shifted cofacial alignment of its molecules, and second, the high center-of-mass vibrational frequency. In comparison with SCD, only KMC based on Marcus rates is capable of describing neighbors with low coupling and of taking static disorder into account three-dimensionally. It is thus the method of choice for crystalline systems dominated by static disorder. However, it is inappropriate in the case of strong coupling and underestimates the mobility of well-ordered crystals. SCD, despite its one-dimensionality, is valuable for crystals with strong coupling and little disorder. It also allows correct treatment of dynamical effects, such as intermolecular vibrations of the molecules; rate equations are incapable of this because the simulations are performed on static snapshots.
We have thus shown the strengths and weaknesses of two state-of-the-art models used to study charge transport in organic compounds, partially developed a program to compute and visualize transfer integral distributions and other charge transport properties, and found structure-mobility relations for several promising organic semiconductors.
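For concreteness, the sketch below evaluates the Marcus hopping rate that underlies the KMC description, k = (J^2/ħ) * sqrt(π / (λ k_B T)) * exp(-(ΔE + λ)^2 / (4 λ k_B T)); the transfer integral, reorganisation energy and site-energy difference in the example are illustrative numbers, not values computed for the compounds studied here.

```python
import numpy as np

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(J, dE, lam, T=300.0):
    """Marcus hopping rate in 1/s; J (transfer integral), dE (site-energy
    difference E_j - E_i) and lam (reorganisation energy) are given in eV."""
    kbt = KB * T
    prefactor = (J ** 2 / HBAR) * np.sqrt(np.pi / (lam * kbt))
    return prefactor * np.exp(-(dE + lam) ** 2 / (4.0 * lam * kbt))

# e.g. J = 10 meV, no energetic disorder, lambda = 0.2 eV at room temperature
print(f"{marcus_rate(0.010, 0.0, 0.2):.3e} s^-1")
```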
Abstract:
In general, adaptive grid refinement increases the efficiency of numerical simulations without significantly degrading the accuracy of the result. However, it has not yet been investigated in which regions of the computational domain the spatial resolution can actually be coarsened without significantly affecting the accuracy of the result. This question is examined here for a concrete example of dry atmospheric convection, namely the simulation of warm air bubbles. For this purpose, a novel numerical model is developed that is tailored to this specific application. The compressible Euler equations are solved with a discontinuous Galerkin method. Time integration is performed with a semi-implicit method, and the dynamic adaptivity uses space-filling curves via the function library AMATOS. The numerical model is validated by means of a convergence study and five standard test cases. A method for comparing the accuracy of simulations with different refinement regions is introduced that does not require the existence of an exact solution. Essentially, this is done by comparing properties of the solution that depend strongly on the spatial resolution used. In the case of a rising warm air bubble, the additional numerical error introduced by adaptivity is smaller than 1% of the total numerical error when the adaptive simulation uses more than 50% of the elements of a uniform high-resolution simulation. Correspondingly, the adaptive simulation is almost twice as fast as the uniform simulation.
Abstract:
This work focuses on the analysis of sea-level change over the last century, based mainly on instrumental observations. Over this period, individual components of sea-level change are investigated, both at global and regional scales. Some of the geophysical processes responsible for current sea-level change, such as glacial isostatic adjustment and the present-day melting of terrestrial ice sources, have been modelled and compared with observations. A new value of global mean sea-level change based on tide gauge observations has been independently assessed at 1.5 mm/year, using corrections for glacial isostatic adjustment obtained with different models as a criterion for the tide gauge selection. The long-wavelength spatial variability of the main components of sea-level change has been investigated by means of traditional and new spectral methods. The complex non-linear trends and abrupt sea-level variations shown by tide gauge records have been addressed by applying different approaches to regional case studies. The Ensemble Empirical Mode Decomposition technique has been used to analyse tide gauge records from the Adriatic Sea to ascertain the existence of cyclic sea-level variations. An early-warning approach has been adopted to detect tipping points in sea-level records of the North East Pacific and their relationship with oceanic modes. Global sea-level projections to the year 2100 have been obtained by a semi-empirical approach based on an artificial neural network method. In addition, a model-based approach has been applied to the case of the Mediterranean Sea, obtaining sea-level projections to the year 2050.
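As a toy illustration of how a single-station rate feeds into such an assessment, the sketch below fits a linear trend to a synthetic annual-mean tide-gauge record and removes an assumed glacial isostatic adjustment contribution; the record, the GIA value and the sign convention are placeholders, not the data or models used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
rsl_mm = 1.8 * (years - years[0]) + 20.0 * rng.standard_normal(years.size)  # synthetic record
gia_rate_mm_yr = -0.3                                                       # assumed model value

observed_rate = np.polyfit(years, rsl_mm, 1)[0]      # relative sea-level trend, mm/yr
corrected_rate = observed_rate - gia_rate_mm_yr      # remove the modelled GIA contribution
print(f"GIA-corrected rate: {corrected_rate:.2f} mm/yr")
```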
Abstract:
In the first chapter we develop a theoretical model of food consumption and body weight with a novel assumption about human caloric expenditure (i.e. metabolism), in order to investigate why individuals can be rationally trapped in an excessive-weight equilibrium and why they struggle to lose weight even when offered incentives for weight loss. This assumption allows the theoretical model to have multiple equilibria and provides an explanation for why losing weight is so difficult even in the presence of incentives, without relying on rational addiction, time-inconsistent preferences or bounded rationality. In addition, we are able to characterize the circumstances under which a temporary incentive can create a persistent weight loss. In the second chapter we investigate the possible contributions of social norms and peer effects to the spread of obesity. In the recent literature, peer effects and social norms have been characterized as important pathways for the biological and behavioural spread of body weight, along with decreased food prices and physical activity. We add to this literature by proposing a novel concept of social norm related to what we define as social distortion in weight perception. The theoretical model shows that, in equilibrium, the effect of an increase in peers' weight on individual i's weight is unrelated to health concerns and is mainly associated with social concerns. Using regional data from England we show that this social component significantly influences individual weight. In the last chapter we investigate the relationship between body weight and employment probability. Using a semi-parametric regression we show that the employment probabilities of men and women do not follow a linear relationship with body mass index (BMI) but rather an inverted U-shaped one, peaking at a BMI well above the clinical threshold for overweight.
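A schematic sketch of a semi-parametric employment regression of this kind is given below, assuming hypothetical variable names and using a spline in BMI (patsy's bs() inside a statsmodels formula) as the flexible component; it is not the estimator used in the thesis.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def employment_bmi_fit(data):
    """data: individual-level records with 'employed' (0/1), 'bmi', 'age', 'educ'."""
    model = smf.glm(
        "employed ~ bs(bmi, df=5) + age + educ",   # bs(): B-spline basis from patsy
        data=data,
        family=sm.families.Binomial(),
    )
    return model.fit()

# predicting on a grid of BMI values (other covariates held fixed) traces the
# fitted employment-probability profile and locates its peak relative to BMI = 25
```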