879 results for time varying parameter model


Relevance: 100.00%

Abstract:

A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge applies to the servo valve dynamics, owing to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified, without losing physical validity, to adapt it to the real-time simulation requirements. The results of offline simulations were compared with on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control performed by the TCU. Because it is based on physical laws, the model simulates a plausible reaction of the system under every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure has been developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
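To illustrate why unmodified hydraulic pressure dynamics defeat fixed-step real-time solvers, the sketch below linearizes a single hydraulic chamber. It is a minimal illustration with assumed numbers, not the thesis model.

```python
# Minimal sketch (illustrative values, not the thesis model): a large oil
# bulk modulus makes chamber pressure a very fast state, which constrains
# the step size of fixed-step real-time solvers.

beta = 1.4e9    # oil bulk modulus [Pa], typical order of magnitude
V = 2.0e-5      # chamber volume [m^3], assumed
k_out = 5e-10   # linearized orifice outflow coefficient [m^3/(s*Pa)], assumed

# Linearized chamber dynamics: dp/dt = (beta / V) * (Q_in - k_out * p)
tau = V / (beta * k_out)   # pressure time constant, here ~29 microseconds

# Forward Euler is only stable for dt < 2 * tau, far below the ~1 ms step
# typical of HIL targets -- hence the simplification of the hydraulic model.
print(f"time constant: {tau:.1e} s, Euler stability limit: {2 * tau:.1e} s")
```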

Relevance: 100.00%

Abstract:

This thesis gives an overview of the history of gold per se and of gold as an investment good, and offers some institutional details about gold and other precious metal markets. The goal of this study is to investigate the role of gold as a store of value and a hedge against negative market movements in turbulent times. I investigate gold's ability to act as a safe haven during periods of financial stress by employing instrumental variable techniques that allow for time-varying conditional covariance. I find broad evidence supporting the view that gold acts as an anchor of stability during market downturns. During periods of high uncertainty and low stock market returns, gold tends to have higher-than-average excess returns. The effectiveness of gold as a safe haven is enhanced during periods of extreme crisis: the largest peaks are observed during the global financial crisis of 2007-2009 and, in particular, during the Lehman default (October 2008). A further goal of this thesis is to investigate whether gold provides protection from tail risk. I address the issue of asymmetric precious metal behavior conditional on stock market performance and provide empirical evidence about the contribution of gold to a portfolio's systematic skewness and kurtosis. I find that gold has positive coskewness with the market portfolio when the market is skewed to the left. Moreover, gold shows low cokurtosis with market returns during volatile periods. I therefore show that gold is a desirable investment good for risk-averse investors, since it tends to decrease both the probability of experiencing extreme bad outcomes and the magnitude of losses should such events occur. Gold thus bears important and under-researched characteristics as an asset class in its own right, which this thesis has contributed to addressing and unveiling.
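For reference, systematic coskewness and cokurtosis of an asset $i$ with the market $m$ are conventionally standardized as below; the exact estimators used in the thesis may differ.

$$
s_{i,m} = \frac{\mathbb{E}\big[(r_i-\mu_i)(r_m-\mu_m)^2\big]}{\sigma_i\,\sigma_m^2},
\qquad
k_{i,m} = \frac{\mathbb{E}\big[(r_i-\mu_i)(r_m-\mu_m)^3\big]}{\sigma_i\,\sigma_m^3}.
$$

Positive coskewness in left-skewed markets and low cokurtosis in volatile periods are precisely the properties that make an asset attractive to investors who dislike negative skewness and fat tails.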

Relevance: 100.00%

Abstract:

Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is accordingly divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are obtained through more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal results from both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
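Schematically, a steady-state power balance for laser cutting equates absorbed beam power to the enthalpy carried away by the removed material plus losses. The form below is a textbook-style illustration with assumed symbols, not the thesis model, which adds multi-layer absorption and directional heat flow:

$$
(1-R)\,P \;=\; \rho\, v\, w\, d\,\big[c_p\,(T_v - T_0) + L_m + L_v\big] \;+\; P_{\mathrm{loss}},
$$

where $R$ is the surface reflectivity, $P$ the beam power, $v$ the traverse speed, $w$ the kerf width, $d$ the cut depth, $T_v$ the vaporisation temperature, $L_m$ and $L_v$ the latent heats of melting and vaporisation, and $P_{\mathrm{loss}}$ the conduction losses.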

Relevance: 100.00%

Abstract:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to provide a review of the main tools of spatial econometrics and to present an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland values in the Midwestern U.S.A. over the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
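A time-space dynamic panel model with the three lags described is commonly written as follows; the notation here is assumed for illustration rather than taken from the thesis:

$$
y_t \;=\; \tau\, y_{t-1} \;+\; \rho\, W y_t \;+\; \eta\, W y_{t-1} \;+\; X_t \beta \;+\; \mu \;+\; \varepsilon_t,
$$

where $y_t$ stacks the cross-sectional observations at time $t$, $W$ is the spatial weights matrix, $\tau$, $\rho$ and $\eta$ are the temporal, spatial and spatiotemporal autoregressive parameters, and $\mu$ collects the fixed effects. With a row-normalized $W$, stability requires roughly $\tau + \rho + \eta < 1$, which is why fully stable and quasi-unit root designs are examined separately.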

Relevance: 100.00%

Abstract:

The present doctoral thesis is structured as a collection of three essays. The first essay, “SOC(HE)-Italy: a classification for graduate occupations”, presents the conceptual basis, the construction, the validation and the application to the Italian labour force of the occupational classification termed SOC(HE)-Italy. I developed this classification under the supervision of Kate Purcell during my period as a visiting research student at the Warwick Institute for Employment Research. This classification links the constituent tasks and duties of a particular job to the relevant knowledge and skills imparted via Higher Education (HE). It is based on the SOC(HE)2010, an occupational classification first proposed by Kate Purcell in 2013, but constructed differently. In the second essay, “Assessing the incidence and wage effects of overeducation among Italian graduates using a new measure for educational requirements”, I utilize this classification to build a valid and reliable measure of job requirements. The lack of an unbiased measure of this dimension constitutes one of the major obstacles to a generally accepted measurement of overeducation. Estimates of overeducation incidence and wage effects are run on AlmaLaurea data from the survey on graduates' career paths. I wrote this essay and obtained these estimates benefiting from the help and guidance of Giovanni Guidetti and Giulio Pedrini. The third and last essay, titled “Overeducation in the Italian labour market: clarifying the concepts and addressing the measurement error problem”, addresses a number of theoretical issues concerning the concepts of educational mismatch and overeducation. Using Istat data from the RCFL survey, I run estimates of the ORU model for the whole Italian labour force. To my knowledge, this is the first time such a model has been estimated on this population. In addition, I adopt the new measure of overeducation based on the SOC(HE)-Italy classification.
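For context, the ORU (over-, required and under-education) wage equation decomposes attained schooling into required, surplus and deficit years; a conventional specification (notation assumed here) is:

$$
\ln w_i \;=\; \alpha \;+\; \beta_r S_i^{r} \;+\; \beta_o S_i^{o} \;+\; \beta_u S_i^{u} \;+\; X_i'\gamma \;+\; \varepsilon_i,
$$

where $S_i^{r}$ are the years of schooling required by worker $i$'s job, $S_i^{o}$ the years in excess of that requirement, $S_i^{u}$ the years of deficit, and $X_i$ other controls. The quality of the job-requirement measure $S_i^{r}$, which the SOC(HE)-Italy classification is designed to improve, drives both the estimated incidence of overeducation and the estimated returns to surplus schooling $\beta_o$.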

Relevance: 100.00%

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all the units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited to cases where the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
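The unit-level regression underlying such a bounds test is a conditional error-correction model; schematically (notation assumed here, following the single-equation case):

$$
\Delta y_{it} \;=\; c_i \;+\; \pi_{y,i}\, y_{i,t-1} \;+\; \pi_{x,i}'\, x_{i,t-1} \;+\; \sum_{j=1}^{p-1} \psi_{ij}'\, \Delta z_{i,t-j} \;+\; u_{it},
$$

with $z_{it} = (y_{it}, x_{it}')'$ and the no-cointegration null $H_0\colon \pi_{y,i} = 0,\ \pi_{x,i} = 0$ tested unit by unit. Estimating the $N$ equations jointly as a SUR system lets the cross-equation error covariance absorb common shocks, which is what delivers unit-specific statistics that remain valid under cross-sectional dependence.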

Relevance: 100.00%

Abstract:

The development of next-generation microwave technology for backhauling systems is driven by an increasing capacity demand. In order to provide higher data rates and throughputs over a point-to-point link, a cost-effective performance improvement is enabled by an enhanced energy efficiency of the transmit power amplification stage, whereas a combination of spectrally efficient modulation formats and wider bandwidths must be supported by amplifiers that fulfil strict linearity constraints. An optimal trade-off between these conflicting requirements can be achieved by resorting to flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widely used solution, since it can be easily integrated into base stations. Its operation can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of the amplifier's nonlinear characteristic. In particular, the impact of memory effects becomes more significant, and their equalisation more challenging, as the discrete input signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of a pre-distorter implementation at RTL based on a novel polyphase architecture, which makes it capable of operating on very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to enable feasible pre-distortion of the multi-band, spectrally efficient complex signals carrying multiple channels that will be transmitted in future high-capacity, high-reliability microwave backhaul links.
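As background, a widely used baseband pre-distorter structure is the memory polynomial, which captures both nonlinearity and memory effects. The sketch below is a generic reference model with assumed orders, not the polyphase RTL architecture developed in the thesis.

```python
# Generic memory-polynomial pre-distorter identified by indirect learning.
# Orders K (nonlinearity) and Q (memory depth) are illustrative assumptions.
import numpy as np

def mp_basis(x, K=5, Q=3):
    """Regressors x[n-q] * |x[n-q]|^(k-1) for odd k = 1, 3, ..., K."""
    cols = []
    for q in range(Q):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
        for k in range(1, K + 1, 2):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

def identify_postinverse(pa_in, pa_out, K=5, Q=3):
    """Least-squares fit of a post-inverse of the PA; by the indirect
    learning argument the same coefficients then serve as pre-distorter."""
    g = np.mean(np.abs(pa_in)) / np.mean(np.abs(pa_out))   # normalize PA gain
    coeffs, *_ = np.linalg.lstsq(mp_basis(g * pa_out, K, Q), pa_in, rcond=None)
    return coeffs

def predistort(x, coeffs, K=5, Q=3):
    return mp_basis(x, K, Q) @ coeffs
```

A polyphase realization decomposes this filtering into M parallel branches, each running at 1/M of the signal rate, which is what allows very wideband signals to be processed within the clock-speed limits mentioned above.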

Relevance: 100.00%

Abstract:

This work deals with the photoinduced production of neutral pions very close to the threshold energy. Two goals are pursued: first, to test predictions of effective theories and models; second, to determine, for the first time, all relevant partial-wave amplitudes model-independently from measured observables. This method is also intended to be applied in the future at higher energies in the region of the nucleon resonances. Specifically, the execution and analysis of an experiment is presented which took place at the Mainz Microtron (MAMI) between 2010 and 2013 with a circularly polarized photon beam. The photon beam was obtained from the MAMI electron beam at a facility for producing energy-tagged bremsstrahlung. The hermetic 4pi CB/TAPS detector system served to detect the reaction products. For the first time in measurements of this kind, transversely polarized protons were also available; for this purpose, butanol is dynamically polarized in a dedicated apparatus, since molecular hydrogen cannot be polarized owing to its para configuration. Because butanol was used as the target material, with fewer than 5% of all produced pions originating from polarized protons, the treatment of background is a central task. Two methods of background separation are presented, of which the better one was applied in the analysis. Finally, a detailed assessment of systematic errors is given. The first-time use of transversely polarized protons provides access to previously unmeasured spin degrees of freedom. In combination with a complementary precursor experiment from 2008 with a linearly polarized photon beam, all complex s- and p-partial-wave amplitudes could be determined model-independently from the acquired data for the first time. In addition, substantial improvements to the experimental setup were achieved within the scope of this work; examples include an electron-beam polarimeter, a cellular CB multiplicity trigger, and significant improvements to the data-acquisition electronics and the trigger system, some of which are presented in this thesis.

Relevance: 100.00%

Abstract:

This paper presents parallel recursive algorithms for the computation of the inverse discrete Legendre transform and the inverse discrete Laguerre transform (IDLT). These recursive algorithms are derived using Clenshaw's recurrence formula, and they are implemented with a set of parallel digital filters with time-varying coefficients.
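As background, Clenshaw's recurrence evaluates a finite Legendre series without computing each polynomial explicitly. The backward recursion below is a minimal scalar sketch; the paper's contribution is mapping such recursions onto parallel digital filters with time-varying coefficients, which this sketch does not reproduce.

```python
def clenshaw_legendre(c, x):
    """Evaluate S(x) = sum_k c[k] * P_k(x) via Clenshaw's backward recurrence.

    Legendre recurrence: P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1),
    i.e. alpha_k(x) = (2k+1) x / (k+1) and beta_k = -k / (k+1).
    """
    n = len(c) - 1
    b1, b2 = 0.0, 0.0                      # b_{k+1}, b_{k+2}
    for k in range(n, 0, -1):
        alpha_k = (2 * k + 1) * x / (k + 1)
        beta_k1 = -(k + 1) / (k + 2)       # beta_{k+1}
        b1, b2 = c[k] + alpha_k * b1 + beta_k1 * b2, b1
    # S(x) = c_0 * P_0 + b_1 * P_1(x) + beta_1 * P_0 * b_2, with P_0 = 1, P_1 = x
    return c[0] + x * b1 - 0.5 * b2

# Quick check against P_2(x) = (3x^2 - 1)/2:
x = 0.7
assert abs(clenshaw_legendre([0.0, 0.0, 1.0], x) - (3 * x**2 - 1) / 2) < 1e-12
```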

Relevance: 100.00%

Abstract:

A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
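To make the construction concrete: running Clenshaw's backward recursion over a time-reversed coefficient sequence turns it into a causal second-order difference equation, i.e. a recursive filter with time-varying coefficients (schematic form, notation assumed):

$$
b[n] \;=\; u[n] \;+\; \alpha_{N-n}(x)\, b[n-1] \;+\; \beta_{N-n+1}(x)\, b[n-2],
$$

where $u[n] = c_{N-n}$ is the reversed coefficient sequence and $\alpha_k$, $\beta_k$ are the coefficients of the three-term recurrence of the chosen polynomial family; after $N+1$ steps, the transform value is assembled from the final two filter states. Fixing the recurrence to that of the Legendre polynomials gives the discrete Legendre transform case used as the illustration.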

Relevance: 100.00%

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated, and volumetric efficiency overestimated, by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period, but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space, so that a nearest neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space, as compared to the engine operating parameter space. This more uniform, smaller, shrunken model input space might explain how the nonparametric reduced dimensionality approach could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
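The pattern of the nearest-neighbor step can be sketched as follows; the features, data, and transform here are invented stand-ins, since the actual mapping to the fundamental space is derived from the dimensional (GT-Power) modeling described above.

```python
# Illustrative sketch (synthetic data): nearest-neighbor emissions prediction
# in a transformed, lower-dimensional input space. Feature columns are assumed
# stand-ins, e.g. fuel-oxygen ratio and EGR fraction.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Steady-state training points already mapped into the "fundamental" space.
X_train = rng.uniform([0.2, 0.0], [0.8, 0.4], size=(500, 2))
pm_train = 0.05 * np.exp(4.0 * X_train[:, 0]) * (1 + X_train[:, 1])  # synthetic PM

knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_train, pm_train)

# Transient operating points, mapped into the same space, are then predicted
# from their steady-state neighbors.
X_transient = np.array([[0.55, 0.25], [0.75, 0.35]])
print(knn.predict(X_transient))
```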

Relevance: 100.00%

Abstract:

Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season: changes in the sources of air pollution and in meteorology can alter the characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987-2000, which includes data for 100 U.S. cities. At the national level, a 10 µg/m³ increase in PM10 at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical region finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and for guiding future analyses of particle constituent data.
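Schematically, the city-specific first stage of such an analysis is a log-linear model with a season-specific pollution coefficient, whose estimates are then pooled hierarchically across cities; the notation below is assumed for illustration:

$$
Y_t^{c} \sim \mathrm{Poisson}(\mu_t^{c}), \qquad
\log \mu_t^{c} \;=\; \beta^{c}_{s(t)}\, \mathrm{PM10}^{c}_{t-1} \;+\; f(\text{time}) \;+\; g(\text{weather}_t),
$$

where $Y_t^c$ is the daily mortality count in city $c$, $s(t)$ indexes the season of day $t$, and $f$, $g$ are smooth confounder adjustments; the $\beta^c_{s}$ receive a hierarchical prior, so national seasonal effects are precision-weighted averages of the city-level estimates.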

Relevance: 100.00%

Abstract:

BACKGROUND: We evaluated the ability of CA15-3 and alkaline phosphatase (ALP) to predict breast cancer recurrence. PATIENTS AND METHODS: Data from seven International Breast Cancer Study Group trials were combined. The primary end point was relapse-free survival (RFS; time from randomization to first breast cancer recurrence), and analyses included 3953 patients with one or more CA15-3 and ALP measurements during their RFS period. CA15-3 was considered abnormal if >30 U/ml or >50% higher than the first value recorded; ALP was recorded as normal, abnormal, or equivocal. Cox proportional hazards models with a time-varying indicator for abnormal CA15-3 and/or ALP were utilized. RESULTS: Overall, 784 patients (20%) had a recurrence, before which 274 (35%) had one or more abnormal CA15-3 values and 35 (4%) had one or more abnormal ALP values. Risk of recurrence increased by 30% for patients with abnormal CA15-3 [hazard ratio (HR) = 1.30; P = 0.0005] and by 4% for those with abnormal ALP (HR = 1.04; P = 0.82). Recurrence risk was higher for patients with either biomarker abnormal (HR = 2.40; P < 0.0001) and highest with both abnormal (HR = 4.69; P < 0.0001). ALP better predicted liver recurrence. CONCLUSIONS: CA15-3 was better able to predict breast cancer recurrence than ALP, but use of both biomarkers together provided a better early indicator of recurrence. Whether routine use of these biomarkers improves overall survival remains an open question.
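A time-varying biomarker indicator enters a Cox model through a long (start/stop) data layout, in which a patient contributes one row per interval of constant covariate value. The sketch below, with invented toy data and column names, shows the pattern using the lifelines library; it is not the study's analysis code.

```python
# Toy illustration (invented data) of a Cox model with time-varying
# abnormal-biomarker indicators, in start/stop (counting-process) format.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.DataFrame({
    "id":        [1, 1, 2, 3, 3, 4],
    "start":     [0.0, 10.0, 0.0, 0.0, 8.0, 0.0],   # months of follow-up
    "stop":      [10.0, 24.0, 18.0, 8.0, 30.0, 36.0],
    "ca153_abn": [0, 1, 0, 0, 1, 0],   # 1 once CA15-3 crosses the threshold
    "alp_abn":   [0, 0, 1, 0, 0, 0],
    "event":     [0, 1, 1, 0, 0, 0],   # recurrence at the end of the row
})

# Small penalizer only because this toy dataset is tiny.
ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```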

Relevance: 100.00%

Abstract:

This doctoral thesis presents experimental results, along with a synthesis with computational/theoretical results, toward the development of a reliable heat transfer correlation for a specific annular condensation flow regime inside a vertical tube. For fully condensing flows of pure vapor (FC-72) inside a vertical cylindrical tube of 6.6 mm diameter and 0.7 m length, the experimental measurements are shown to yield values of the average heat transfer coefficient and the approximate length of full condensation. The experimental conditions cover: mass flux G over a range of 2.9 kg/m²-s ≤ G ≤ 87.7 kg/m²-s; temperature difference ∆T (saturation temperature at the inlet pressure minus the mean condensing surface temperature) of 5 °C to 45 °C; and cases for which the length of full condensation xFC is in the range 0 < xFC < 0.7 m. The range of flow conditions over which there is good agreement (within 15%) with the theory and its modeling assumptions has been identified, as have the ranges of flow conditions for which there are significant discrepancies with theory (between 15-30% and greater than 30%). The thesis also reports a brief set of key experimental results on the sensitivity of the flow to time-varying or quasi-steady (i.e., steady in the mean) impositions of pressure at both the inlet and the outlet. The experimental results support the updated theoretical/computational finding that gravity-dominated condensing flows do not allow such elliptic impositions.

Relevance: 100.00%

Abstract:

The use of conventional orifice-plate meters is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) nature of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value fF) are termed pulsatile if the fluctuations correlate directly with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have amplitudes significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency fcoriolis >> fF), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with these experimental measurements of the mean mass flow rate, the computational approach presented here is shown to be effective in converting the dynamic pressure difference signal into the desired dynamic flow rate signal. The proposed approach is deemed reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
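A quasi-steady starting point for the pressure-to-flow conversion is the standard orifice-plate relation, applied instant by instant and anchored to the Coriolis-meter mean. This schematic form (symbols assumed) omits the dynamic corrections that the study's computational approach addresses:

$$
\dot m(t) \;\approx\; C_d\, A_o \sqrt{2\,\rho\,\Delta p(t)},
$$

where $\dot m(t)$ is the instantaneous mass flow rate, $C_d$ the discharge coefficient, $A_o$ the orifice area, $\rho$ the (effectively constant) gas density, and $\Delta p(t)$ the measured dynamic pressure difference; matching the time average of $\dot m(t)$ to the Coriolis-meter reading fixes $C_d$.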