852 results for H-Infinity Time-Varying Adaptive Algorithm
Abstract:
The topic of this thesis is the feedback stabilization of the attitude of magnetically actuated spacecraft. The use of magnetic coils is an attractive solution for generating control torques on small satellites flying inclined low Earth orbits, since magnetic control systems offer reduced weight and cost, higher reliability, and lower power requirements than other kinds of actuators. At the same time, the possibility of smoothly modulating control torques reduces coupling of the attitude control system with flexible modes, thus preserving pointing precision better than pulse-modulated thrusters do. The principle, based on the interaction between the Earth's magnetic field and the magnetic field generated by the set of coils, introduces an inherent nonlinearity, because control torques can be delivered only in the plane orthogonal to the direction of the geomagnetic field vector. In other words, the system is underactuated: the rotational degrees of freedom of the spacecraft, modeled as a rigid body, exceed the number of independent control actions. The control of underactuated spacecraft is also interesting in the case of actuator failure, e.g. after the loss of a reaction wheel in a three-axis stabilized spacecraft with no redundancy. Well-known control strategies are no longer applicable in this case, for both regulation and tracking, so new methods have been suggested for tackling this particular problem. The main contribution of this thesis is to propose continuous time-varying controllers that globally stabilize the attitude of a spacecraft, both when magnetic torquers alone are used and when a momentum wheel supports magnetic control in order to overcome the inherent underactuation.
A kinematic maneuver planning scheme, stability analyses, and detailed simulation results are also provided, with new theoretical developments and particular attention toward application considerations.
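The underactuation described in this abstract follows directly from the magnetic torque law: whatever dipole moment m the coils produce, the resulting torque T = m × B is orthogonal to the geomagnetic field B, so no torque can be generated along the local field direction. A minimal sketch (function name and sample values are illustrative, not from the thesis):

```python
def magnetic_torque(m, B):
    """Torque on a magnetic dipole m (A*m^2) in a field B (T): T = m x B.

    The cross product is always orthogonal to B, so no torque component
    can ever be delivered along the local geomagnetic field direction:
    this is the inherent underactuation of purely magnetic attitude control.
    """
    return (m[1] * B[2] - m[2] * B[1],
            m[2] * B[0] - m[0] * B[2],
            m[0] * B[1] - m[1] * B[0])
```

For any commanded dipole, the dot product of the resulting torque with B is zero, which is the geometric statement of the constraint.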
Abstract:
This thesis gives an overview of the history of gold per se and of gold as an investment good, and offers some institutional details about gold and other precious metal markets. The goal of this study is to investigate the role of gold as a store of value and as a hedge against negative market movements in turbulent times. I investigate gold's ability to act as a safe haven during periods of financial stress by employing instrumental variable techniques that allow for time-varying conditional covariance. I find broad evidence supporting the view that gold acts as an anchor of stability during market downturns: during periods of high uncertainty and low stock market returns, gold tends to have higher than average excess returns. The effectiveness of gold as a safe haven is enhanced during periods of extreme crisis: the largest peaks are observed during the global financial crisis of 2007-2009 and, in particular, around the Lehman default (October 2008). A further goal of this thesis is to investigate whether gold provides protection from tail risk. I address the issue of asymmetric precious metal behavior conditioned on stock market performance and provide empirical evidence on the contribution of gold to a portfolio's systematic skewness and kurtosis. I find that gold has positive coskewness with the market portfolio when the market is skewed to the left. Moreover, gold shows low cokurtosis with market returns during volatile periods. I therefore show that gold is a desirable investment good for risk-averse investors, since it tends to decrease both the probability of experiencing extreme bad outcomes and the magnitude of losses when such events occur. Gold thus bears very important and under-researched characteristics as an asset class per se, which this thesis contributes to addressing and unveiling.
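The systematic-skewness contribution mentioned above is usually measured by standardized coskewness, E[(x − μx)(m − μm)²] / (σx · σm²), where x is the asset return and m the market return. A minimal sketch of that standard estimator (the function name and any data are illustrative, not from the thesis):

```python
def coskewness(x, m):
    """Standardized coskewness of asset returns x with market returns m:
    E[(x - mean_x) * (m - mean_m)^2] / (std_x * var_m).
    A positive value means the asset tends to pay off when the market
    has large (squared) deviations, improving portfolio skewness."""
    n = len(x)
    mx = sum(x) / n
    mm = sum(m) / n
    num = sum((a - mx) * (b - mm) ** 2 for a, b in zip(x, m)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    vm = sum((b - mm) ** 2 for b in m) / n
    return num / (sx * vm)
```

An asset perfectly tracking a symmetric market has zero coskewness; an asset that is high precisely when the market deviates strongly has positive coskewness.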
Abstract:
This paper presents parallel recursive algorithms for the computation of the inverse discrete Legendre transform and the inverse discrete Laguerre transform (IDLT). These recursive algorithms are derived using Clenshaw's recurrence formula and are implemented with a set of parallel digital filters with time-varying coefficients.
Abstract:
A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
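Clenshaw's recurrence replaces explicit evaluation of each basis polynomial with a single backward recursion, which is what maps naturally onto recursive filters with time-varying coefficients. A minimal sketch for evaluating a Legendre series (this illustrates the recurrence only, not the paper's filter structure):

```python
def clenshaw_legendre(c, x):
    """Evaluate sum_k c[k] * P_k(x) via Clenshaw's recurrence.

    Legendre three-term recurrence:
        P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1)
    Backward recursion with b_{n+1} = b_{n+2} = 0:
        b_k = c_k + (2k+1) x b_{k+1} / (k+1) - (k+1) b_{k+2} / (k+2)
    Since P_1(x) = x * P_0(x), the series collapses to b_0.
    """
    n = len(c) - 1
    b1 = b2 = 0.0
    for k in range(n, 0, -1):
        b1, b2 = c[k] + (2 * k + 1) * x * b1 / (k + 1) - (k + 1) * b2 / (k + 2), b1
    return c[0] + x * b1 - 0.5 * b2
```

Each step of the loop is a second-order recursion whose coefficients depend on the index k, i.e. a digital filter with time-varying coefficients.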
Abstract:
The variability of the Atlantic meridional overturning circulation (AMOC) strength is investigated in control experiments and in transient simulations of up to the last millennium using the low-resolution Community Climate System Model version 3. In the transient simulations the AMOC exhibits enhanced low-frequency variability, mainly caused by infrequent transitions between two semi-stable circulation states which amount to a 10 percent change of the maximum overturning. One transition is also found in a control experiment, but the time-varying external forcing significantly increases the probability of the occurrence of such events even though it has no direct, linear impact on the AMOC. The transition from a high to a low AMOC state starts with a reduction of the convection in the Labrador and Irminger Seas and goes along with a changed barotropic circulation of both gyres in the North Atlantic and a gradual strengthening of the convection in the Greenland-Iceland-Norwegian (GIN) Seas. In contrast, the transition from a weak to a strong overturning is induced by decreased mixing in the GIN Seas. As a consequence of the transition, regional sea surface temperature (SST) anomalies are found in the midlatitude North Atlantic and in the convection regions with an amplitude of up to 3 K. The atmospheric response to the SST forcing associated with the transition indicates a significant impact on the Scandinavian surface air temperature (SAT) on the order of 1 K. Thus, the changes of the ocean circulation make a major contribution to the Scandinavian SAT variability in the last millennium.
Abstract:
Using a highly resolved atmospheric general circulation model, the impact of different glacial boundary conditions on precipitation and atmospheric dynamics in the North Atlantic region is investigated. Six 30-yr time slice experiments of the Last Glacial Maximum at 21 thousand years before present (ka BP) and of a less pronounced glacial state, the Middle Weichselian (65 ka BP), are compared to analyse the sensitivity to changes in the ice sheet distribution, in the radiative forcing and in the prescribed time-varying sea surface temperature and sea ice, which are taken from a lower-resolution but fully coupled atmosphere-ocean general circulation model. The strongest differences are found for simulations with different heights of the Laurentide ice sheet. A high surface elevation of the Laurentide ice sheet leads to a southward displacement of the jet stream and the storm track in the North Atlantic region. These changes in the atmospheric dynamics generate a band of increased precipitation in the mid-latitudes across the Atlantic to southern Europe in winter, while the precipitation pattern in summer is only marginally affected. The impact of the radiative forcing differences between the two glacial periods and of the prescribed time-varying sea surface temperatures and sea ice is of second-order importance compared with that of the Laurentide ice sheet: they affect the atmospheric dynamics and precipitation in a similar but less pronounced manner than the topographic changes.
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fitted than other attempts in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, as comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
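The likelihood equivalence invoked above can be seen with a single constant-hazard segment: a Poisson log-likelihood for the event count with a log(time-at-risk) offset differs from the piecewise-exponential survival log-likelihood only by a term that does not involve the hazard, so both are maximized by the same hazard estimate. A sketch of that algebraic fact (not the paper's multilevel model):

```python
import math

def poisson_loglik(log_h, d, T):
    """Poisson log-likelihood (constant dropped) for d events with a
    log(T) offset: the mean is T * exp(log_h)."""
    mu = T * math.exp(log_h)
    return d * math.log(mu) - mu

def exponential_survival_loglik(log_h, d, T):
    """Constant-hazard survival log-likelihood: each of d events
    contributes log(h); total time-at-risk T contributes -h * T."""
    h = math.exp(log_h)
    return d * math.log(h) - h * T
```

The two functions differ by d·log(T), a constant in log_h, so maximum-likelihood fitting of either yields the hazard estimate d/T for the segment.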
Abstract:
Quantifying the health effects associated with simultaneous exposure to many air pollutants is now a research priority of the US EPA. Bayesian hierarchical models (BHM) have been extensively used in multisite time series studies of air pollution and health to estimate health effects of a single pollutant adjusted for potential confounding of other pollutants and other time-varying factors. However, when the scientific goal is to estimate the impacts of many pollutants jointly, a straightforward application of BHM is challenged by the need to specify a random-effect distribution on a high-dimensional vector of nuisance parameters, which often do not have an easy interpretation. In this paper we introduce a new BHM formulation, which we call "reduced BHM", aimed at analyzing clustered data sets in the presence of a large number of random effects that are not of primary scientific interest. At the first stage of the reduced BHM, we calculate the integrated likelihood of the parameter of interest (e.g. excess number of deaths attributed to simultaneous exposure to high levels of many pollutants). At the second stage, we specify a flexible random-effect distribution directly on the parameter of interest. The reduced BHM overcomes many of the challenges in the specification and implementation of full BHM in the context of a large number of nuisance parameters. In simulation studies we show that the reduced BHM performs comparably to the full BHM in many scenarios, and even performs better in some cases. Methods are applied to estimate location-specific and overall relative risks of cardiovascular hospital admissions associated with simultaneous exposure to elevated levels of particulate matter and ozone in 51 US counties during the period 1999-2005.
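The second-stage idea, placing a random-effect distribution directly on the scalar parameter of interest, can be illustrated in its simplest form: location-specific estimates beta_hat_i with known sampling variances v_i and a normal random-effect distribution. A minimal method-of-moments sketch under those assumptions (the reduced BHM itself uses an integrated likelihood and a flexible random-effect distribution, neither of which is reproduced here):

```python
def two_stage_pool(beta_hat, v):
    """Normal second-stage model beta_hat_i ~ N(mu, v_i + tau2):
    moment estimate of the between-location variance tau2, a precision-
    weighted pooled mean mu, and shrinkage of each location-specific
    estimate toward mu."""
    n = len(beta_hat)
    mean = sum(beta_hat) / n
    # sample variance of the estimates minus average sampling variance
    tau2 = max(0.0, sum((b - mean) ** 2 for b in beta_hat) / (n - 1) - sum(v) / n)
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * bi for wi, bi in zip(w, beta_hat)) / sum(w)
    if tau2 == 0.0:
        return mu, tau2, [mu] * n
    shrunk = [(bi / vi + mu / tau2) / (1.0 / vi + 1.0 / tau2)
              for bi, vi in zip(beta_hat, v)]
    return mu, tau2, shrunk
```

Each location's estimate is pulled toward the overall mean in proportion to its sampling noise, which is the behavior the full and reduced BHM share.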
Abstract:
Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season: changes in the sources of air pollution and in meteorology can alter the characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987-2000, which includes data for 100 U.S. cities. At the national level, a 10 µg/m3 increase in PM10 at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical region finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and guiding future analyses of particle constituent data.
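Percent-increase figures like those quoted above come from exponentiating the estimated log relative rate over the 10-unit exposure increment. A one-line sketch of that conversion (the coefficient value below is illustrative):

```python
import math

def percent_increase(beta, delta=10.0):
    """Percent increase in mortality for a `delta` µg/m3 rise in PM10,
    given the log relative rate `beta` per µg/m3:
    100 * (exp(beta * delta) - 1)."""
    return 100.0 * (math.exp(beta * delta) - 1.0)
```

For small coefficients this is approximately 100·beta·delta, which is why a log relative rate of 0.00015 per µg/m3 reads as roughly a 0.15 percent increase per 10 µg/m3.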
Abstract:
BACKGROUND: We evaluated the ability of CA15-3 and alkaline phosphatase (ALP) to predict breast cancer recurrence. PATIENTS AND METHODS: Data from seven International Breast Cancer Study Group trials were combined. The primary end point was relapse-free survival (RFS) (time from randomization to first breast cancer recurrence), and analyses included 3953 patients with one or more CA15-3 and ALP measurements during their RFS period. CA15-3 was considered abnormal if >30 U/ml or >50% higher than the first value recorded; ALP was recorded as normal, abnormal, or equivocal. Cox proportional hazards models with a time-varying indicator for abnormal CA15-3 and/or ALP were utilized. RESULTS: Overall, 784 patients (20%) had a recurrence, before which 274 (35%) had one or more abnormal CA15-3 values and 35 (4%) had one or more abnormal ALP values. Risk of recurrence increased by 30% for patients with abnormal CA15-3 [hazard ratio (HR) = 1.30; P = 0.0005] and by 4% for those with abnormal ALP (HR = 1.04; P = 0.82). Recurrence risk was greatest for patients with either biomarker abnormal (HR = 2.40; P < 0.0001) and with both abnormal (HR = 4.69; P < 0.0001). ALP better predicted liver recurrence. CONCLUSIONS: CA15-3 was better able to predict breast cancer recurrence than ALP, but use of both biomarkers together provided a better early indicator of recurrence. Whether routine use of these biomarkers improves overall survival remains an open question.
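A Cox model with a time-varying abnormal-biomarker indicator is typically fitted on counting-process (start, stop] intervals, with the indicator switching on at the first abnormal measurement and staying on. A minimal data-preparation sketch (the function and its row layout are illustrative; the trial's actual coding rules, e.g. the >30 U/ml threshold logic, are not reproduced):

```python
def counting_process_rows(follow_up, event, first_abnormal=None):
    """Split one patient's follow-up into (start, stop, abnormal, event)
    rows for a Cox model with a time-varying abnormal-marker indicator.
    `first_abnormal` is the time of the first abnormal measurement, if any;
    the event flag belongs only to the final interval."""
    if first_abnormal is None or first_abnormal >= follow_up:
        return [(0.0, follow_up, 0, event)]
    return [(0.0, first_abnormal, 0, 0),
            (first_abnormal, follow_up, 1, event)]
```

Each patient contributes one row while the marker is normal and a second row, with the indicator set, from the first abnormal value until recurrence or censoring.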
Abstract:
This doctoral thesis presents experimental results, together with a suitable synthesis of computational/theoretical results, towards the development of a reliable heat transfer correlation for a specific annular condensation flow regime inside a vertical tube. For fully condensing flows of pure vapor (FC-72) inside a vertical cylindrical tube of 6.6 mm diameter and 0.7 m length, the experimental measurements are shown to yield values of the average heat transfer coefficient and the approximate length of full condensation. The experimental conditions cover: mass flux G over a range of 2.9 kg/m2-s ≤ G ≤ 87.7 kg/m2-s, temperature difference ∆T (saturation temperature at the inlet pressure minus the mean condensing surface temperature) of 5 °C to 45 °C, and cases for which the length of full condensation xFC is in the range 0 < xFC < 0.7 m. The range of flow conditions over which there is good agreement (within 15%) with the theory and its modeling assumptions has been identified. Additionally, the ranges of flow conditions for which there are significant discrepancies (between 15-30% and greater than 30%) with theory have also been identified. The thesis also reports a brief set of key experimental results regarding the sensitivity of the flow to time-varying or quasi-steady (i.e. steady in the mean) impositions of pressure at both the inlet and the outlet. The experimental results support the updated theoretical/computational results that gravity-dominated condensing flows do not allow such elliptic impositions.
Abstract:
Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce non-line-of-sight (NLOS) localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems, such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the spatial correlation of channel coefficients across antenna elements is proposed as a metric for NLOS identification. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed.
This model can be used to compute the spatial correlation between antenna pairs separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated, an assumption validated via the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems that use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performances are evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
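The first metric above, spatial correlation of channel coefficients across antenna elements, can be estimated from complex channel samples in the usual way: a correlation magnitude near one suggests a dominant LOS-like path, while rich NLOS scattering decorrelates the elements. A minimal sketch of the sample estimator (data and any decision threshold are illustrative, not from the thesis):

```python
def spatial_correlation(h1, h2):
    """Sample complex correlation coefficient between channel coefficient
    sequences observed at two antenna elements:
    rho = E[(h1 - m1)(h2 - m2)*] / sqrt(E|h1 - m1|^2 E|h2 - m2|^2).
    |rho| close to 1 indicates highly correlated elements (LOS-like);
    low |rho| indicates decorrelation from rich NLOS scattering."""
    n = len(h1)
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    num = sum((a - m1) * (b - m2).conjugate() for a, b in zip(h1, h2))
    den = (sum(abs(a - m1) ** 2 for a in h1)
           * sum(abs(b - m2) ** 2 for b in h2)) ** 0.5
    return num / den
```

In an NLOS identification scheme, |rho| would be compared against a calibrated threshold.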
Abstract:
This study investigated the effectiveness of incorporating several new instructional strategies into an International Baccalaureate (IB) chemistry course in terms of how they supported high school seniors’ understanding of electrochemistry. The three new methods used were (a) providing opportunities for visualization of particle movement by student manipulation of physical models and interactive computer simulations, (b) explicitly addressing common misconceptions identified in the literature, and (c) teaching an algorithmic, step-wise approach for determining the products of an aqueous solution electrolysis. Changes in student understanding were assessed through test scores on both internally and externally administered exams over a two-year period. It was found that visualization practice and explicit misconception instruction improved student understanding, but the effect was more apparent in the short-term. The data suggested that instruction time spent on algorithm practice was insufficient to cause significant test score improvement. There was, however, a substantial increase in the percentage of the experimental group students who chose to answer an optional electrochemistry-related external exam question, indicating an increase in student confidence. Implications for future instruction are discussed.
Abstract:
The use of a conventional orifice-plate meter is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) nature of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value fF) are termed pulsatile if the fluctuations correlate directly with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have amplitudes significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency fcoriolis >> fF), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with the experimental measurements of the mean mass flow rate of these pulsatile flows, the computational approach presented here is shown to be effective in converting the dynamic pressure difference signal into the desired dynamic flow rate signal. The proposed approach is reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
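For reference, the quasi-steady orifice-plate relation that such a dynamic calibration starts from converts a pressure difference into a mass flow rate; the study's contribution lies in correcting this quasi-steady estimate for pulsatile conditions against Coriolis-meter mean-flow measurements. A sketch of the steady relation with hypothetical geometry and fluid values (not the study's meters):

```python
import math

def orifice_mass_flow(dp, rho=1.2, cd=0.62, d_orifice=0.01, d_pipe=0.02):
    """Quasi-steady orifice-plate relation:
        m_dot = Cd * A_o * sqrt(2 * rho * dp / (1 - beta^4)),
    with beta the orifice-to-pipe diameter ratio. All default parameter
    values here are hypothetical, for illustration only."""
    beta = d_orifice / d_pipe
    area = math.pi * d_orifice ** 2 / 4
    return cd * area * math.sqrt(2 * rho * dp / (1 - beta ** 4))
```

The square-root dependence on the pressure difference is what makes a naive sample-by-sample inversion inadequate for large-amplitude pulsatile signals, motivating the computational correction.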
Abstract:
This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters of the thermal storage system.
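Sharp-interface melt-front trackers of the kind described above are commonly verified against the classical one-phase Stefan solution, in which the front advances as s(t) = 2λ√(αt) with λ fixed by the Stefan number Ste through λ·exp(λ²)·erf(λ) = Ste/√π. A verification-style sketch of that benchmark (a standard analytical check, not the thesis's actual simulation procedure):

```python
import math

def stefan_lambda(ste, tol=1e-10):
    """Solve lambda * exp(lambda^2) * erf(lambda) = Ste / sqrt(pi)
    (classical one-phase Stefan melting problem) by bisection.
    The left-hand side is increasing in lambda, so bisection suffices."""
    f = lambda lam: lam * math.exp(lam ** 2) * math.erf(lam) - ste / math.sqrt(math.pi)
    lo, hi = 0.0, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def melt_front(t, alpha, ste):
    """Analytical melt-front position s(t) = 2 * lambda * sqrt(alpha * t),
    where alpha is the liquid thermal diffusivity."""
    return 2.0 * stefan_lambda(ste) * math.sqrt(alpha * t)
```

A numerical interface tracker reproducing the √t front growth of this solution for small Stefan numbers is a standard correctness check before tackling the full conjugate problem.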