808 results for Time varying networks
Abstract:
Using a highly resolved atmospheric general circulation model, the impact of different glacial boundary conditions on precipitation and atmospheric dynamics in the North Atlantic region is investigated. Six 30-yr time slice experiments of the Last Glacial Maximum at 21 thousand years before present (ka BP) and of a less pronounced glacial state – the Middle Weichselian (65 ka BP) – are compared to analyse the sensitivity to changes in the ice sheet distribution, in the radiative forcing and in the prescribed time-varying sea surface temperature and sea ice, which are taken from a lower-resolution but fully coupled atmosphere-ocean general circulation model. The strongest differences are found for simulations with different heights of the Laurentide ice sheet. A high surface elevation of the Laurentide ice sheet leads to a southward displacement of the jet stream and the storm track in the North Atlantic region. These changes in the atmospheric dynamics generate a band of increased precipitation in the mid-latitudes across the Atlantic to southern Europe in winter, while the precipitation pattern in summer is only marginally affected. The impacts of the radiative forcing differences between the two glacial periods and of the prescribed time-varying sea surface temperatures and sea ice are of second-order importance compared with that of the Laurentide ice sheet. They affect the atmospheric dynamics and precipitation in a similar but less pronounced manner than the topographic changes.
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fit than other approaches in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, since comparing diseased with non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
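The equivalence that is re-derived here can be sketched in generic notation (a standard result for piecewise exponential models, not the paper's exact notation): for subject i in interval j, with piecewise constant hazard λ_ij = exp(x_i'β_j), time at risk t_ij, and event indicator d_ij,

```latex
L_{ij} \;=\; \lambda_{ij}^{\,d_{ij}}\, e^{-\lambda_{ij} t_{ij}}
\;\propto\; \frac{\mu_{ij}^{\,d_{ij}}\, e^{-\mu_{ij}}}{d_{ij}!},
\qquad \mu_{ij} = \lambda_{ij}\, t_{ij},
\qquad \log \mu_{ij} = \log t_{ij} + x_i^{\top}\beta_j ,
```

so the event indicators can be analyzed as Poisson counts in a log-linear model with a log(time-at-risk) offset.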
Abstract:
Quantifying the health effects associated with simultaneous exposure to many air pollutants is now a research priority of the US EPA. Bayesian hierarchical models (BHM) have been extensively used in multisite time series studies of air pollution and health to estimate health effects of a single pollutant adjusted for potential confounding of other pollutants and other time-varying factors. However, when the scientific goal is to estimate the impacts of many pollutants jointly, a straightforward application of BHM is challenged by the need to specify a random-effect distribution on a high-dimensional vector of nuisance parameters, which often do not have an easy interpretation. In this paper we introduce a new BHM formulation, which we call "reduced BHM", aimed at analyzing clustered data sets in the presence of a large number of random effects that are not of primary scientific interest. At the first stage of the reduced BHM, we calculate the integrated likelihood of the parameter of interest (e.g. excess number of deaths attributed to simultaneous exposure to high levels of many pollutants). At the second stage, we specify a flexible random-effect distribution directly on the parameter of interest. The reduced BHM overcomes many of the challenges in the specification and implementation of full BHM in the context of a large number of nuisance parameters. In simulation studies we show that the reduced BHM performs comparably to the full BHM in many scenarios, and even performs better in some cases. Methods are applied to estimate location-specific and overall relative risks of cardiovascular hospital admissions associated with simultaneous exposure to elevated levels of particulate matter and ozone in 51 US counties during the period 1999-2005.
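In generic notation, the two-stage structure can be sketched as follows (using a common normal approximation to the first-stage integrated likelihood; the paper's exact formulation may differ):

```latex
\text{Stage 1 (per cluster } c\text{):}\quad
\hat{\beta}_c \mid \beta_c \;\sim\; \mathrm{N}\!\left(\beta_c,\, V_c\right),
\qquad
\text{Stage 2:}\quad
\beta_c \;\sim\; g(\beta_c \mid \phi),
```

where β_c is the parameter of interest (e.g. the cluster-specific excess risk), V_c is obtained from the integrated likelihood with the high-dimensional nuisance parameters integrated out, and g is a flexible random-effect distribution placed directly on β_c rather than on the full parameter vector.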
Abstract:
Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season. Changes in the sources of air pollution and meteorology can result in changes in characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987-2000, which includes data for 100 U.S. cities. At the national level, a 10 µg/m3 increase in PM10 at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical regions finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and guiding future analyses of particle constituent data.
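As a rough illustration of season-specific pollution effects, the sketch below fits a single-city, frequentist Poisson log-linear model with a PM10-by-season interaction on synthetic data; it omits the smooth trend and weather adjustments and the Bayesian hierarchical pooling across cities used in the study, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data for one city: mortality counts, lag-1 PM10, season label.
rng = np.random.default_rng(0)
n = 4 * 365
df = pd.DataFrame({
    "deaths": rng.poisson(30, n),
    "pm10_lag1": rng.gamma(2.0, 15.0, n),
    "season": np.tile(np.repeat(["winter", "spring", "summer", "fall"], 91), 5)[:n],
})

# Season-specific log-linear PM10 coefficients (no trend/weather terms in this sketch).
fit = smf.glm("deaths ~ C(season) + pm10_lag1:C(season)",
              data=df, family=sm.families.Poisson()).fit()

# Percent increase in mortality per 10 ug/m3 increase in PM10, by season.
coefs = fit.params.filter(like="pm10_lag1")
print(100 * (np.exp(10 * coefs) - 1))
```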
Abstract:
BACKGROUND: We evaluated the ability of CA15-3 and alkaline phosphatase (ALP) to predict breast cancer recurrence. PATIENTS AND METHODS: Data from seven International Breast Cancer Study Group trials were combined. The primary end point was relapse-free survival (RFS) (time from randomization to first breast cancer recurrence), and analyses included 3953 patients with one or more CA15-3 and ALP measurement during their RFS period. CA15-3 was considered abnormal if >30 U/ml or >50% higher than the first value recorded; ALP was recorded as normal, abnormal, or equivocal. Cox proportional hazards models with a time-varying indicator for abnormal CA15-3 and/or ALP were utilized. RESULTS: Overall, 784 patients (20%) had a recurrence, before which 274 (35%) had one or more abnormal CA15-3 and 35 (4%) had one or more abnormal ALP. Risk of recurrence increased by 30% for patients with abnormal CA15-3 [hazard ratio (HR) = 1.30; P = 0.0005], and by 4% for those with abnormal ALP (HR = 1.04; P = 0.82). Recurrence risk was greater for patients with either biomarker abnormal (HR = 2.40; P < 0.0001) and greatest for patients with both abnormal (HR = 4.69; P < 0.0001). ALP better predicted liver recurrence. CONCLUSIONS: CA15-3 was better able to predict breast cancer recurrence than ALP, but use of both biomarkers together provided a better early indicator of recurrence. Whether routine use of these biomarkers improves overall survival remains an open question.
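A minimal sketch of a Cox model with a time-varying abnormal-marker indicator, using the lifelines library on a hypothetical long-format data set (one row per patient per interval between marker measurements); the column names and values are illustrative, not the IBCSG data.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: intervals per patient with a time-varying
# indicator for abnormal CA15-3 and an event flag at the end of each interval.
df = pd.DataFrame({
    "id":             [1, 1, 2, 2, 3, 4, 4, 5, 6],
    "start":          [0, 12, 0, 18, 0, 0, 10, 0, 0],
    "stop":           [12, 30, 18, 40, 25, 10, 22, 15, 20],
    "ca153_abnormal": [0, 1, 0, 0, 1, 0, 1, 0, 0],
    "recurrence":     [0, 1, 0, 1, 0, 0, 0, 1, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="recurrence",
        start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio = exp(coef) for the time-varying indicator
```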
Abstract:
This doctoral thesis presents experimental results, together with a suitable synthesis with computational/theoretical results, towards the development of a reliable heat transfer correlation for a specific annular condensation flow regime inside a vertical tube. For fully condensing flows of pure vapor (FC-72) inside a vertical cylindrical tube of 6.6 mm diameter and 0.7 m length, the experimental measurements are shown to yield values of the average heat transfer coefficient and the approximate length of full condensation. The experimental conditions cover: mass flux G over a range of 2.9 kg/m²·s ≤ G ≤ 87.7 kg/m²·s, temperature difference ∆T (saturation temperature at the inlet pressure minus the mean condensing surface temperature) of 5 °C to 45 °C, and cases for which the length of full condensation xFC is in the range of 0 < xFC < 0.7 m. The range of flow conditions over which there is good agreement (within 15%) with the theory and its modeling assumptions has been identified. Additionally, the ranges of flow conditions for which there are significant discrepancies (between 15-30% and greater than 30%) with theory have also been identified. The thesis also refers to a brief set of key experimental results regarding the sensitivity of the flow to time-varying or quasi-steady (i.e. steady in the mean) impositions of pressure at both the inlet and the outlet. The experimental results support the updated theoretical/computational results that gravity-dominated condensing flows do not allow such elliptic impositions.
Abstract:
The use of a conventional orifice-plate meter is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) nature of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value fF) are termed pulsatile if the fluctuations have a direct correlation with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have fluctuation amplitudes that are significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency fcoriolis >> fF), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with the experimental measurements of the mean mass flow rate of these pulsatile flows, the computational approach presented here is shown to be effective in converting the dynamic pressure-difference signal into the desired dynamic flow-rate signal. The proposed approach is reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
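A simplified sketch of the conversion idea: apply the standard incompressible orifice-plate relation instantaneously to the measured pressure-difference signal and tune a single calibration coefficient so that the time-mean of the converted signal matches the Coriolis-measured mean flow rate. The geometry, fluid properties, and Coriolis reading below are illustrative assumptions, and the study's actual computational procedure is more involved.

```python
import numpy as np

# Incompressible orifice-plate relation applied to a time-varying dp(t):
#   m_dot(t) = C * A_t / sqrt(1 - beta**4) * sqrt(2 * rho * dp(t))
rho, d_pipe, d_orifice = 1.2, 0.05, 0.025            # kg/m^3, m, m (illustrative)
beta = d_orifice / d_pipe
A_t = np.pi * d_orifice**2 / 4

t = np.linspace(0.0, 1.0, 1000)                       # s
dp = 200.0 + 80.0 * np.sin(2 * np.pi * 10 * t)        # Pa, pulsatile, steady in the mean

def m_dot(dp, C):
    return C * A_t / np.sqrt(1 - beta**4) * np.sqrt(2 * rho * np.clip(dp, 0, None))

# Calibrate C so the time-mean matches the Coriolis-measured mean mass flow rate.
m_dot_coriolis_mean = 0.004                            # kg/s, hypothetical measurement
C = m_dot_coriolis_mean / np.mean(m_dot(dp, 1.0))

m_dot_t = m_dot(dp, C)                                 # desired dynamic flow-rate signal
print(C, m_dot_t.mean())
```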
Abstract:
Electrical Power Assisted Steering (EPAS) will likely be used in future automotive power steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, which can be indicated by variations of motor parameters such as the coil resistance and the torque constant, directly introduce inaccuracies into a control scheme based on the nominal parameter values, and the whole system performance suffers. The motor controller must therefore address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for BLDC motors are explored. The first algorithm employs a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors using the q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation, and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that the parameter estimation performance can be improved by using more history data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validate the superior estimation performance of the EKF; however, its computational complexity and stability may be barriers to practical implementation. The fourth algorithm is a model reference adaptive control (MRAC) that utilizes the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov's direct method. Simulation shows superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions, such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as initialization and rotor-angle transducer resolution. This model can also be beneficial during first-stage development of automotive BLDC motor applications.
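To illustrate the reference-model idea behind MRAC, the sketch below implements a textbook first-order MRAC with a Lyapunov-rule adaptation law on a scalar plant with unknown parameters. It is only an illustration of the structure (reference model, adjustable gains, error-driven adaptation) and is not the dissertation's dq-coordinate BLDC controller; all plant values and gains are assumed.

```python
import numpy as np

# Plant: y' = -a*y + b*u with unknown (slowly varying) a, b and b > 0.
# Reference model: ym' = -am*ym + bm*r.  Control: u = th1*r + th2*y.
# Lyapunov-rule adaptation: th1' = -gamma*e*r, th2' = -gamma*e*y, with e = y - ym.
a, b = 2.0, 1.5            # true plant parameters (unknown to the controller)
am, bm = 4.0, 4.0          # reference model
gamma, dt, T = 5.0, 1e-3, 10.0

y = ym = 0.0
th1 = th2 = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 2.0) < 1.0 else -1.0   # square-wave reference
    u = th1 * r + th2 * y
    e = y - ym                             # tracking error
    # Euler integration of plant, reference model, and adaptation laws
    dy, dym = -a * y + b * u, -am * ym + bm * r
    dth1, dth2 = -gamma * e * r, -gamma * e * y
    y, ym = y + dt * dy, ym + dt * dym
    th1, th2 = th1 + dt * dth1, th2 + dt * dth2

# Tracking error decays; with a sufficiently rich reference the gains approach
# bm/b ~ 2.67 and (a - am)/b ~ -1.33.
print(th1, th2)
```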
Abstract:
This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve – both individually and collectively – all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters for the thermal storage system.
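For a flavor of what tracking a melt front involves, the sketch below solves a 1D phase-change problem in a PCM slab heated at one face with an explicit enthalpy method. This is a deliberately simplified stand-in: the thesis uses a sharp-interface formulation and solves the full conjugate HTF-tube-PCM problem, and all material properties here are illustrative assumptions.

```python
import numpy as np

# 1D melting of a PCM slab, explicit enthalpy method (illustrative properties).
L, N = 0.05, 100                  # slab thickness (m), control volumes
dx = L / N
k, rho, c = 0.2, 800.0, 2000.0    # W/m-K, kg/m^3, J/kg-K
hsl, Tm, Tw = 2.0e5, 30.0, 60.0   # latent heat (J/kg), melt temp, hot-wall temp (C)
alpha = k / (rho * c)
dt = 0.4 * dx**2 / alpha          # satisfies the explicit stability limit

H = np.zeros(N)                   # volumetric enthalpy relative to solid at Tm, J/m^3

def temperature(H):
    """Invert the enthalpy-temperature relation: solid / mushy / liquid."""
    T = np.where(H < 0, Tm + H / (rho * c), Tm)                        # subcooled solid
    T = np.where(H > rho * hsl, Tm + (H - rho * hsl) / (rho * c), T)   # superheated liquid
    return T

for step in range(20000):
    T = temperature(H)
    Tb = np.concatenate(([Tw], T, [T[-1]]))   # hot wall at x=0, adiabatic at x=L
    q = -k * np.diff(Tb) / dx                 # face heat fluxes (wall face uses full-cell spacing)
    H += dt * (q[:-1] - q[1:]) / dx           # energy balance per control volume

melt_front = np.argmax(temperature(H) <= Tm) * dx     # first not-yet-fully-melted cell
liquid_fraction = np.clip(H / (rho * hsl), 0, 1).mean()
print(melt_front, liquid_fraction)
```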
Abstract:
This report presents a study of the problem of spacecraft attitude control using magnetic actuators. Several existing approaches are reviewed, and one control strategy is implemented and simulated. A time-varying feedback control law achieving inertial pointing for magnetically actuated spacecraft is implemented. The report explains the modeling of the spacecraft rigid-body dynamics, kinematics, and attitude control in detail. Whereas previous control laws have been established for stabilization around local equilibria, this report presents the results of a control law that yields a generic, global solution for attitude stabilization of a magnetically actuated spacecraft. The report also covers the use of MATLAB as a tool for both modeling and simulation of the spacecraft and controller. In conclusion, the simulation outlines the performance of the controller in independently stabilizing the spacecraft in three mutually perpendicular directions.
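The sketch below shows a common projection-based way of turning a desired PD attitude torque into a magnetorquer dipole command, only to illustrate why magnetic actuation is inherently time-varying (the achievable torque depends on the instantaneous geomagnetic field direction). The gains and field values are assumptions, and the report's specific time-varying feedback law may differ.

```python
import numpy as np

def magnetic_dipole_command(q_err_vec, omega, b_body, kp=1e-3, kd=5e-3):
    """
    Magnetorquer dipole moment m (A*m^2) from a desired PD torque.
    q_err_vec : vector part of the attitude error quaternion
    omega     : body angular rate, rad/s
    b_body    : local geomagnetic field in body axes, T
    The realizable torque tau = m x b is the component of the desired torque
    perpendicular to b; the component along b cannot be produced magnetically.
    """
    tau_des = -kp * q_err_vec - kd * omega
    b2 = float(np.dot(b_body, b_body))
    return np.cross(b_body, tau_des) / b2

# Example: small attitude error, slow tumble, LEO-like field magnitude (~3e-5 T).
m = magnetic_dipole_command(np.array([0.01, -0.02, 0.005]),
                            np.array([0.001, 0.0, -0.002]),
                            np.array([2.0e-5, -1.0e-5, 1.5e-5]))
print(m, "A*m^2")
```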
Abstract:
The general model
The aim of this chapter is to introduce a structured overview of the different possibilities available to display and analyze brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of this general model. Both the general model and the particular methods are also expressed in mathematical terms; it is, however, not necessary to understand these terms to understand the chapter. The general model that we propose here is based on the statement made in Chapter 3 that the electric field produced by active neurons in the brain propagates in brain tissue without delay in time. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, the EEG scalp potentials are thus "real-time" measurements, neither delayed nor a priori frequency-filtered. If only a single dipolar source in the brain were active, the temporal dynamics of the activity of that source would be exactly reproduced by the temporal dynamics observed in the scalp potentials produced by that source. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like dynamics in time has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the relation between the location and orientation of the active source, its strength, and the electrode position.
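In generic notation (not necessarily the chapter's own symbols), the zero-delay linear forward model underlying this statement can be written as

```latex
v(t) \;=\; L\, s(t), \qquad v(t)\in\mathbb{R}^{N_{\mathrm{electrodes}}},\quad s(t)\in\mathbb{R}^{N_{\mathrm{sources}}},
```

where L is the lead-field (gain) matrix determined by source locations, orientations, and head geometry. For a single active dipolar source, v(t) is a fixed scalp map (one column of L) scaled by the source waveform, which is why the scalp potentials reproduce the temporal dynamics of the source exactly while their amplitude depends on source location, orientation, strength, and electrode position.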
Abstract:
Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds the dynamic range of a single exposure. Common to all these techniques is the problem that the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final images. This limits the use of HDR and Exposure Fusion techniques because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence with simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and with different movement configurations. The result is a ghost-free and highly detailed exposure-fused image obtained at a low computational cost.
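A simplified sketch of detecting moving-pixel clusters with binary operations: median-threshold bitmaps are roughly invariant to exposure changes, so pixels where the bitmaps of different exposures disagree are flagged as motion and then cleaned up morphologically. This is an illustration of the general idea under those assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def movement_mask(exposures):
    """
    exposures : list of grayscale images (2D float arrays) from a bracketed sequence.
    Returns a binary mask of pixel clusters that move between exposures.
    """
    # Median-threshold bitmaps are roughly exposure-invariant, so disagreement
    # between them indicates scene motion rather than brightness change.
    bitmaps = [img > np.median(img) for img in exposures]
    stack = np.stack(bitmaps)                          # (n_exposures, H, W) booleans
    moving = stack.any(axis=0) & ~stack.all(axis=0)    # bitmaps disagree -> motion
    # Simple binary cleanup: remove isolated pixels, then grow to cover whole clusters.
    moving = ndimage.binary_opening(moving, iterations=1)
    moving = ndimage.binary_dilation(moving, iterations=2)
    return moving

# Hypothetical usage: mask = movement_mask([img_under, img_mid, img_over])
```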
Abstract:
Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought", which represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as onset, offset, or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand-mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. The problem with this approach is that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set of an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed an increased robustness to noise and a higher sensitivity to more subtle effects of microstate timing. We conclude that the proposed method is well suited for the assessment of timing differences in cognitive processes. The increased statistical power allows identifying more subtle effects, which is particularly important in small and scarce patient populations.
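A generic sketch of a randomization (permutation) test on a microstate timing feature, such as onset latency, compared between two groups. It illustrates only the randomization logic; the paper's procedure additionally avoids assigning grand-mean prototypes to individual data, which this sketch does not reproduce, and the latencies used here are hypothetical.

```python
import numpy as np

def randomization_test(onsets_a, onsets_b, n_perm=5000, seed=0):
    """Two-sample permutation test on a timing feature (e.g., microstate onset, ms)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(onsets_a, float), np.asarray(onsets_b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # relabel subjects at random
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        count += abs(diff) >= abs(observed)
    return observed, (count + 1) / (n_perm + 1)  # two-sided permutation p-value

# Hypothetical onset latencies (ms) for two groups of subjects.
obs, p = randomization_test([412, 398, 430, 405, 420], [388, 395, 380, 402, 390])
print(obs, p)
```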
Abstract:
We explore the macroeconomic effects of a compression in the long-term bond yield spread within the context of the Great Recession of 2007–09 via a time-varying parameter structural VAR model. We identify a “pure” spread shock defined as a shock that leaves the policy rate unchanged, which allows us to characterize the macroeconomic consequences of a decline in the yield spread induced by central banks’ asset purchases within an environment in which the policy rate is constrained by the effective zero lower bound. Two key findings stand out. First, compressions in the long-term yield spread exert a powerful effect on both output growth and inflation. Second, conditional on available estimates of the impact of the Federal Reserve’s and the Bank of England’s asset purchase programs on long-term yield spreads, our counterfactual simulations suggest that U.S. and U.K. unconventional monetary policy actions have averted significant risks both of deflation and of output collapses comparable to those that took place during the Great Depression.
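In generic notation, the model class can be sketched as a VAR with drifting coefficients (the paper's specification, including stochastic volatility and the identification scheme for the "pure" spread shock, adds further structure):

```latex
y_t \;=\; B_{0,t} + \sum_{i=1}^{p} B_{i,t}\, y_{t-i} + u_t,
\qquad u_t \sim \mathrm{N}(0,\,\Omega_t),
\qquad
\theta_t \equiv \mathrm{vec}\big(\left[B_{0,t},\, B_{1,t},\,\ldots,\, B_{p,t}\right]\big),
\qquad
\theta_t = \theta_{t-1} + \eta_t,\quad \eta_t \sim \mathrm{N}(0,\,Q).
```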
Abstract:
OBJECTIVE To examine the degree to which use of β blockers, statins, and diuretics in patients with impaired glucose tolerance and other cardiovascular risk factors is associated with new onset diabetes. DESIGN Reanalysis of data from the Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) trial. SETTING NAVIGATOR trial. PARTICIPANTS Patients who at baseline (enrolment) were treatment naïve to β blockers (n=5640), diuretics (n=6346), statins (n=6146), and calcium channel blockers (n=6294). Calcium channel blockers were used as a metabolically neutral control. MAIN OUTCOME MEASURES Development of new onset diabetes, diagnosed by standard plasma glucose levels in all participants and confirmed with glucose tolerance testing within 12 weeks after the increased glucose value was recorded. The relation between each treatment and new onset diabetes was evaluated using marginal structural models for causal inference, to account for time-dependent confounding in treatment assignment. RESULTS During the median five years of follow-up, β blockers were started in 915 (16.2%) patients, diuretics in 1316 (20.7%), statins in 1353 (22.0%), and calcium channel blockers in 1171 (18.6%). After adjusting for baseline characteristics and time-varying confounders, diuretics and statins were both associated with an increased risk of new onset diabetes (hazard ratio 1.23, 95% confidence interval 1.06 to 1.44, and 1.32, 1.14 to 1.48, respectively), whereas β blockers and calcium channel blockers were not associated with new onset diabetes (1.10, 0.92 to 1.31, and 0.95, 0.79 to 1.13, respectively). CONCLUSIONS Among people with impaired glucose tolerance and other cardiovascular risk factors and with serial glucose measurements, diuretics and statins were associated with an increased risk of new onset diabetes, whereas the effect of β blockers was non-significant.
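Marginal structural models handle time-dependent confounding by reweighting person-time with stabilized inverse-probability-of-treatment weights; in generic notation (a standard construction, with the trial reanalysis possibly differing in details such as censoring weights):

```latex
sw_i(t) \;=\; \prod_{k=0}^{t}
\frac{ f\!\left(A_{ik} \mid \bar{A}_{i,k-1},\, V_i \right) }
     { f\!\left(A_{ik} \mid \bar{A}_{i,k-1},\, V_i,\, \bar{L}_{ik} \right) },
```

where A_ik is treatment status at visit k, V_i denotes baseline covariates, and L̄_ik the history of time-varying confounders; fitting the outcome model in the weighted data removes the time-dependent confounding under the usual assumptions of no unmeasured confounding, positivity, and correct model specification.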