879 results for time varying parameter model
Abstract:
A class of multi-process models is developed for collections of time indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
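As a rough illustration of the large-count strategy: a Pólya-Gamma variable ω ~ PG(b, c) has closed-form moments, and for large b it can be replaced by a moment-matched Gaussian draw. The sketch below is mine, not the dissertation's code; it uses the standard PG moments E[ω] = b/(2c) tanh(c/2) and Var[ω] = b(sinh(c) − c)/(4c³ cosh²(c/2)).

```python
import numpy as np

def pg_moments(b, c):
    """Mean and variance of a Polya-Gamma PG(b, c) random variable:
    E[w] = b/(2c) tanh(c/2),  Var[w] = b (sinh(c) - c) / (4 c^3 cosh(c/2)^2),
    with limits b/4 and b/24 as c -> 0."""
    c = np.asarray(c, dtype=float)
    small = np.abs(c) < 1e-4
    cs = np.where(small, 1.0, c)   # placeholder to avoid 0/0 in the unused branch
    mean = np.where(small, b / 4.0, b / (2.0 * cs) * np.tanh(cs / 2.0))
    var = np.where(small, b / 24.0,
                   b * (np.sinh(cs) - cs) / (4.0 * cs**3 * np.cosh(cs / 2.0)**2))
    return mean, var

def sample_pg_approx(b, c, rng):
    """Gaussian approximation to PG(b, c); reasonable when the count total b is large."""
    mean, var = pg_moments(b, c)
    return rng.normal(mean, np.sqrt(var))

rng = np.random.default_rng(0)
omega = sample_pg_approx(b=500.0, c=1.3, rng=rng)  # one augmentation draw for 500 trials
```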
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
The real-time optimization of large-scale systems is a difficult problem due to the need for complex models involving uncertain parameters and the high computational cost of solving such problems by a decentralized approach. Extremum-seeking control (ESC) is a model-free real-time optimization technique which can estimate unknown parameters and can optimize nonlinear time-varying systems using only a measurement of the cost function to be minimized. In this thesis, we develop a distributed version of extremum-seeking control which allows large-scale systems to be optimized without models and with minimal computing power. First, we develop a continuous-time distributed extremum-seeking controller. It has three main components: consensus, parameter estimation, and optimization. The consensus provides each local controller with an estimate of the cost to be minimized, allowing them to coordinate their actions. Using this cost estimate, parameters of a local input-output model are estimated, and the cost is minimized by gradient descent based on the resulting gradient estimate. Next, a similar distributed extremum-seeking controller is developed in discrete-time. Finally, we consider an interesting application of distributed ESC: formation control of high-altitude balloons for high-speed wireless internet. These balloons must be steered into a favourable formation where they are spread out over the Earth and provide coverage to the entire planet. Distributed ESC is applied to this problem, and is shown to be effective for a system of 1200 balloons subjected to realistic wind currents. The approach does not require a wind model and uses a cost function based on a Voronoi partition of the sphere. Distributed ESC is able to steer balloons from a few initial launch sites into a formation which provides coverage to the entire Earth and can maintain a similar formation as the balloons move with the wind around the Earth.
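For intuition about the ESC building block, here is a minimal single-loop, perturbation-based extremum-seeking sketch; it is not the thesis's distributed consensus-based scheme, and the gains, dither parameters, and quadratic test cost are all illustrative.

```python
import numpy as np

def esc_minimize(cost, u0=0.0, a=0.2, omega=5.0, k=0.8, dt=0.01, T=60.0):
    """Perturbation-based extremum seeking: gradient descent on a slope
    estimated using only measurements of the cost."""
    u_hat = u0
    y_bar = cost(u0)                 # slow running average of the cost (washout)
    for step in range(int(T / dt)):
        t = step * dt
        u = u_hat + a * np.sin(omega * t)           # dither the input
        y = cost(u)
        y_bar += 0.01 * (y - y_bar)                 # low-pass: track the mean cost
        grad_est = (y - y_bar) * np.sin(omega * t) * 2.0 / a  # demodulate the ripple
        u_hat -= k * grad_est * dt                  # descend the estimated gradient
    return u_hat

u_star = esc_minimize(lambda u: (u - 2.0) ** 2 + 1.0)
print(u_star)   # approaches the minimizer u = 2 without any model of the cost
```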
Abstract:
Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. As wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. This new method is shown to be capable of reducing computational complexity and increasing prediction accuracy. It is further proved that the forecasting result converges as the number of available data approaches infinity. Further, a teaching learning based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that from a single wind farm to show the effectiveness of the proposed method.
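A minimal sketch of the general approach, assuming a sliding-window Gaussian Process as a stand-in for the variant introduced in the study (the TLBO training step is not reproduced; hyperparameters are fit by marginal likelihood, scikit-learn's default, and the series is synthetic):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for a wind power series: smooth signal plus noise.
t = np.arange(300, dtype=float)
y = np.sin(2 * np.pi * t / 48.0) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Sliding window: fit only recent observations so the model tracks drift,
# which also caps the O(n^3) cost of exact GP regression.
window = 96
X_train, y_train = t[-window:, None], y[-window:]

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = (t[-1] + np.arange(1, 13))[:, None]        # 12-step-ahead forecast horizon
y_hat, y_std = gp.predict(X_new, return_std=True)  # predictive mean and std
```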
Abstract:
This thesis studies the field of asset price bubbles. It comprises three independent chapters. Each of these chapters either directly or indirectly analyses the existence or implications of asset price bubbles. The type of bubbles assumed in each of these chapters is consistent with rational expectations. Thus, the kind of price bubbles investigated here are known as rational bubbles in the literature. The following describes the three chapters. Chapter 1: This chapter attempts to explain the recent US housing price bubble by developing a heterogeneous agent endowment economy asset pricing model with risky housing, endogenous collateral and defaults. Investment in housing is subject to an idiosyncratic risk and some mortgages are defaulted in equilibrium. We analytically derive the leverage or the endogenous loan to value ratio. This variable comes from a limited participation constraint in a one period mortgage contract with monitoring costs. Our results show that low values of housing investment risk produce a credit easing effect encouraging excess leverage and generate credit driven rational price bubbles in the housing good. Conversely, high values of housing investment risk produce a credit crunch characterized by tight borrowing constraints, low leverage and low house prices. Furthermore, the leverage ratio was found to be procyclical and the rate of defaults countercyclical, consistent with empirical evidence. Chapter 2: It is widely believed that financial assets have considerable persistence and are susceptible to bubbles. However, identification of this persistence and potential bubbles is not straightforward. This chapter tests for price bubbles in the United States housing market accounting for long memory and structural breaks. The intuition is that the presence of long memory negates price bubbles while the presence of breaks could artificially induce bubble behaviour. Hence, we use semi-parametric Whittle and parametric ARFIMA procedures, which are consistent under a variety of residual biases, to estimate the value of the long memory parameter, d, of the log rent-price ratio. We find that the semi-parametric estimation procedures, robust to non-normality and heteroskedastic errors, found far more bubble regions than the parametric ones. A structural break was identified in the mean and trend of all the series which, when accounted for, removed bubble behaviour in a number of regions. Importantly, the United States housing market showed evidence for rational bubbles at both the aggregate and regional levels. In the third and final chapter, we attempt to answer the following question: To what extent should individuals participate in the stock market and hold risky assets over their lifecycle? We answer this question by employing a lifecycle consumption-portfolio choice model with housing, labour income and time varying predictable returns where the agents are constrained in the level of their borrowing. We first analytically characterize and then numerically solve for the optimal asset allocation on the risky asset, comparing the return predictability case with that of IID returns. We successfully resolve the puzzles and find equity holding and participation rates close to the data. We also find that return predictability substantially alters both the level of risky portfolio allocation and the rate of stock market participation.
High factor (dividend-price ratio) realizations and high persistence of the factor process, indicative of stock market bubbles, raise the amount of wealth invested in risky assets and the level of stock market participation, respectively. Conversely, rare disasters were found to bring down these rates, the change being severe for investors in the later years of the life-cycle. Furthermore, investors following time varying returns (return predictability) hedged background risks significantly better than the IID ones.
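For reference, the semi-parametric (local) Whittle estimator of the long memory parameter d mentioned in Chapter 2 can be sketched in a few lines; the bandwidth rule and the white-noise test series below are illustrative, not those of the chapter.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Semi-parametric local Whittle estimate of the long-memory parameter d,
    using the periodogram at the first m Fourier frequencies (Robinson, 1995)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = m or int(n ** 0.65)                      # common bandwidth rule of thumb
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)

    def R(d):  # concentrated Whittle objective
        return np.log(np.mean(lam ** (2.0 * d) * I)) - 2.0 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(2)
print(local_whittle_d(rng.normal(size=2048)))    # white noise: estimate near d = 0
```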
Abstract:
The objective of this study was to gain an understanding of the effects of population heterogeneity, missing data, and causal relationships on parameter estimates from statistical models when analyzing change in medication use. From a public health perspective, two timely topics were addressed: the use and effects of statins in primary prevention of cardiovascular disease, and polypharmacy in the older population. Growth mixture models were applied to characterize the accumulation of cardiovascular and diabetes medications among an apparently healthy population of statin initiators. The causal effect of statin adherence on the incidence of acute cardiovascular events was estimated using marginal structural models in comparison with discrete-time hazards models. The impact of missing data on the growth estimates of the evolution of polypharmacy was examined by comparing statistical models under different assumptions about the missing data mechanism. The data came from Finnish administrative registers and from the population-based Geriatric Multidisciplinary Strategy for the Good Care of the Elderly study conducted in Kuopio, Finland, during 2004–07. Five distinct patterns of accumulating medications emerged among the population of apparently healthy statin initiators during the two years after statin initiation. Proper accounting for time-varying dependencies between adherence to statins and confounders using marginal structural models produced estimation results comparable with those from a discrete-time hazards model. The missing data mechanism was shown to be a key component when estimating the evolution of polypharmacy among older persons. In conclusion, population heterogeneity, missing data and causal relationships are important aspects of longitudinal studies that bear on the study question and should be critically assessed when performing statistical analyses. Analyses should be supplemented with sensitivity analyses of the model assumptions.
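As a minimal illustration of the marginal structural model machinery, the sketch below computes stabilized inverse-probability-of-treatment weights for a single time interval on synthetic data; a full longitudinal MSM multiplies such weights cumulatively across intervals and fits a weighted outcome model. All variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical long-format data: one row per person-interval, with adherence A,
# a time-varying confounder L (e.g. number of co-medications), and baseline V.
rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({"V": rng.normal(size=n),    # baseline covariate
                   "L": rng.normal(size=n)})   # time-varying confounder
p = 1.0 / (1.0 + np.exp(-(0.5 * df["V"] + 0.8 * df["L"])))
df["A"] = rng.binomial(1, p)                   # observed statin adherence

# Denominator model: P(A | V, L); numerator model: P(A | V) -> stabilized weights.
den = LogisticRegression().fit(df[["V", "L"]], df["A"]).predict_proba(df[["V", "L"]])
num = LogisticRegression().fit(df[["V"]], df["A"]).predict_proba(df[["V"]])
pa_den = np.where(df["A"] == 1, den[:, 1], den[:, 0])
pa_num = np.where(df["A"] == 1, num[:, 1], num[:, 0])
df["sw"] = pa_num / pa_den   # multiplied over intervals in a longitudinal MSM
```

The outcome model (e.g. a pooled logistic regression for the discrete-time hazard of a cardiovascular event) is then fit with `sw` as observation weights, which removes the time-varying confounding that a naive adjusted model would mishandle.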
Abstract:
The long-term adverse effects on health associated with air pollution exposure can be estimated using either cohort or spatio-temporal ecological designs. In a cohort study, the health status of a cohort of people is assessed periodically over a number of years, and then related to estimated ambient pollution concentrations in the cities in which they live. However, such cohort studies are expensive and time consuming to implement, due to the long-term follow up required for the cohort. Therefore, spatio-temporal ecological studies are also being used to estimate the long-term health effects of air pollution, as they are easy to implement due to the routine availability of the required data. Spatio-temporal ecological studies estimate the health impact of air pollution by utilising geographical and temporal contrasts in air pollution and disease risk across $n$ contiguous small-areas, such as census tracts or electoral wards, for multiple time periods. The disease data are counts of the numbers of disease cases occurring in each areal unit and time period, and thus Poisson log-linear models are typically used for the analysis. The linear predictor includes pollutant concentrations and known confounders such as socio-economic deprivation. However, as the disease data typically contain residual spatial or spatio-temporal autocorrelation after the covariate effects have been accounted for, these known covariates are augmented by a set of random effects. One key problem in these studies is obtaining spatially representative pollution concentrations for each areal unit; these are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over modelled concentrations (at grid level) from an atmospheric dispersion model. The aim of this thesis is to investigate the health effects of long-term exposure to nitrogen dioxide (NO2) and particulate matter (PM10) in mainland Scotland, UK. In order to gain an initial impression of the air pollution health effects in mainland Scotland, chapter 3 presents a standard epidemiological study using a benchmark method. The remaining main chapters (4, 5, 6) cover the main methodological focus of this thesis, which is threefold: (i) how to better estimate pollution by developing a multivariate spatio-temporal fusion model that relates monitored and modelled pollution data over space, time and pollutant; (ii) how to simultaneously estimate the joint effects of multiple pollutants; and (iii) how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects. Specifically, chapters 4 and 5 are developed to achieve (i), while chapter 6 focuses on (ii) and (iii). In chapter 4, I propose an integrated model for estimating the long-term health effects of NO2 that fuses modelled and measured pollution data to provide improved predictions of areal level pollution concentrations and hence health effects. The air pollution fusion model proposed is a Bayesian space-time linear regression model for relating the measured concentrations to the modelled concentrations for a single pollutant, whilst allowing for additional covariate information such as site type (e.g. roadside, rural, etc.) and temperature. However, it is known that some pollutants may be correlated because they are generated by common processes or driven by similar factors such as meteorology. The correlation between pollutants can help to predict one pollutant by borrowing strength from the others.
Therefore, in chapter 5, I propose a multi-pollutant model: a multivariate spatio-temporal fusion model that extends the single pollutant model of chapter 4 and relates monitored and modelled pollution data over space, time and pollutant to predict pollution across mainland Scotland. Considering that we are exposed to multiple pollutants simultaneously, because the air we breathe contains a complex mixture of particle and gas phase pollutants, the health effects of exposure to multiple pollutants are investigated in chapter 6. This is a natural extension of the single pollutant health effects analysis in chapter 4. Given that NO2 and PM10 are highly correlated (a multicollinearity issue) in my data, I first propose a temporally-varying linear model to regress one pollutant (e.g. NO2) against another (e.g. PM10) and then use the residuals in the disease model as well as PM10, thus investigating the health effects of exposure to both pollutants simultaneously. Another issue considered in chapter 6 is allowing for the uncertainty in the estimated pollution concentrations when estimating their health effects. In total, four approaches are developed to adjust for exposure uncertainty. Finally, chapter 7 summarises the work contained within this thesis and discusses the implications for future research.
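The fixed-effects core of the disease model described above (Poisson log-linear with an expected-count offset) can be sketched as follows on synthetic areal data; the thesis's random effects for residual spatio-temporal autocorrelation and its exposure-uncertainty adjustments are omitted here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic areal data (hypothetical): observed counts y, expected counts E,
# an annual mean pollution concentration and a deprivation score per area.
rng = np.random.default_rng(4)
n = 500
d = pd.DataFrame({
    "E": rng.uniform(20, 80, n),        # expected counts (age-sex adjusted)
    "no2": rng.normal(20, 5, n),        # annual mean NO2, ug/m3
    "deprivation": rng.normal(0, 1, n),
})
eta = 0.01 * d["no2"] + 0.10 * d["deprivation"]
d["y"] = rng.poisson(d["E"] * np.exp(eta))

# Poisson log-linear model: log E[y] = log(E) + beta0 + beta1*NO2 + beta2*deprivation
X = sm.add_constant(d[["no2", "deprivation"]])
fit = sm.GLM(d["y"], X, family=sm.families.Poisson(),
             offset=np.log(d["E"])).fit()
print(np.exp(fit.params["no2"]))        # relative risk per 1 ug/m3 of NO2
```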
Abstract:
The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the movement of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance capabilities, which escalate as the number of units in the formation increases, complicated nonlinear effects are imposed on the dynamics, and uncertainty may arise from the lack of knowledge of system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods to allow spacecraft in formation, with circular/elliptical reference orbits, to efficiently execute safe autonomous manoeuvres. These controllers are distinguished from the bulk of the literature in that they merge guidance laws never before applied to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical performance of one or several followers with respect to a leader to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption by a Riccati procedure using a non-infinite cost function defined in terms of the desired TVNT, while repelling accelerations generated from the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner and Hempel equations, and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, to obtain the states which cannot be measured in order to approximately reconstruct the full state vector. The separation principle states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input. This allows the observer and the feedback to be designed independently, exploiting the advantages of linear quadratic regulator theory, in order to estimate the states of a dynamical system with model and sensor uncertainty. The relative dynamics is described with the linear system used in the previous controller, with a control input and nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
An adaptive control law capable of delivering superior closed-loop performance when compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel noncertainty-equivalence controller based on the Immersion and Invariance paradigm for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical certainty-equivalence control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This parameter has the ultimate task of shaping the manifold into which the adaptive system is immersed. The performance of the controller is proven stable via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the design of the controllers, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation into scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations including the CAS show an effective decrease of collision risk during the performance of the manoeuvre.
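As a simplified, self-contained illustration of the Riccati-based ingredient: for a circular reference orbit the Tschauner-Hempel equations reduce to the Clohessy-Wiltshire model, and an infinite-horizon LQR gain (rather than the thesis's finite-horizon tracker with the CAS term) can be computed directly. The mean motion and weights below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Clohessy-Wiltshire relative dynamics (circular-orbit limit of the
# Tschauner-Hempel equations), state [x, y, z, vx, vy, vz], thrust input u:
#   x'' = 3 n^2 x + 2 n y' + ux,  y'' = -2 n x' + uy,  z'' = -n^2 z + uz
n_orb = 0.0011  # mean motion, rad/s (roughly a ~700 km LEO; illustrative)
A = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [3 * n_orb**2, 0, 0, 0, 2 * n_orb, 0],
    [0, 0, 0, -2 * n_orb, 0, 0],
    [0, 0, -n_orb**2, 0, 0, 0],
])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

Q = np.diag([1, 1, 1, 10, 10, 10])   # weights on position and velocity errors
R = 1e6 * np.eye(3)                  # heavy thrust penalty (fuel economy)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # u = -K (x - x_ref) tracks the nominal trajectory
```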
Abstract:
This paper proposes an enhanced and more efficient parameter model derived from the generalized five-parameter (single-diode) model of PV cells. The paper also introduces, describes, and implements a seven-parameter model for the photovoltaic (PV) cell, comprising two internal parameters and five external parameters. The model is obtained from the mathematical equations of an equivalent circuit consisting of a photo-generated current source, a series resistor, a shunt resistor, and a diode. The fundamental equation of the PV cell is used to analyse and best fit the observed data. In particular, the bisection iteration method is used to obtain the expected result and to study how the model responds to changes in the different parameters under various conditions. The resulting model can be used to measure and understand the behaviour of photovoltaic cells under changing conditions and for parameter extraction. The effect is also studied with the I-V and P-V characteristics of PV cells, although optimizing the output with real-time simulation remains a challenge. The working procedure is discussed, and an experiment is presented to gain insight into the produced model and to assess its validity. Finally, we observed that the simulation results agree closely with the produced model.
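A sketch of the bisection solution of the implicit single-diode equation is given below; the five parameter values are illustrative placeholders, not the paper's extracted values, and the two extra internal parameters of the seven-parameter model are not modelled.

```python
import numpy as np

def cell_current(V, Iph=8.0, I0=1e-9, Rs=0.01, Rsh=10.0, n=1.3, T=298.15):
    """Solve the implicit single-diode equation for cell current I at voltage V
    by bisection:  I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                       # thermal voltage, ~25.7 mV at 298 K

    def f(I):                            # monotonically decreasing in I
        return Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) \
                   - (V + I * Rs) / Rsh - I

    lo, hi = -1.0, Iph + 1.0             # bracket with f(lo) > 0 > f(hi) for 0 <= V <= Voc
    for _ in range(60):                  # 60 halvings: interval width < (hi-lo)/2**60
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

V = np.linspace(0.0, 0.7, 50)
I = np.array([cell_current(v) for v in V])   # I-V curve; the P-V curve is V * I
```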
Abstract:
Identification, prediction, and control of a system are engineering subjects, regardless of the nature of the system. Here, the temporal evolution of the number of individuals with dengue fever recorded weekly in the city of Rio de Janeiro, Brazil, during 2007, is used to identify SIS (susceptible-infective-susceptible) and SIR (susceptible-infective-removed) models formulated in terms of a cellular automaton (CA). In the identification process, a genetic algorithm (GA) is utilized to find the probabilities of the state transition S -> I capable of reproducing the 2007 historical series in the CA lattice. These probabilities depend on the number of infective neighbors. Time-varying and non-time-varying probabilities, three different lattice sizes, and two kinds of coupling topology among the cells are taken into consideration. Then, these epidemiological models built by combining CA and GA are employed to predict the cases of sick persons in 2008. Such models can be useful for forecasting and controlling the spreading of this infectious disease.
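A minimal version of the CA component is sketched below: a synchronous SIS update on a torus in which the infection probability depends on the number of infective von Neumann neighbours. In the paper these probabilities are found by the GA; here they are hand-picked placeholders.

```python
import numpy as np

def sis_step(grid, p, gamma, rng):
    """One synchronous SIS update: a susceptible cell becomes infective with
    probability p[k] (k = number of infective 4-neighbours, torus topology);
    an infective cell recovers to susceptible with probability gamma."""
    infective = (grid == 1)
    k = (np.roll(infective, 1, 0) + np.roll(infective, -1, 0) +
         np.roll(infective, 1, 1) + np.roll(infective, -1, 1)).astype(int)
    u = rng.random(grid.shape)
    new_infections = (~infective) & (u < p[k])     # S -> I, neighbour-dependent
    recoveries = infective & (u < gamma)           # I -> S
    out = grid.copy()
    out[new_infections] = 1
    out[recoveries] = 0
    return out

rng = np.random.default_rng(5)
grid = (rng.random((100, 100)) < 0.01).astype(int)   # 1% initial infectives
p = np.array([0.0, 0.1, 0.2, 0.35, 0.5])             # p[k] for k = 0..4 (GA would fit these)
series = []
for week in range(52):
    grid = sis_step(grid, p, gamma=0.3, rng=rng)
    series.append(grid.sum())                        # weekly case count to compare with data
```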
Abstract:
Aims. We investigate the time-varying patterns in line profiles, V/R, and radial velocity of the Be star HD 173948 (λ Pavonis). Methods. Time series analyses of radial velocity, V/R, and line profiles of He I, Fe II, and Si II were performed with the Cleanest algorithm. An estimate of the stellar rotation frequency was derived from the stellar mass and radius in the Roche limit by adopting an aspect angle i derived from the fittings of non-LTE model spectra affected by rotation. The projected rotation velocity, necessary as input for the spectral synthesis procedure, was evaluated from the Fourier transform of the rotation profiles of all neutral helium lines in the optical range. Results. Emission episodes in Balmer and He I lines, as well as V/R cyclic variations, are reported for spectra observed in 1999, followed by a relatively quiescent phase (2000) and then again a new active epoch (2001). From time series analyses of line profiles, radial velocities, and V/R ratios, four signals with high confidence levels are detected: ν1 = 0.17 ± 0.02, ν2 = 0.49 ± 0.05, ν3 = 0.82 ± 0.03, and ν4 = 1.63 ± 0.04 c/d. We interpret ν4 as a non-radial pulsation g-mode and ν3 as a signal related to the orbital timescale of ejected material, which is near the theoretical rotation frequency of 0.81 c/d inferred from the fitting of the models taking gravity darkening into account. The signals ν1 and ν2 are viewed as aliases of ν3 and ν4.
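The Cleanest algorithm itself is not widely packaged; as a stand-in for the frequency search on unevenly sampled series, the sketch below uses a Lomb-Scargle periodogram (which, unlike Cleanest/CLEAN, does not deconvolve sampling aliases) on synthetic data containing the 1.63 c/d signal.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Unevenly sampled radial-velocity-like series (synthetic; times in days).
rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 80, 200))
y = 3.0 * np.sin(2 * np.pi * 1.63 * t) + rng.normal(0, 1.0, t.size)

# Periodogram over 0.05-5 cycles/day; the highest peak estimates the frequency.
freq, power = LombScargle(t, y).autopower(minimum_frequency=0.05,
                                          maximum_frequency=5.0)
print(freq[np.argmax(power)])   # should recover ~1.63 c/d
```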
Abstract:
An order of magnitude sensitivity gain is described for using quasar spectra to investigate possible time or space variation in the fine-structure constant α. Applied to a sample of 30 absorption systems, spanning redshifts 0.5 < z < 1.6, we derive limits on variations in α over a wide range of epochs. For the whole sample, Δα/α = (−1.1 ± 0.4) × 10^−5. This deviation is dominated by measurements at z > 1, where Δα/α = (−1.9 ± 0.5) × 10^−5. For z < 1, Δα/α = (−0.2 ± 0.4) × 10^−5. While this is consistent with a time-varying α, further work is required to explore possible systematic errors in the data, although careful searches have so far revealed none.
Abstract:
Testing ecological models for management is an increasingly important part of the maturation of ecology as an applied science. Consequently, we need to work at applying fair tests of models with adequate data. We demonstrate that a recent test of a discrete time, stochastic model was biased towards falsifying the predictions. If the model was a perfect description of reality, the test falsified the predictions 84% of the time. We introduce an alternative testing procedure for stochastic models, and show that it falsifies the predictions only 5% of the time when the model is a perfect description of reality. The example is used as a point of departure to discuss some of the philosophical aspects of model testing.
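The paper's point, that a test can falsify a perfectly correct stochastic model far more often than its nominal level, is easy to reproduce by parametric Monte Carlo: simulate from the fitted model itself and record the rejection rate. The model and test below are generic stand-ins, not the ecological example in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(theta, n):
    """Stand-in stochastic model: a Gaussian AR(1) around a mean mu."""
    x = np.empty(n)
    x[0] = theta["mu"]
    for i in range(1, n):
        x[i] = theta["mu"] + theta["phi"] * (x[i - 1] - theta["mu"]) + rng.normal()
    return x

def naive_test(x, theta):
    """Rejects when the sample mean strays beyond ~2 i.i.d. standard errors;
    this ignores the autocorrelation the model itself generates."""
    return abs(x.mean() - theta["mu"]) > 2.0 / np.sqrt(x.size)

theta = {"mu": 10.0, "phi": 0.8}
rejections = np.mean([naive_test(simulate(theta, 200), theta) for _ in range(2000)])
print(rejections)   # far above 0.05: the naive test is biased toward falsification
```

A calibrated procedure instead derives the rejection threshold from the same Monte Carlo distribution (e.g. the 2.5% and 97.5% simulated quantiles of the statistic), which by construction falsifies a true model about 5% of the time.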
Abstract:
Dormancy release in seeds of Lolium rigidum Gaud. (annual ryegrass) was investigated in relation to temperature and seed water content. Freshly matured seeds were collected from cropping fields at Wongan Hills and Merredin, Western Australia. Seeds from Wongan Hills were equilibrated to water contents between 6 and 18% dry weight and after-ripened at constant temperatures between 9 and 50 °C for up to 23 weeks. Wongan Hills and Merredin seeds at water contents between 7 and 17% were also after-ripened in full sun or shade conditions. Dormancy was tested at regular intervals during after-ripening by germinating seeds on agar under 12-h alternating 15 °C (dark) and 25 °C (light) periods. The rate of dormancy release for Wongan Hills seeds was a positive linear function of after-ripening temperature above a base temperature (Tb) of 5.4 °C. A thermal after-ripening time model for dormancy loss, accounting for seed moisture in the range 6-18%, was developed using germination data for Wongan Hills seeds after-ripened at constant temperatures. The model accurately predicted dormancy release for Wongan Hills seeds after-ripened under naturally fluctuating temperatures. Seeds from Merredin responded similarly but had lower dormancy at collection and a faster rate of dormancy release in seeds below 9% water content.
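The thermal-time idea, dormancy release accumulating in proportion to temperature above Tb = 5.4 °C, reduces to a degree-day sum; a sketch follows, with synthetic temperatures and with the paper's seed-moisture adjustment omitted.

```python
import numpy as np

Tb = 5.4  # base temperature, deg C, from the paper

def thermal_time(daily_mean_temps):
    """Accumulated after-ripening thermal time in degree-days: only the part of
    each day's mean temperature above Tb contributes to dormancy release."""
    T = np.asarray(daily_mean_temps, dtype=float)
    return np.sum(np.maximum(T - Tb, 0.0))

# Example: 10 weeks of naturally fluctuating daily means (synthetic values).
rng = np.random.default_rng(8)
temps = 20.0 + 8.0 * np.sin(2 * np.pi * np.arange(70) / 30.0) + rng.normal(0, 2, 70)
theta = thermal_time(temps)   # compared against the thermal time required for
print(theta)                  # a target germination fraction in the fitted model
```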
Abstract:
Most biogeographical studies propose that southern temperate faunal disjunctions are either the result of vicariance of taxa that originated in Gondwana or the result of transoceanic dispersal of taxa that originated after the breakup of Gondwana. The aim of this paper is to show that this is a false dichotomy. Antarctica retained a mild climate until the mid-Cenozoic and had lasting connections, notably with southern South America and Australia. Both taxa originally Gondwanan and taxa secondarily on Gondwanan areas were subjected to tectonic-induced vicariance, and there is no need to invoke ad hoc transoceanic dispersal, even for post-Gondwanan taxa. These different elements with circumantarctic distributions are here called 'allochronic taxa': taxa presently occupying the same area, but whose presence in that area does not belong to the same time period. This model allows accommodation of conflicting sources of evidence now available for many groups with circumantarctic distributions. The fact that the species from both layers are mixed up in the current biodiversity implies the need to use additional sources of evidence - such as biogeographical, palaeontological, geological and molecular - to discriminate which are the original Gondwanan and which are post-Gondwanan elements in austral landmasses.
Abstract:
The distributed-tubes model of hepatic elimination is extended to include intermixing between sinusoids, resulting in the formulation of a new, interconnected-tubes model. The new model is analysed for the simple case of two interconnected tubes, where an exact solution is obtained. For the case of many strongly interconnected tubes, it is shown that a zeroth-order approximation leads to the convection-dispersion model. As a consequence, the dispersion number is expressed, for the first time, in terms of its main physiological determinants: heterogeneity of flow and density of interconnections between sinusoids. The analysis of multiple indicator dilution data from a perfused liver preparation using the simplest version of the model yields an estimate of 10.3 for the average number of interconnections. The problem of boundary conditions for the dispersion model is considered from the viewpoint that the dispersion-convection equation is a zeroth-order approximation to the equations of the interconnected-tubes model.
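For illustration, the transit-time density implied by the convection-dispersion limit (under the mixed boundary conditions common in this literature) takes an inverse-Gaussian form governed solely by the dispersion number D_N; the sketch below assumes that standard form, with an illustrative D_N value.

```python
import numpy as np

def dispersion_density(tau, DN):
    """Transit-time density of the convection-dispersion model in normalized
    time tau = t / t_mean (inverse-Gaussian form, mixed boundary conditions):
    f(tau) = (4 pi DN tau^3)^(-1/2) exp(-(1 - tau)^2 / (4 DN tau))."""
    tau = np.asarray(tau, dtype=float)
    return (4.0 * np.pi * DN * tau**3) ** -0.5 * np.exp(-(1.0 - tau)**2 / (4.0 * DN * tau))

tau = np.linspace(0.01, 3.0, 300)
f = dispersion_density(tau, DN=0.3)   # larger D_N -> broader, more dispersed outflow curve
dt = tau[1] - tau[0]
print((f * dt).sum())                 # close to 1: a proper density over transit times
```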