959 results for Polynomial distributed lag models
Abstract:
Already in ancient Greece, Hippocrates postulated that disease showed a seasonal pattern characterised by excess winter mortality. Since then, several studies have confirmed this finding, and it was generally accepted that the increase in winter mortality was mostly due to respiratory infections and seasonal influenza. More recently, it was shown that cardiovascular disease (CVD) mortality also displayed such seasonality, and that the magnitude of the seasonal effect increased from the poles to the equator. The recent study by Yang et al assessed CVD mortality attributable to ambient temperature using daily data from 15 cities in China for the years 2007-2013, including nearly two million CVD deaths. A high temperature variability between and within cities can be observed (figure 1). They used sophisticated statistical methodology to account for the complex temperature-mortality relationship: first, distributed lag non-linear models combined with quasi-Poisson regression to obtain city-specific estimates, taking into account temperature, relative humidity and atmospheric pressure; then, a meta-analysis to obtain the pooled estimates. The results confirm the winter excess mortality as reported by the Eurowinter group [3] and others [4], but they show that the magnitude of ambient temperature.
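As a rough illustration of the methodology named above, the following is a minimal sketch of a distributed lag non-linear model fitted by quasi-Poisson regression. The data, lag length, and polynomial degrees are illustrative assumptions, not the specification used by Yang et al.

```python
# Minimal DLNM sketch: polynomial cross-basis in temperature and lag,
# quasi-Poisson GLM. All data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, max_lag = 1000, 21
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)
deaths = rng.poisson(50, n)  # placeholder daily CVD death counts

# Lagged temperature matrix: column l holds temperature l days earlier.
L = np.column_stack([np.roll(temp, l) for l in range(max_lag + 1)])
L = L[max_lag:]              # drop rows containing wrapped-around values
y = deaths[max_lag:]

# Cross-basis: degree-2 polynomial in temperature crossed with a
# degree-2 polynomial in lag (an Almon-type lag constraint).
lags = np.arange(max_lag + 1)
lag_basis = np.column_stack([lags**j for j in range(3)])    # (lags, 3)
cross = np.hstack([(L**p) @ lag_basis for p in (1, 2)])     # (days, 6)

X = sm.add_constant(cross)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())  # scale="X2": quasi-Poisson dispersion estimate
```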
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
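The split-sample idea above admits a very small illustration: by intercalary independence, the odd-indexed observations of a Markov process are conditionally independent given the even-indexed ones, so each can be regressed on its two neighbours with classical inference. The sketch below assumes a simulated AR(1) series; it is not the authors' full procedure.

```python
# Two-sided autoregression sketch: regress y_t (t odd) on its
# neighbours (y_{t-1}, y_{t+1}) and apply standard OLS inference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, phi = 501, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

odd = np.arange(1, n - 1, 2)  # odd indices with both neighbours present
X = sm.add_constant(np.column_stack([y[odd - 1], y[odd + 1]]))
res = sm.OLS(y[odd], X).fit()
print(res.summary())  # classical t-tests apply to this conditional model
```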
Abstract:
This paper describes the design and implementation of an agent-based network for the support of collaborative switching tasks within the control room environment of the National Grid Company plc. This work includes aspects from several research disciplines, including operational analysis, human-computer interaction, finite state modelling techniques, intelligent agents and computer-supported co-operative work. Techniques from these disciplines have been used in the analysis of collaborative tasks to produce distributed local models for all involved users. These models have been used as the basis for the production of local finite state automata. These automata have then been embedded within an agent network together with behavioural information extracted from the task and user analysis phase. The resulting support system is capable of task and communication management within the transmission despatch environment.
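For concreteness, here is a minimal sketch of the kind of local finite state automaton an agent could embed for a switching task. The states, events, and transitions are illustrative assumptions only, not taken from the paper.

```python
# Toy local automaton for a switching task: invalid (state, event)
# pairs are ignored, mirroring an agent discarding invalid messages.
from dataclasses import dataclass, field

@dataclass
class TaskAutomaton:
    state: str = "idle"
    transitions: dict = field(default_factory=lambda: {
        ("idle", "request_switch"): "awaiting_confirmation",
        ("awaiting_confirmation", "confirm"): "executing",
        ("awaiting_confirmation", "reject"): "idle",
        ("executing", "complete"): "idle",
    })

    def handle(self, event: str) -> str:
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

agent = TaskAutomaton()
for ev in ["request_switch", "confirm", "complete"]:
    print(ev, "->", agent.handle(ev))
```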
Abstract:
The increase in ultraviolet (UV) radiation at the surface, the high incidence of non-melanoma skin cancer (NMSC) on the coast of Northeast Brazil (NEB) and the reduction of total ozone were the motivation for the present study. The overall objective was to identify and understand the variability of the ultraviolet index (UV Index) in the capitals of the east coast of the NEB and to fit stochastic models to UV index time series in order to make predictions (interpolations) and forecasts/projections (extrapolations), followed by trend analysis. The methodology consisted of applying multivariate analysis (principal component analysis and cluster analysis), the Predictive Mean Matching method for filling gaps in the data, autoregressive distributed lag (ADL) models and the Mann-Kendall test. The modelling via ADL consisted of parameter estimation, diagnostics, residual analysis and evaluation of the quality of the predictions and forecasts via mean squared error and the Pearson correlation coefficient. The results indicated that the annual variability of UV in the capital of Rio Grande do Norte (Natal) has a feature in September and October consisting of a stabilization/reduction of the UV index because of the greater annual concentration of total ozone; the increased amount of aerosol during this period contributes to this event with lesser intensity. The application of cluster analysis to the east coast of the NEB showed that this event also occurs in the capitals of Paraíba (João Pessoa) and Pernambuco (Recife). Extreme UV events in the NEB were analysed for the city of Natal and were associated with an absence of cloud cover and total ozone levels below the annual average; they do not occur across the entire region because of the uneven spatial distribution of these variables. The ADL(4, 1) model, fitted with UV index and total ozone data for the period 2001-2012, produced a projection/extrapolation for the next 30 years (2013-2043) indicating, by the end of that period, an increase in the UV index of approximately one unit, should total ozone maintain the downward trend observed in the study period.
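The following is a minimal sketch of an ADL(4, 1) regression of the kind described: the UV index on four of its own lags and one lag of total ozone. The simulated series and coefficients are illustrative assumptions, not the study's data.

```python
# ADL(4, 1) sketch fitted by OLS on simulated placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
ozone = 270 + rng.normal(0, 10, n)   # total ozone (placeholder units)
uv = np.zeros(n)
for t in range(1, n):
    uv[t] = 5 + 0.5 * uv[t - 1] - 0.01 * ozone[t - 1] + rng.normal(0, 0.5)

p, q = 4, 1                           # ADL(p, q) lag orders
rows = list(range(max(p, q), n))
X = np.column_stack(
    [uv[[t - i for t in rows]] for i in range(1, p + 1)]
    + [ozone[[t - j for t in rows]] for j in range(1, q + 1)]
)
res = sm.OLS(uv[rows], sm.add_constant(X)).fit()
print(res.params)  # constant, UV lags 1-4, ozone lag 1
```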
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Distributed computing models typically assume that every process in the system has a distinct identifier (ID) or that each process is programmed differently; such a system is called an eponymous system. In this kind of distributed system, unique IDs help solve problems: they can be incorporated into messages to make them traceable (i.e., to or from which process they are sent), facilitating message transmission; several problems (leader election, consensus, etc.) can be solved without a priori information about network properties if processes have unique IDs; values written to the register of one process will not be overwritten by other processes; and they are useful for breaking symmetry. Hence, eponymous systems have influenced the distributed computing community significantly, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., sensors). Hence homonymous systems appeared: a system in which some processes share the same ID and are programmed identically is called a homonymous system. Furthermore, a system in which all processes share the same ID, or have no ID at all, is called an anonymous system. In homonymous or anonymous distributed systems, the symmetry problem (i.e., how to determine which process sent a message) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques, etc.) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions. In this thesis, we focus mainly on the design of consensus, set agreement, and broadcast algorithms in anonymous and homonymous distributed systems. First, the fault-tolerant broadcast abstraction is studied in anonymous systems with reliable or fair lossy communication channels, separately. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both of them, together with the previously proposed failure detector ψ, are implemented and used to enrich the system model in order to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the failure detector class AΩ′ is strictly weaker than AΩ and that AΩ′ is implementable. The first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, where a majority of processes do not crash, is given. Finally, a generalization of consensus, k-set agreement, is studied, together with the weakest failure detector L needed to solve it in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities), and without complete initial knowledge of the membership.
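One of the symmetry-breaking methods the thesis names is a random function. The sketch below is a toy simulation of that idea among identical, ID-less processes: each draws a random value and only maximal draws survive each round. The round structure and election framing are illustrative assumptions, not the thesis's algorithms.

```python
# Randomized symmetry breaking among anonymous (identical) processes.
import random

def break_symmetry(n_processes: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    # Indices exist only in the simulation; the processes themselves
    # are anonymous and run identical code.
    candidates = list(range(n_processes))
    while len(candidates) > 1:
        draws = {p: rng.random() for p in candidates}
        best = max(draws.values())
        candidates = [p for p, d in draws.items() if d == best]
    return candidates[0]

print("surviving process:", break_symmetry(8))
```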
Abstract:
It is well known that meteorological conditions influence human comfort and health. Southern European countries, including Portugal, show the highest mortality rates during winter, but the effects of extreme cold temperatures in Portugal have never been estimated. The objective of this study was to estimate the effect of extreme cold temperatures on the risk of death in Lisbon and Oporto, aiming to produce scientific evidence for the development of a real-time health warning system. Poisson regression models combined with distributed lag non-linear models were applied to assess the exposure-response relation and lag patterns of the association between minimum temperature and all-cause mortality, and between minimum temperature and mortality from circulatory and respiratory system diseases, from 1992 to 2012, stratified by age, for the period from November to March. The analysis was adjusted for overdispersion and population size and for the confounding effect of influenza epidemics, and controlled for long-term trend, seasonality and day of the week. Results showed that the effect of cold temperatures on mortality was not immediate, presenting a 1-2-day delay, reaching a maximum increased risk of death after 6-7 days and lasting up to 20-28 days. The overall effect was generally higher and more persistent in Lisbon than in Oporto, particularly for circulatory and respiratory mortality and for the elderly. Exposure to cold temperatures is an important public health problem for a relevant part of the Portuguese population, in particular in Lisbon.
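A minimal sketch of the confounder adjustment described above follows: a quasi-Poisson regression of daily deaths on lagged minimum temperature, with a spline of time for trend/seasonality and day-of-week indicators. The data and the single 7-day lag are illustrative assumptions (the study used full distributed lag non-linear models).

```python
# Poisson time-series regression with spline trend and day-of-week
# control, on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3 * 365
df = pd.DataFrame({
    "time": np.arange(n),
    "dow": np.arange(n) % 7,
    "tmin": 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 365)
             + rng.normal(0, 2, n),
})
df["tmin_lag7"] = df["tmin"].shift(7)   # delayed cold effect, ~1 week
df["deaths"] = rng.poisson(60, n)       # placeholder daily counts
df = df.dropna()

model = smf.glm(
    "deaths ~ cr(time, df=8) + C(dow) + tmin_lag7",  # cr: cubic spline
    data=df,
    family=sm.families.Poisson(),
)
print(model.fit(scale="X2").summary())  # quasi-Poisson dispersion
```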
Abstract:
This paper presents a review of modelling and control of biological nutrient removal (BNR) activated sludge processes for wastewater treatment using distributed parameter models described by partial differential equations (PDEs). Numerical methods for solving the BNR-activated sludge process dynamics are reviewed, including the method of lines, global orthogonal collocation and orthogonal collocation on finite elements. Fundamental techniques and conceptual advances of the distributed parameter approach to the dynamics and control of activated sludge processes are briefly described. A critical analysis of the advantages of the distributed parameter approach over the conventional modelling strategy shows that the activated sludge process is more adequately described by the former, and the method is recommended for application in the wastewater industry.
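To illustrate the method of lines named above, here is a minimal sketch on a generic 1-D advection-diffusion equation standing in for a distributed parameter process model: space is discretised, and the resulting ODE system is integrated in time. The equation, coefficients, and boundary conditions are illustrative assumptions.

```python
# Method of lines: discretise du/dt = D u_xx - v u_x in space,
# integrate the resulting ODEs with a stiff solver.
import numpy as np
from scipy.integrate import solve_ivp

D, v, length, nx = 0.01, 0.1, 1.0, 50
x = np.linspace(0, length, nx)
dx = x[1] - x[0]

def rhs(t, c):
    dc = np.zeros_like(c)
    # Central difference for diffusion, upwind for advection (interior).
    dc[1:-1] = (D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                - v * (c[1:-1] - c[:-2]) / dx)
    dc[0] = 0.0        # fixed inlet value (Dirichlet)
    dc[-1] = dc[-2]    # crude zero-gradient outlet condition
    return dc

c0 = np.exp(-((x - 0.2) ** 2) / 0.005)   # initial concentration pulse
sol = solve_ivp(rhs, (0.0, 5.0), c0, method="BDF")
print(sol.y[:, -1].round(3))             # spatial profile at t = 5
```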
Abstract:
Background: People with less education in Europe, Asia, and the United States are at higher risk of mortality associated with daily and longer-term air pollution exposure. We examined whether educational level modified associations between mortality and ambient particulate pollution (PM(10)) in Latin America, using several timescales. Methods: The study population included people who died during 1998-2002 in Mexico City, Mexico; Santiago, Chile; and Sao Paulo, Brazil. We fit city-specific robust Poisson regressions to daily deaths for nonexternal-cause mortality, and then stratified by age, sex, and educational attainment among adults older than age 21 years (none, some primary, some secondary, and high school degree or more). Predictor variables included a natural spline for temporal trend, linear PM(10) and apparent temperature at matching lags, and day-of-week indicators. We evaluated PM(10) for lags 0 and 1 day, and fit an unconstrained distributed lag model for cumulative 6-day effects. Results: The effects of a 10-μg/m(3) increment in lag 1 PM(10) on all nonexternal-cause adult mortality were, for Mexico City, 0.39% (95% confidence interval = 0.13%-0.65%); Sao Paulo, 1.04% (0.71%-1.38%); and Santiago, 0.61% (0.40%-0.83%). We found cumulative 6-day effects for adult mortality in Santiago (0.86% [0.48%-1.23%]) and Sao Paulo (1.38% [0.85%-1.91%]), but no consistent gradients by educational status. Conclusions: PM(10) had important short- and intermediate-term effects on mortality in these Latin American cities, but associations did not differ consistently by educational level.
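A minimal sketch of the unconstrained distributed lag model described above: daily deaths are regressed on PM(10) at lags 0-5, and the cumulative 6-day effect is the sum of the lag coefficients. All series below are simulated placeholders.

```python
# Unconstrained distributed lag Poisson regression with cumulative
# 6-day effect of a 10-unit PM10 increment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, n_lags = 1500, 6
pm10 = np.clip(50 + rng.normal(0, 15, n), 1, None)
deaths = rng.poisson(100, n)          # placeholder daily counts

# One column per lag 0..5; drop rows with wrapped-around values.
Lmat = np.column_stack([np.roll(pm10, l) for l in range(n_lags)])[n_lags:]
y = deaths[n_lags:]

fit = sm.GLM(y, sm.add_constant(Lmat),
             family=sm.families.Poisson()).fit(scale="X2")
beta = fit.params[1:]                 # one coefficient per lag
cum_pct = (np.exp(10 * beta.sum()) - 1) * 100
print(f"cumulative 6-day effect per 10-unit PM10: {cum_pct:.2f}%")
```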
Abstract:
This work studied the structure-hepatic disposition relationships for cationic drugs of varying lipophilicity using a single-pass, in situ rat liver preparation. The lipophilicity of the cationic drugs studied decreases in the following order: diltiazem > propranolol > labetalol > prazosin > antipyrine > atenolol. Parameters characterizing the hepatic distribution and elimination kinetics of the drugs were estimated using the multiple indicator dilution method. The kinetic model used to describe drug transport (the two-phase stochastic model) integrated cytoplasmic binding kinetics and belongs to the class of barrier-limited and space-distributed liver models. The hepatic extraction ratio (E) (0.30-0.92) increased with lipophilicity. The intracellular binding rate constant (k(on)) and the equilibrium amount ratios characterizing the slowly and rapidly equilibrating binding sites (K(S) and K(R)) increase with the lipophilicity of the drug (k(on): 0.05-0.35 s(-1); K(S): 0.61-16.67; K(R): 0.36-0.95), whereas the intracellular unbinding rate constant (k(off)) decreases with the lipophilicity of the drug (0.081-0.021 s(-1)). The ratio of the influx (k(in)) and efflux (k(out)) rate constants, k(in)/k(out), increases with increasing pK(a) value of the drug [from 1.72 for antipyrine (pK(a) = 1.45) to 9.76 for propranolol (pK(a) = 9.45)], the differences in k(in)/k(out) for the different drugs mainly arising from ion trapping in the mitochondria and lysosomes. The intrinsic elimination clearance (CL(int)), permeation clearance (CL(pT)), and permeability-surface area product (PS) all increase with the lipophilicity of the drug [CL(int) (ml·min(-1)·g(-1) of liver): 10.08-67.41; CL(pT) (ml·min(-1)·g(-1) of liver): 10.80-5.35; PS (ml·min(-1)·g(-1) of liver): 14.59-90.54]. It is concluded that cationic drug kinetics in the liver can be modeled using models that integrate the presence of cytoplasmic binding, a hepatocyte barrier, and a vascular transit density function.
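The following is a highly simplified compartmental caricature of the barrier-limited picture above: vascular drug exchanges with an intracellular pool (k_in/k_out), which binds reversibly (k_on/k_off) and is eliminated. The authors' actual model is space-distributed and stochastic; the structure and rate constants here are illustrative assumptions only.

```python
# Toy three-pool ODE: vascular, intracellular free, and bound drug.
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_out, k_on, k_off, k_el = 0.5, 0.1, 0.2, 0.05, 0.3  # s^-1, assumed

def rhs(t, a):
    a_v, a_c, a_b = a   # vascular, intracellular free, bound amounts
    return [
        -k_in * a_v + k_out * a_c,
        k_in * a_v - (k_out + k_on + k_el) * a_c + k_off * a_b,
        k_on * a_c - k_off * a_b,
    ]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0])
eliminated = 1 - sum(sol.y[:, -1])   # fraction eliminated by t = 60 s
print(f"fraction eliminated by 60 s: {eliminated:.2f}")
```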
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The catastrophic disruption in the USA financial system in the wake of the financial crisis prompted the Federal Reserve to launch a Quantitative Easing (QE) programme in late 2008. In line with Pesaran and Smith (2014), I use a policy effectiveness test to assess whether this massive asset purchase programme was effective in stimulating economic activity in the USA. Specifically, I employ an Autoregressive Distributed Lag (ARDL) model in order to obtain a counterfactual for the USA real GDP growth rate. Using data from 1983Q1 to 2009Q4, the results show that the beneficial effects of QE appear to be weak and rather short-lived. The null hypothesis of policy ineffectiveness is not rejected, which suggests that QE did not have a meaningful impact on output growth.
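A minimal sketch of this counterfactual idea follows: fit an ARDL-type regression on pre-intervention data, iterate it forward from the policy date, and compare the realised path with the model's forecast. The series, lag orders, and break date are illustrative assumptions, not the paper's data or specification.

```python
# ARDL(1,1) counterfactual sketch on simulated quarterly data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, break_t = 108, 100                 # policy intervention at t = 100
x = rng.normal(0, 1, n)               # an exogenous regressor
g = np.zeros(n)                       # placeholder GDP growth series
for t in range(1, n):
    g[t] = 0.5 + 0.4 * g[t - 1] + 0.3 * x[t - 1] + rng.normal(0, 0.5)

def design(rows):
    return sm.add_constant(np.column_stack(
        [g[[t - 1 for t in rows]], x[[t - 1 for t in rows]]]))

pre = list(range(1, break_t))
fit = sm.OLS(g[pre], design(pre)).fit()

# Counterfactual: iterate the fitted ARDL forward from the break date.
c, b_g, b_x = fit.params
cf = list(g[:break_t])
for t in range(break_t, n):
    cf.append(c + b_g * cf[-1] + b_x * x[t - 1])

gap = g[break_t:] - np.array(cf[break_t:])
print("mean post-break gap (actual - counterfactual):", gap.mean().round(3))
```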
Abstract:
In pediatric echocardiography, cardiac dimensions are often normalized for weight, height, or body surface area (BSA). The combined influence of height and weight on cardiac size is complex and likely varies with age. We hypothesized that increasing weight for height, as represented by body mass index (BMI) adjusted for age, is poorly accounted for in Z scores normalized for weight, height, or BSA. We aimed to evaluate whether a bias related to BMI was introduced when proximal aorta diameter Z scores are derived from bivariate models (only one normalizing variable), and whether such a bias was reduced when multivariable models are used. We analyzed 1,422 echocardiograms read as normal in children ≤18 years. We computed Z scores of the proximal aorta using allometric, polynomial, and multivariable models with four body size variables. We then assessed the level of residual association of Z scores and BMI adjusted for age and sex. In children ≥6 years, we found a significant residual linear association with BMI-for-age and Z scores for most regression models. Only a multivariable model including weight and height as independent predictors produced a Z score free of linear association with BMI. We concluded that a bias related to BMI was present in Z scores of proximal aorta diameter when normalization was done using bivariate models, regardless of the regression model or the normalizing variable. The use of multivariable models with weight and height as independent predictors should be explored to reduce this potential pitfall when pediatric echocardiography reference values are evaluated.
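As a minimal sketch of the normalization check described above, the snippet below computes Z scores from a bivariate allometric model of aortic diameter on BSA and then tests the residual linear association with BMI-for-age. All data and coefficients are simulated placeholders, not the study's echocardiograms.

```python
# Allometric Z scores and residual-association check with BMI-for-age.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
bsa = rng.uniform(0.3, 2.0, n)                          # m^2
bmi_z = rng.normal(0, 1, n)                             # BMI-for-age Z
diam = 10 * bsa**0.5 * np.exp(rng.normal(0, 0.08, n))   # mm, simulated

# Bivariate allometric model: ln(diameter) = a + b * ln(BSA) + error.
fit = sm.OLS(np.log(diam), sm.add_constant(np.log(bsa))).fit()
z = fit.resid / np.sqrt(fit.scale)     # Z score from model residuals

# Is there residual association of Z with BMI-for-age?
check = sm.OLS(z, sm.add_constant(bmi_z)).fit()
print("slope of Z on BMI-for-age:", check.params[1].round(3),
      "p =", check.pvalues[1].round(3))
```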
Abstract:
Flood simulation studies use spatial-temporal rainfall data as input into distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work is focused on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for convective structure characteristics of precipitation systems producing floods in Catalonia (NE Spain). To achieve this purpose, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are introduced into and analysed with a GIS. In a first step, different groups of connected pixels with convective precipitation are identified. Only convective structures with an area greater than 32 km2 are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. Statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation and mean areal precipitation, are the statistical distributions that best fit the observed distributions of these parameters. The statistical descriptors and the probability distribution functions obtained are of direct use as an input in spatial rainfall generators.
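A minimal sketch of the distribution fitting described above: maximum-likelihood fits of a Generalized Pareto distribution to rain-cell areas above the 32 km2 threshold and of a Generalized Extreme Value distribution to perimeters. The samples are simulated placeholders, not the radar-derived data.

```python
# GPD fit for areas above threshold, GEV fit for perimeters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
areas = 32 + stats.genpareto.rvs(0.3, scale=40, size=500,
                                 random_state=rng)   # km^2, > 32 km^2
perims = stats.genextreme.rvs(-0.1, loc=40, scale=10, size=500,
                              random_state=rng)      # km, placeholder

# GPD with location fixed at the 32 km^2 selection threshold.
c_a, loc_a, scale_a = stats.genpareto.fit(areas, floc=32)
c_p, loc_p, scale_p = stats.genextreme.fit(perims)

print(f"GPD (area):      shape={c_a:.2f}, scale={scale_a:.2f}")
print(f"GEV (perimeter): shape={c_p:.2f}, loc={loc_p:.2f}, "
      f"scale={scale_p:.2f}")
```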