914 results for non-linear Phillips curve


Relevance:

100.00%

Publisher:

Abstract:

Carbon fibre reinforced polymer (CFRP) sheets have many outstanding properties, such as high strength, high elastic modulus, light weight and good durability, which make them a suitable alternative to steel in strengthening work. This paper describes the ultimate load-carrying capacity of steel hollow sections at the effective bond length, in terms of their cross-sectional area, and the stress distribution within the bond region for different numbers of CFRP layers. It was found that the ultimate tensile load varied with the size and orientation of the uni-directional CFRP layers. Along with these tests, non-linear finite element analysis was performed to validate the ultimate load-carrying capacity for the different cross-sections. The ultimate loads predicted by the FE analysis were found to be very close to the laboratory test results. The validated model was then used to determine the stress distribution at the bond joint for different CFRP orientations. This research shows the effect of the stress distribution and identifies suitable wrapping layers for strengthening steel hollow sections in tension.

Relevance:

100.00%

Publisher:

Abstract:

We propose a new active noise control (ANC) technique. The technique has a feedback structure to keep its configuration simple in practical implementations. In this approach, the secondary path is modelled online to ensure convergence of the system, since secondary paths in practice are time-varying or non-linear. The proposed method consists of two parts: a noise controller based on a modified FxLMS algorithm, and a new variable step size (VSS) LMS algorithm used to adapt the modelling filter to the secondary path. The proposed algorithm stops injecting the white noise at the optimum point and reactivates the injection during operation, if needed, to maintain system performance. Eliminating continuous injection of the white noise increases the performance of the proposed method significantly and makes it more desirable for practical ANC systems. Computer simulations are presented to show the effectiveness of the proposed method.
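The controller in such a system is typically adapted with a filtered-x LMS (FxLMS) rule, in which the reference signal is filtered through an estimate of the secondary path before entering the coefficient update. The sketch below is a generic FxLMS loop with a fixed secondary-path estimate, intended only to illustrate that update; the path coefficients, filter length and step size are made-up values, and the modified FxLMS controller, feedback structure and online VSS-LMS secondary-path modelling proposed in the abstract are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up plant: primary path P (noise source -> error mic) and secondary path S
# (control loudspeaker -> error mic) as short FIR filters; S_hat is its estimate.
P = np.array([0.8, 0.5, 0.2])
S = np.array([0.6, 0.3])
S_hat = S.copy()                      # assume a perfect offline estimate for this sketch

N, L = 5000, 16                       # number of samples, controller length
x = rng.standard_normal(N)            # reference noise signal
d = np.convolve(x, P)[:N]             # disturbance at the error microphone
w = np.zeros(L)                       # adaptive controller W(z)
mu = 1e-3                             # LMS step size

x_buf = np.zeros(L)                   # newest-first sample buffers
fx_buf = np.zeros(L)
y_buf = np.zeros(len(S))
err = np.zeros(N)

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]

    y = w @ x_buf                                     # controller output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] - S @ y_buf                              # residual picked up at the error mic
    err[n] = e

    fx = S_hat @ x_buf[:len(S_hat)]                   # reference filtered through S_hat
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx

    w += mu * e * fx_buf                              # FxLMS coefficient update

print("mean |e|, first 500 samples:", np.abs(err[:500]).mean().round(3),
      "| last 500 samples:", np.abs(err[-500:]).mean().round(3))
```

In a feedback configuration the reference signal would itself be synthesised from the error signal and the controller output, which is where an accurate online secondary-path model becomes essential.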

Relevance:

100.00%

Publisher:

Abstract:

The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industry, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, or on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time?

To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum, mean and maximum temperature, and relative humidity) and air pollution were acquired for Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature while controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days. Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature.

It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk than time series models that use a single site's temperature or temperature averaged across a network of sites. Daily mortality data were obtained from 163 locations across Brisbane, Australia, from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature and of temperature averaged from 3 monitoring sites. Squared Pearson scaled residuals were used to check model fit. The results show that, even though the spatiotemporal models gave a better model fit than the time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model.

A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean. In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in people aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with mortality, with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature.

I also examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, estimated using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. Years with higher heat-related mortality were often followed by years with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.

In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extremely cold and hot temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a large drop or a large increase, increases the risk of mortality. The effect of high temperature on mortality is highly variable from year to year.
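As a concrete, heavily simplified illustration of two quantities used above, the sketch below computes the between-day temperature change and fits a Poisson time series regression with mean temperature entered at lags 0 to 10 days. The data are simulated, seasonality is controlled only by a crude linear time term, and the raw lag columns stand in for the spline cross-basis of a genuine distributed lag non-linear model (and for the case-crossover matching), so it should be read as a sketch of the ingredients rather than the models used in the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated daily series of deaths and mean temperature (column names are illustrative).
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "deaths": rng.poisson(30, n),
    "tmean": 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n),
})

# Temperature change as defined above: today's mean minus yesterday's mean.
df["tchange"] = df["tmean"].diff()
print(int(df["tchange"].abs().gt(3).sum()), "days with a between-day change of more than 3 degrees")

# A crude distributed-lag design: temperature at lags 0..10 entered as separate columns.
max_lag = 10
for lag in range(max_lag + 1):
    df[f"tmean_lag{lag}"] = df["tmean"].shift(lag)

# Very rough long-term trend control via a linear time term (seasonality splines omitted).
df["time"] = np.arange(n)

model_df = df.dropna()
X = sm.add_constant(model_df[[f"tmean_lag{lag}" for lag in range(max_lag + 1)] + ["time"]])
fit = sm.GLM(model_df["deaths"], X, family=sm.families.Poisson()).fit()
print(fit.params.filter(like="tmean_lag"))   # lag-specific log relative risks per degree
```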

Relevance:

100.00%

Publisher:

Abstract:

Objective: To examine the effects of extremely cold and hot temperatures on ischaemic heart disease (IHD) mortality in five cities (Beijing, Tianjin, Shanghai, Wuhan and Guangzhou) in China, and to examine the time relationships between cold and hot temperatures and IHD mortality in each city. Design: A negative binomial regression model combined with a distributed lag non-linear model was used to examine city-specific temperature effects on IHD mortality up to 20 lag days. A meta-analysis was used to pool the cold and hot effects across the five cities. Patients: 16 559 IHD deaths recorded by a sentinel surveillance system in the five cities during 2004–2008. Results: The relationships between temperature and IHD mortality were non-linear in all five cities. The minimum-mortality temperatures in the northern cities were lower than in the southern cities. In Beijing, Tianjin and Guangzhou the effects of extremely cold temperatures were delayed, while Shanghai and Wuhan had immediate cold effects. The effects of extremely hot temperatures appeared immediately in all cities except Wuhan. Meta-analysis showed that IHD mortality increased by 48% at the 1st percentile of temperature (extreme cold) compared with the 10th percentile, and by 18% at the 99th percentile (extreme heat) compared with the 90th percentile. Conclusions: Both extremely cold and hot temperatures increase IHD mortality in China. Each city has its own pattern of temperature effects on IHD mortality. Policies responding to climate change should take local climate–IHD mortality relationships into account.
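Pooling the city-specific estimates as described is commonly done by inverse-variance weighting. The sketch below shows the fixed-effect version with made-up log relative risks and standard errors for the five cities; it is not the meta-analytic model used in the paper (which may well be random-effects), and the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical city-specific estimates: log relative risks and standard errors for the
# extreme-cold contrast (invented values, not results from the paper).
log_rr = np.array([0.45, 0.38, 0.30, 0.42, 0.35])   # Beijing, Tianjin, Shanghai, Wuhan, Guangzhou
se     = np.array([0.10, 0.12, 0.09, 0.15, 0.11])

# Fixed-effect (inverse-variance) pooling.
w = 1.0 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

rr = np.exp(pooled)
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled RR = {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```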

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Understanding the mechanical properties of tendon is an important step towards improving athletic performance, predicting injury and treating tendinopathies. The speed of sound in a medium is governed by the bulk modulus and density for fluids and isotropic materials. However, tendon, which is a structural composite of fluid and collagen, exhibits some anisotropy, requiring an adjustment for Poisson's ratio. In this paper, these relationships are explored and modelled using data collected in vivo on human Achilles tendon. Estimates for elastic modulus and hysteresis based on speed of sound data are then compared against published values from in vitro mechanical tests. Methods: Clinical ultrasound imaging, inverse dynamics and acoustic transmission techniques were used to determine dimensions, loading conditions and longitudinal speed of sound for the Achilles tendon during a series of isometric plantar flexion exercises against body weight. Upper and lower bounds for speed of sound versus tensile stress in the tendon were then modelled and estimates derived for elastic modulus and hysteresis. Results: Axial speed of sound varied between 1850 and 2090 m·s−1, with a non-linear, asymptotic dependence on the level of tensile stress in the tendon (5–35 MPa). Estimates derived for the elastic modulus ranged between 1 and 2 GPa. Hysteresis, derived from models of the stress–strain relationship, ranged from 3% to 11%. These values agree closely with those previously reported from direct measurements obtained via in vitro mechanical tensile tests on major weight-bearing tendons. Discussion: There is sufficiently good agreement between these indirect (speed of sound derived) and direct (mechanical tensile test derived) measures of tendon mechanical properties to validate the use of this non-invasive acoustic transmission technique. The non-invasive method is suitable for monitoring changes in tendon properties as predictors of athletic performance, injury or therapeutic progression.
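For reference, the textbook relations linking longitudinal wave speed to elastic moduli, from which bounds of the kind described above are usually constructed, are shown below; the specific adjusted forms used in the paper are not reproduced.

```latex
c_{\text{fluid}} = \sqrt{\frac{K}{\rho}}, \qquad
c_{\text{thin rod}} = \sqrt{\frac{E}{\rho}}, \qquad
c_{\text{constrained}} = \sqrt{\frac{E\,(1-\nu)}{\rho\,(1+\nu)(1-2\nu)}}
```

Here K is the bulk modulus, E the elastic (Young's) modulus, ρ the density and ν Poisson's ratio. The thin-rod and constrained forms bracket the behaviour of a quasi-one-dimensional fluid–collagen composite such as tendon, which is consistent with the Poisson's-ratio adjustment mentioned above.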

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the details of an investigation into the flexural and flexural–torsional buckling behaviour of cold-formed structural steel columns with pinned and fixed ends. Current design rules for the member capacities of cold-formed steel columns are based on the same non-dimensional strength curve for both fixed- and pinned-ended columns. This research reviewed the accuracy of the current design rules in AS/NZS 4600 and the North American Specification in determining the member capacities of cold-formed steel columns, using the results of detailed finite element analyses and an experimental study of lipped channel columns. It was found that the current Australian and American design rules accurately predict the member capacities of pin-ended lipped channel columns undergoing flexural and flexural–torsional buckling. However, for fixed-ended columns with warping fixity undergoing flexural–torsional buckling, the current design rules significantly underestimate the column capacities because they disregard the beneficial effect of warping fixity. This paper therefore proposes improved design rules and verifies their accuracy using finite element analyses and test results of cold-formed lipped channel columns of three cross-sections and five different steel grades and thicknesses.
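For context, the non-dimensional strength curve referred to here takes the same form in AS/NZS 4600 and the North American Specification: the nominal stress is a function of the slenderness lambda_c = sqrt(fy/foc), where foc is the least elastic buckling stress (flexural, torsional or flexural–torsional). A minimal sketch of that curve follows; the function and variable names are mine, the example numbers are invented, and code-specific details such as the effective area and the treatment of warping fixity are omitted.

```python
import numpy as np

def nominal_column_stress(fy: float, foc: float) -> float:
    """Non-dimensional column strength curve used by AS/NZS 4600 and the North American
    Specification: nominal stress Fn from the yield stress fy and the elastic buckling
    stress foc (flexural, torsional or flexural-torsional), both in MPa."""
    lam = np.sqrt(fy / foc)            # non-dimensional slenderness
    if lam <= 1.5:
        return (0.658 ** (lam ** 2)) * fy
    return (0.877 / lam ** 2) * fy

# Example: a lipped channel with fy = 450 MPa and an elastic flexural-torsional
# buckling stress of 300 MPa (illustrative numbers only).
fn = nominal_column_stress(450.0, 300.0)
print(f"Fn = {fn:.1f} MPa")            # member capacity = Fn x effective area Ae
```

The improved rules proposed in the paper address the beneficial effect of warping fixity for fixed-ended columns, which this generic curve does not capture.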

Relevance:

100.00%

Publisher:

Abstract:

This study investigates whether and how a firm's ownership and corporate governance affect its timeliness of price discovery, that is, the speed with which value-relevant information is incorporated into the stock price. Using panel data of 1,138 Australian firm-year observations from 2001 to 2008, we predict and find a non-linear relationship between ownership concentration and the timeliness of price discovery. We test the identity of the largest shareholder and find that only firms with a family as the largest shareholder exhibit faster price discovery. There is no evidence that the presence of a second-largest shareholder materially affects the timeliness of price discovery. Although we find a positive association between corporate governance quality and the timeliness of price discovery, as expected, there is no interaction effect between the largest shareholding and corporate governance in relation to the timeliness of price discovery. Further tests show no evidence of severe endogeneity problems in our study.

Relevance:

100.00%

Publisher:

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuited to traditional cryptographic hashing methods. Traditional hash functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance compared with existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
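The pipeline the abstract describes (feature extraction, randomization, quantization, binary encoding) can be illustrated with a deliberately simple baseline: block-mean features, a key-dependent linear random projection, and a single median threshold. This is the kind of linear-randomization baseline the dissertation argues against on security grounds, not the proposed HOS/Radon method; all names and parameters below are illustrative.

```python
import numpy as np

def toy_robust_hash(image: np.ndarray, n_bits: int = 64, seed: int = 42) -> np.ndarray:
    """A minimal robust-hash pipeline: features -> randomization -> quantization -> bits.
    Generic illustrative baseline only (linear random projection, median threshold)."""
    rng = np.random.default_rng(seed)

    # 1. Feature extraction: coarse 8x8 block means are robust to small perturbations.
    h, w = image.shape
    blocks = image[: h // 8 * 8, : w // 8 * 8].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    features = blocks.ravel().astype(float)

    # 2. Randomization: key-dependent random projection (compressive, linear).
    proj = rng.standard_normal((n_bits, features.size))
    randomized = proj @ features

    # 3-4. Quantization and binary encoding: threshold at the median.
    return (randomized > np.median(randomized)).astype(np.uint8)

# Robustness check: a small amount of noise should leave most bits unchanged.
img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
h1 = toy_robust_hash(img)
h2 = toy_robust_hash(img + np.random.default_rng(1).normal(0, 2, img.shape))
print("Hamming distance:", int(np.sum(h1 != h2)), "of", h1.size)
```

Because the features are coarse block means, small perturbations move few projections across the single threshold, which is the robustness property at stake; the dissertation's point is that how such thresholds are chosen, and the information they leak, matters for both accuracy and security.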

Relevance:

100.00%

Publisher:

Abstract:

Despite its potential to contribute to multiple sustainable policy objectives, urban transit is generally not widely used by the public, in terms of market share, compared with automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and user perception and behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, who can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus 412, and the linear CityCat fast ferry service on the Brisbane River. The population of interest was set as attendees of UQ who travelled from the CBD, or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their behaviour in using public transit in the study area. The first wave collected behaviour and attitude data on respondents' daily transit usage and their direct ratings of the importance of route-level transit quality-of-service factors. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings on service quality variables regarding transit route preference, to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected in the second wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using the route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, particularly the in-vehicle time equivalents of access and egress times and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.

These importance ratios were then applied back to the quality perceptions collected as revealed preference (RP) data to compare satisfaction levels between service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This motivated further investigation and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, the analysis of passengers' risk aversion to missing a desired service run in their access arrival time choice, and extensions of the utility function specification for modelling the passenger access arrival distribution, using complicated expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk-aversion attitudes. Discussion of this research's contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
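A multinomial logit route-choice model of the kind estimated here assigns each route a systematic utility that is linear in its service attributes and converts utilities to choice probabilities with a softmax. The sketch below uses invented attributes and taste parameters for the three corridor options purely to show the mechanics, including how a ratio of time coefficients yields an "equivalent in-vehicle time"; it does not use the estimates reported in the thesis.

```python
import numpy as np

routes = ["busway 109", "on-street bus 412", "CityCat ferry"]

# Attributes per route: in-vehicle time, access+egress time (minutes), seat availability (0/1).
# All values are illustrative.
X = np.array([
    [18.0,  8.0, 1.0],
    [25.0,  5.0, 0.0],
    [22.0, 12.0, 1.0],
])

# Hypothetical taste parameters: time coefficients negative (disutility), seating positive.
beta = np.array([-0.10, -0.15, 0.60])

v = X @ beta                        # systematic utilities
p = np.exp(v - v.max())             # softmax with an overflow guard
p /= p.sum()
for route, prob in zip(routes, p):
    print(f"{route:>18s}: {prob:.2f}")

# Ratio of coefficients: minutes of in-vehicle time equivalent to one access/egress minute.
print("access/egress equivalent in-vehicle minutes per minute:", beta[1] / beta[0])
```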

Relevance:

100.00%

Publisher:

Abstract:

Reliability analysis is crucial to reducing unexpected downtime, severe failures and the pressure on ever-tightening maintenance budgets for engineering assets. Hazard-based reliability methods are of particular interest, as the hazard reflects the current health status of engineering assets and their imminent failure risk. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the assumed form of the covariate effects on the hazard holds. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations imposed by these two assumptions in statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies for multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates were studied to investigate their performance and effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to include handling of incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are therefore unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values in future running steps of the asset constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted and demonstrates that the new multi-step reliability analysis procedure is able to generate more accurate analysis results.
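The core idea of a neural-network hazard model, learning the hazard as a non-linear function of age and condition covariates without fixing a baseline distribution or a proportional covariate effect, can be sketched with an off-the-shelf multilayer perceptron. The example below uses simulated data and a stand-in regression target (hazards are not directly observable in practice), so it illustrates the idea only and is not the NNHM formulation, training scheme or incomplete-covariate handling developed in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulated asset data: operating age and one condition covariate (e.g. a vibration level).
rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(0, 10, n)                       # years in service
vib = rng.uniform(0, 5, n) + 0.3 * age            # condition covariate drifting with age
true_hazard = 0.02 * np.exp(0.25 * age) * (1 + 0.4 * vib**2 / 25)

X = np.column_stack([age, vib])
y = true_hazard * np.exp(rng.normal(0, 0.1, n))   # noisy stand-in hazard "observations"

# Fit a small MLP on the log-hazard so predictions stay positive after exponentiation.
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
nn.fit(X, np.log(y))

h_new = np.exp(nn.predict(np.array([[6.0, 3.5]])))[0]
print(f"predicted hazard at age 6 y, vibration 3.5: {h_new:.4f} per year")
```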

Relevance:

100.00%

Publisher:

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To examine the effect of extreme temperatures on emergency department admissions (EDAs) for childhood asthma. Methods: An ecological design was used. A Poisson regression model combined with a distributed lag non-linear model was used to quantify the effect of temperature on EDAs for asthma among children aged 0–14 years in Brisbane, Australia, during January 2003–December 2009, while controlling for air pollution, relative humidity, day of the week, season and long-term trends. The model residuals were checked to identify whether there was an added effect due to heat waves or cold spells. Results: There were 13 324 EDAs for childhood asthma during the study period. Both hot and cold temperatures were associated with increases in EDAs for childhood asthma, and in both cases the effects appeared to be acute. An added effect of heat waves on EDAs for childhood asthma was observed, but no added effect of cold spells was found. Male children and children aged 0–4 years were most vulnerable to heat effects, while children aged 10–14 years were most vulnerable to cold effects. Conclusions: Both hot and cold temperatures appear to affect EDAs for childhood asthma. As climate change continues, children aged 0–4 years are at particular risk of asthma.

Relevance:

100.00%

Publisher:

Abstract:

Osteocytes are the most abundant cells in human bone tissue. Due to their unique morphology and location, osteocytes are thought to act as regulators of the bone remodelling process, and are believed to play an important role in the bone mass loss experienced by astronauts after long-term space missions. There is increasing evidence that an osteocyte's functions are strongly affected by its morphology. However, changes in osteocyte morphology under altered gravity are still not well documented. Several in vitro studies have recently investigated the morphological response of osteocytes to microgravity, in which osteocytes were cultured on a two-dimensional flat surface for at least 24 hours before the microgravity experiments. Morphology changes in microgravity were then studied by comparing cell area with that of 1g control cells. However, osteocytes found in vivo have a more three-dimensional morphology, and both the cell body and the dendritic processes are sensitive to mechanical loading. Round osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat and spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to mechanical stimuli, and the lack of knowledge of osteocyte morphological characteristics in culture may lead to subjective and incomplete conclusions about how altered gravity affects osteocyte morphology. In this work, empirical models were developed to quantitatively predict the changes in morphology of an osteocyte cell line (MLO-Y4) in culture, and the response of relatively round osteocytes to hyper-gravity stimulation was also investigated. The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features (aspect ratio, circularity and solidity) using widely accepted image analysis software (ImageJ). MLO-Y4 cells were cultured at low density (5×10³ per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the image analysis, empirical models were developed using non-linear regression. The developed empirical models accurately predict the morphology of MLO-Y4 cells for different culture times and can therefore be used as a reference for analysing MLO-Y4 cell morphology changes in various biological/mechanical studies. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment was investigated using a centrifuge. After 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 minutes. Changes in the morphology of MLO-Y4 cells were quantitatively analysed by measuring the average cell area and the dimensionless shape factors (aspect ratio, solidity and circularity). In this study, no significant morphology changes were detected in MLO-Y4 cells under the hyper-gravity environment (20g for 30 minutes) compared with 1g control cells.
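The three dimensionless shape features named above have standard ImageJ-style definitions: aspect ratio is the ratio of the fitted ellipse's major to minor axis, circularity is 4*pi*area/perimeter^2, and solidity is the ratio of the cell area to its convex-hull area. A small sketch computing them from a binary cell mask follows; the synthetic elliptical mask and the use of scikit-image are my choices for illustration, not the workflow used in the study.

```python
import numpy as np
from skimage import draw, measure

# Synthetic elliptical "cell" standing in for a segmented MLO-Y4 cell mask.
mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = draw.ellipse(100, 100, 30, 60)
mask[rr, cc] = 1

region = measure.regionprops(measure.label(mask))[0]

aspect_ratio = region.major_axis_length / region.minor_axis_length   # fitted-ellipse axes
circularity = 4 * np.pi * region.area / region.perimeter ** 2        # 1.0 for a perfect circle
solidity = region.solidity                                            # area / convex-hull area

print(f"area={region.area}, aspect ratio={aspect_ratio:.2f}, "
      f"circularity={circularity:.2f}, solidity={solidity:.2f}")
```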

Relevance:

100.00%

Publisher:

Abstract:

This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk components of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
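A penalised spline of the kind these models build on can be sketched as a flexible basis plus a roughness penalty on its coefficients. The example below fits a truncated-line basis with a ridge penalty to a synthetic diurnal PNC series; the data, knots and smoothing parameter are invented, and the multivariate splines, random-walk terms and autoregressive errors of the thesis models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 24, 300))                     # hour of day
pnc = (8 + 4 * np.exp(-((t - 8) ** 2) / 2)               # synthetic PNC with morning
       + 3 * np.exp(-((t - 17) ** 2) / 3)                # and evening rush-hour peaks
       + rng.normal(0, 0.8, t.size))

# Truncated-line spline basis: [1, t, (t - k)+ for each knot k].
knots = np.linspace(1, 23, 20)
X = np.column_stack([np.ones_like(t), t] +
                    [np.clip(t - k, 0, None) for k in knots])

# Ridge (roughness) penalty on the knot coefficients only; lam controls smoothness.
lam = 10.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ pnc)

fitted = X @ beta
print("residual std:", np.std(pnc - fitted).round(3))
```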

Relevance:

100.00%

Publisher:

Abstract:

As the Earth's climate changes rapidly, the impact of ambient temperature on health outcomes has attracted increasing attention in recent times. A considerable number of excess deaths have been reported as a result of exposure to hot and cold ambient temperatures. However, relatively little research has been conducted on the relationship between temperature and morbidity. The aim of this study was to characterise the relationship between both hot and cold temperatures and emergency hospital admissions in Brisbane, Australia, and to examine whether the relationship varied by age and socioeconomic factors. It also aimed to explore the lag structure of the temperature–morbidity association for respiratory causes, and to estimate the magnitude of emergency hospital admissions for cardiovascular diseases attributable to hot and cold temperatures, given the large contribution of these disease groups to total emergency hospital admissions. A time series study design was applied using routinely collected data on daily emergency hospital admissions, weather and air pollution in Brisbane during 1996–2005. A Poisson regression model with a distributed lag non-linear structure was adopted to assess the impact of temperature on emergency hospital admissions after adjustment for confounding factors. Both hot and cold effects were found, with a higher risk from hot temperatures than from cold temperatures. Increases in mean temperature above 24.2 °C were associated with increased morbidity, with the largest effect among the elderly (≥ 75 years). The magnitude of the hot-temperature risk estimates varied by age and socioeconomic factors: high population density, low household income and unemployment appeared to modify the temperature–morbidity relationship. There were different lag structures for hot and cold temperatures, with an acute hot effect within 3 days of exposure and a cold effect on respiratory diseases lagged by about 2 weeks. A strong harvesting effect after 3 days was evident for respiratory diseases. People suffering from cardiovascular diseases were found to be more vulnerable to hot temperatures than to cold temperatures. However, more admissions for cardiovascular diseases were attributable to cold temperatures than to hot temperatures in Brisbane. This study contributes to the knowledge base on the association between temperature and morbidity, which is vitally important in the context of ongoing climate change. The findings may provide useful information for the development and implementation of public health policy and strategic initiatives designed to reduce and prevent the burden of disease due to the impact of climate change.
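The attributable burden referred to above is usually computed from the attributable fraction among the exposed, AF = (RR - 1)/RR, applied to the admissions observed on exposed days. The sketch below shows that arithmetic with an invented relative risk and invented daily counts; it is not the thesis's estimates or its distributed-lag attribution method.

```python
import numpy as np

rr_hot = 1.12                     # hypothetical relative risk on a hot day vs. the reference
admissions_hot_days = np.array([180, 195, 210, 175, 188])   # admissions on five hot days (made up)

af = (rr_hot - 1) / rr_hot        # fraction of admissions on exposed days attributable to heat
attributable = af * admissions_hot_days
print(f"AF = {af:.3f}; attributable admissions over these days = {attributable.sum():.1f}")
```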