946 results for Unobserved-component model


Relevance:

90.00%

Publisher:

Abstract:

An international standard, ISO/DP 9459-4, has been proposed to establish a uniform standard of quality for small, factory-made solar heating systems. In this proposal, system components are tested separately and total system performance is calculated using system simulations based on component model parameter values validated using the results from the component tests. Another approach is to test the whole system in operation under representative conditions, where the results can be used as a measure of the general system performance. The advantage of system testing of this form is that it is not dependent on simulations and the possible inaccuracies of the models. Its disadvantage is that it is restricted to the boundary conditions of the test. Component testing combined with system simulation is flexible, but requires an accurate and reliable simulation model. The heat store is a key component concerning system performance. This work therefore focuses on the storage system, consisting of the store, electrical auxiliary heater, heat exchangers and tempering valve. Four different storage system configurations with a volume of 750 litres were tested in an indoor system test using a six-day test sequence. A store component test and system simulation was carried out on one of the four configurations, applying the proposed standard for stores, ISO/DP 9459-4A. Three newly developed test sequences for internal load-side heat exchangers, not included in the proposed ISO standard, were also carried out. The MULTIPORT store model was used for this work. This paper discusses the results of the indoor system test, the store component test, the validation of the store model parameter values and the system simulations.
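A component-test-plus-simulation workflow of this kind can be illustrated with a single-node storage tank energy balance stepped over a multi-day sequence. The sketch below is a minimal, hypothetical illustration (it is not the MULTIPORT model or the ISO test sequence): the tank volume, UA value, heater power and charge/load profiles are invented placeholders.

```python
import numpy as np

# Minimal one-node store energy balance stepped over a six-day sequence.
# All parameter values are illustrative placeholders, not ISO/DP 9459-4 data.
RHO_CP = 4.19e6      # J/(m^3 K), volumetric heat capacity of water
V = 0.75             # m^3 store volume (750 litres)
UA = 2.5             # W/K standing heat-loss coefficient
T_AMB = 20.0         # deg C ambient temperature
P_AUX = 3000.0       # W auxiliary electric heater
T_SET = 60.0         # deg C auxiliary thermostat set point
DT = 600.0           # s time step

def simulate(days=6, t_start=45.0):
    steps = int(days * 24 * 3600 / DT)
    t_store = t_start
    history = []
    for k in range(steps):
        hour = (k * DT / 3600.0) % 24
        q_charge = 2000.0 if 10 <= hour < 16 else 0.0                    # W, daytime charge (placeholder)
        q_load = 4000.0 if (7 <= hour < 8 or 19 <= hour < 20) else 0.0   # W, draw-offs (placeholder)
        q_aux = P_AUX if t_store < T_SET else 0.0                        # thermostat-controlled auxiliary
        q_loss = UA * (t_store - T_AMB)
        t_store += (q_charge + q_aux - q_load - q_loss) * DT / (RHO_CP * V)
        history.append(t_store)
    return np.array(history)

temps = simulate()
print(f"store temperature over 6 days: {temps.min():.1f} to {temps.max():.1f} deg C")
```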

Relevance:

90.00%

Publisher:

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and the inclusion of component flexibility, is developed: both are necessary tasks if one wants to capture dynamic effects which arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented, in order to properly capture the dynamic effects of the main connections in the system: angular contact ball bearings are modelled according to a five-DOF nonlinear scheme in order to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model is implemented to provide improved prediction of operation at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations which are suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques. The advantages over the conventional frequency-based truncation approach are discussed.
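As an illustration of the Craig-Bampton step described above, the following is a minimal sketch (numpy/scipy, with a made-up stiffness/mass partitioning) of how a reduced model is assembled from static constraint modes plus a truncated set of fixed-interface normal modes. The matrices and the number of retained modes are placeholders, not the thesis data, and the Effective Interface Mass selection step is not shown.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) onto boundary DOFs plus n_modes fixed-interface modes."""
    ndof = K.shape[0]
    interior = np.setdiff1d(np.arange(ndof), boundary)
    Kib = K[np.ix_(interior, boundary)]
    Kii = K[np.ix_(interior, interior)]
    Mii = M[np.ix_(interior, interior)]

    # Static constraint modes: interior response to unit boundary displacements.
    phi_c = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes (boundary clamped); keep the lowest n_modes.
    w2, phi = eigh(Kii, Mii)
    phi_n = phi[:, :n_modes]

    # Craig-Bampton transformation: u = T @ [boundary displacements, modal coordinates].
    nb = len(boundary)
    T = np.zeros((ndof, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = phi_c
    T[np.ix_(interior, np.arange(nb, nb + n_modes))] = phi_n

    return T.T @ K @ T, T.T @ M @ T, T

# Toy 6-DOF chain model with the two end DOFs as the interface (placeholder data).
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr, T = craig_bampton(K, M, boundary=np.array([0, n - 1]), n_modes=2)
print(Kr.shape)   # (4, 4): 2 boundary DOFs + 2 retained fixed-interface modes
```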

Relevance:

90.00%

Publisher:

Abstract:

Road traffic accidents are a highly relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied both in the academic literature and by public administrations to understand this very complex phenomenon. This thesis is devoted to the analysis of macroscopic models for road accidents in Spain. Its objectives fall into two blocks: a. To achieve a better understanding of the road accident phenomenon through the application and comparison of two of the most frequently used macroscopic models, DRAG (demand for road use, accidents and their gravity) and UCM (unobserved components model), applied to van-involved accident data in Spain for the period 2000-2009. The analyses were carried out within the frequentist framework using the TRIO, SAS and TRAMO/SEATS software. b. Model application and the selection of the most relevant input variables are active research topics, and this thesis develops and applies a methodology that aims to improve, through theoretical and practical tools, the understanding of model selection and comparison for macroscopic models. Methodologies were developed both for model selection and for model comparison: the selection methodology was applied to fatal accidents on the road network in 2000-2011, and the comparison methodology to the frequency and severity of van-involved accidents in 2000-2009.
These developments yield the following contributions: a. Insight into the models has been gained through interpretation of the effect of the input variables on the response and of the prediction accuracy of both models; in this process, knowledge of the behaviour of van-involved accidents has been extended. b1. Development of an input variable selection procedure, crucial for an efficient choice of the inputs to explain the occurrence of road accidents. Following the results of a), the procedure uses a DRAG-like model whose parameters are estimated within the Bayesian framework, applied to the fatal accident data for Spain in 2000-2011. This novel procedure was compared and validated against dynamic regression (DR) models, the most common approach for working with stochastic processes, given that the original data have a stochastic trend; the results are comparable, and the proposal optimizes the model selection process at low computational cost. b2. A methodology for the theoretical comparison of the two competing models through the joint application of Monte Carlo simulation, computer experiment design and ANOVA. The models have different structures, which affects the estimation of the effects of the input variables, so the comparison is carried out in terms of these effects. Building on b1), this study examines how a stochastic time trend, which the UCM models explicitly, is captured by the DRAG model, which has no specific trend component; the findings are crucial for deciding whether data with a stochastic component can be estimated directly through DRAG or need an adjustment (typically differencing) prior to estimation. b3. New algorithms were developed to carry out the methodological exercises, implemented in R, WinBUGS and MATLAB.
These objectives and contributions lead to the following findings: 1. The road accident phenomenon has been analysed by means of two macroscopic models; the estimated effects of the influential input variables vary from one model to the other, although prediction accuracy is similar, with a slight superiority of the DRAG methodology. 2. The variable and model selection methodology provides very practical results as far as the explanation of road accidents is concerned, and both prediction accuracy and interpretability are improved by the more efficient input variable and model selection procedure. 3. Insight has been gained into the relationship between the effect estimates of the two competing models, DRAG and UCM; a very relevant issue here is the interpretation of the trend in the two models, from which useful recommendations for analysts in the modelling field have been derived. The results provide a very satisfactory insight into the modelling process and the understanding of both van-involved and total fatal accident behaviour in Spain.
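As a minimal illustration of the UCM side of this comparison, the sketch below fits an unobserved components model (stochastic local linear trend plus a regression term) to synthetic monthly counts using statsmodels. The data, the single covariate and the forecast horizon are invented for illustration and do not reproduce the thesis models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic monthly series with a stochastic trend and one exposure covariate.
rng = np.random.default_rng(0)
n = 144
trend = np.cumsum(rng.normal(0.0, 0.5, n)) + 100.0
exposure = rng.normal(0.0, 1.0, n)                 # e.g. a standardized exposure variable
y = trend + 3.0 * exposure + rng.normal(0.0, 2.0, n)

idx = pd.period_range("2000-01", periods=n, freq="M").to_timestamp()
data = pd.DataFrame({"y": y, "exposure": exposure}, index=idx)

# UCM: stochastic local linear trend plus a regression component.
ucm = sm.tsa.UnobservedComponents(data["y"], level="local linear trend",
                                  exog=data[["exposure"]])
res = ucm.fit(disp=False)
print(res.summary())
print(res.forecast(steps=12, exog=np.zeros((12, 1))))  # 12-month-ahead forecast
```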

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models), for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and Midwives Data Collection (MDC) were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix will affect the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model in terms of modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model, and along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity by an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared a few models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared component model to improve on the estimation of sparse counts through borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criteria (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen, along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. 
women) forecasts, to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed, by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes, and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections of birth defect categories at the SLA level are made.
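The covariate-weighted neighbourhood matrix used for the first objective can be sketched as follows: adjacency weights are scaled by similarity in an areal covariate (maternal age here), then row-standardised for use in a CAR prior. The adjacency structure and covariate values below are made-up placeholders, not the New South Wales data.

```python
import numpy as np

def covariate_weights(adjacency, covariate, scale=1.0):
    """Neighbourhood weights: adjacency scaled by similarity in an areal covariate."""
    diff = np.abs(covariate[:, None] - covariate[None, :])
    W = adjacency * np.exp(-diff / scale)     # similar areas receive larger weights
    row_sums = W.sum(axis=1, keepdims=True)
    return np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

# Toy example: 4 areas on a line, with mean maternal age per area (placeholder values).
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
maternal_age = np.array([27.0, 29.5, 30.0, 34.0])
W = covariate_weights(adjacency, maternal_age, scale=2.0)
print(np.round(W, 3))   # row-standardised weights for a CAR prior
```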

Relevance:

80.00%

Publisher:

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: 1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements, 2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique, 3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral, and 4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
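The two-component dual-energy decomposition that underpins DEXA can be sketched as a 2x2 linear solve: the log-attenuation measured at two energies is expressed in terms of the areal densities of bone mineral and soft tissue. The mass attenuation coefficients below are rough illustrative numbers, not the values used in the thesis.

```python
import numpy as np

# Rough, illustrative mass attenuation coefficients (cm^2/g) at a low and a high
# effective energy; real DEXA systems use calibrated, system-specific values.
MU = np.array([[0.60, 0.25],    # low energy:  [bone mineral, soft tissue]
               [0.30, 0.20]])   # high energy: [bone mineral, soft tissue]

def decompose(I_low, I_high, I0_low, I0_high):
    """Solve for areal densities (g/cm^2) of bone mineral and soft tissue."""
    log_att = np.array([np.log(I0_low / I_low), np.log(I0_high / I_high)])
    return np.linalg.solve(MU, log_att)

# Forward-simulate a pixel with 1.2 g/cm^2 bone mineral and 20 g/cm^2 soft tissue,
# then invert it again to check the round trip.
true_t = np.array([1.2, 20.0])
I0 = np.array([1.0, 1.0])
I = I0 * np.exp(-MU @ true_t)
print(decompose(I[0], I[1], I0[0], I0[1]))   # ~[1.2, 20.0]
```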

Relevance:

80.00%

Publisher:

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would be preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters would have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees are constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
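A minimal sketch of the spectrum kernel idea used in the SVM study: sequences are mapped to k-mer count vectors, the kernel is the inner product of those vectors, and a support vector classifier is trained on the precomputed kernel matrix. The toy sequences, labels and choice of k are invented for illustration and are not the CRP case study data.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

def spectrum_features(seqs, k=3, alphabet="ACGT"):
    """Map each sequence to its k-mer count vector (the spectrum feature map)."""
    kmers = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for j in range(len(s) - k + 1):
            X[row, kmers[s[j:j + k]]] += 1
    return X

# Toy binding-site vs background sequences (placeholders, not real TFBS data).
seqs = ["TGTGATCTAGATCACA", "TGTGACCTAGGTCACA", "ACGTACGTACGTACGT", "GGGGCCCCGGGGCCCC"]
labels = np.array([1, 1, 0, 0])

X = spectrum_features(seqs, k=3)
K = X @ X.T                                  # spectrum kernel = inner product of k-mer counts
clf = SVC(kernel="precomputed").fit(K, labels)

test = spectrum_features(["TGTGATCGAGATCACA"], k=3)
print(clf.predict(test @ X.T))               # kernel between the test and training sequences
```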

Relevance:

80.00%

Publisher:

Abstract:

The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality can be influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigation of four urban residential catchments and computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling outcomes indicate that selecting smaller average recurrence interval (ARI) events with high intensity-short duration as the threshold for the treatment system design is the most feasible since these events cumulatively generate a major portion of the annual pollutant load compared to the other types of rainfall events, despite producing a relatively smaller runoff volume. This implies that designs based on small and more frequent rainfall events rather than larger rainfall events would be appropriate in the context of efficiency in treatment performance, cost-effectiveness and possible savings in land area needed.

Relevance:

80.00%

Publisher:

Abstract:

This study aimed to develop a multi-component model that can be used to maximise indoor environmental quality inside mechanically ventilated office buildings, while minimising energy usage. The integrated model, which was developed and validated from fieldwork data, was employed to assess the potential improvement of indoor air quality and energy saving under different ventilation conditions in typical air-conditioned office buildings in the subtropical city of Brisbane, Australia. When operating the ventilation system under predicted optimal conditions of indoor environmental quality and energy conservation and using outdoor air filtration, average indoor particle number (PN) concentration decreased by as much as 77%, while indoor CO2 concentration and energy consumption were not significantly different compared to the normal summer time operating conditions. Benefits of operating the system with this algorithm were most pronounced during Brisbane's mild winter. In terms of indoor air quality, average indoor PN and CO2 concentrations decreased by 48% and 24%, respectively, while potential energy savings due to free cooling went as high as 108% of the normal winter time operating conditions. The application of such a model to the operation of ventilation systems can help to significantly improve indoor air quality and energy conservation in air-conditioned office buildings.
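The kind of ventilation optimisation described above can be sketched as a constrained search over the outdoor-air fraction: steady-state mass balances give indoor CO2 and particle number concentrations, and the lowest-energy setting that satisfies both limits is chosen. All parameters (flows, generation rates, filter efficiency, limits and the energy proxy) are hypothetical placeholders, not the fitted model from the study.

```python
import numpy as np

# Hypothetical single-zone parameters (not the study's fitted model).
Q_SUPPLY = 2.0          # m^3/s total supply air flow
CO2_OUT = 420.0         # ppm outdoor CO2
CO2_GEN = 2.5e-4        # m^3/s CO2 generated indoors (roughly 50 occupants)
PN_OUT = 8e3            # particles/cm^3 outdoors
PN_GEN = 2e6            # particles/s from indoor sources
FILTER_EFF = 0.8        # outdoor-air filtration efficiency for particles
CO2_LIMIT, PN_LIMIT = 800.0, 4e3

def steady_state(oa_fraction):
    q_oa = oa_fraction * Q_SUPPLY                                     # outdoor-air flow, m^3/s
    co2 = CO2_OUT + 1e6 * CO2_GEN / q_oa                              # ppm, removal by exhaust only
    pn = PN_OUT * (1 - FILTER_EFF) + PN_GEN / (1e6 * q_oa)            # particles/cm^3
    energy = q_oa * 10.0                                              # kW proxy for conditioning outdoor air
    return co2, pn, energy

best = None
for f in np.linspace(0.05, 1.0, 96):
    co2, pn, energy = steady_state(f)
    if co2 <= CO2_LIMIT and pn <= PN_LIMIT and (best is None or energy < best[1]):
        best = (f, energy, co2, pn)

print("lowest-energy feasible outdoor-air fraction:", best)
```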

Relevance:

80.00%

Publisher:

Abstract:

Background: Preventing risk factor exposure is vital to reduce the high burden from lung cancer. The leading risk factor for developing lung cancer is tobacco smoking. In Australia, despite apparent success in reducing smoking prevalence, there is limited information on small area patterns and small area temporal trends. We sought to estimate spatio-temporal patterns for lung cancer risk factors using routinely collected population-based cancer data. Methods: The analysis used a Bayesian shared component spatio-temporal model, with male and female lung cancer included separately. The shared component reflected exposure to lung cancer risk factors, and was modelled over 477 statistical local areas (SLAs) and 15 years in Queensland, Australia. Analyses were also run adjusting for area-level socioeconomic disadvantage, Indigenous population composition, or remoteness. Results: Strong spatial patterns were observed in the underlying risk factor exposure for both males (median relative risk (RR) across SLAs, compared to the Queensland average, ranged from 0.48 to 2.00) and females (median RR across SLAs ranged from 0.53 to 1.80), with high exposure observed in many remote areas. Strong temporal trends were also observed. Males showed a decrease in the underlying risk across time, while females showed an increase followed by a decrease in the final two years. These patterns were largely consistent across each SLA. The high underlying risk estimates observed among disadvantaged, remote and Indigenous areas decreased after adjustment, particularly among females. Conclusion: The modelled underlying exposure appeared to reflect previous smoking prevalence, with a lag period of around 30 years, consistent with the time taken to develop lung cancer. The consistent temporal trends in lung cancer risk factors across small areas support the hypothesis that past interventions have been equally effective across the state. However, this also means that spatial inequalities have remained unaddressed, highlighting the potential for future interventions, particularly among remote areas.
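The shared component structure can be sketched generatively: male and female log relative risks in each area-year are driven by a common latent surface (scaled by a gender loading) plus outcome-specific residual terms. The sketch below simply simulates that decomposition with placeholder dimensions and parameters; it is not the fitted Queensland model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_areas, n_years = 50, 15

# Latent shared component: a spatial effect plus a smooth time trend (placeholders).
spatial = rng.normal(0.0, 0.4, n_areas)[:, None]
temporal = np.linspace(0.3, -0.3, n_years)[None, :]
shared = spatial + temporal                       # exposure to common risk factors (e.g. smoking)

delta = 1.2                                       # gender loading on the shared component
eps_m = rng.normal(0.0, 0.1, (n_areas, n_years))  # male-specific residual
eps_f = rng.normal(0.0, 0.1, (n_areas, n_years))  # female-specific residual

log_rr_male = delta * shared + eps_m
log_rr_female = (1.0 / delta) * shared + eps_f

# Expected counts -> Poisson observations for each sex.
expected = rng.uniform(5, 50, (n_areas, n_years))
y_male = rng.poisson(expected * np.exp(log_rr_male))
y_female = rng.poisson(expected * np.exp(log_rr_female))
print(y_male.shape, y_female.shape)
```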

Relevance:

80.00%

Publisher:

Abstract:

The aim of this series of independent studies was to complement the picture of the information-processing capacities of children and adolescents with weak mathematical skills by examining whether visuo-spatial working memory capacities are related to mathematical performance. The theoretical framework was built around Baddeley's (1986, 1997) three-component model, although the conception of working memory adopted here was broader than its source, including in visuo-spatial working memory, in the terms of Cornoldi and Vecchi (2003), both passive storage functions and active processing functions. The relationships between working memory and mathematical skills were examined in five separate studies. The first two focused on preschool children's mastery of the number concept and their visuo-spatial working memory capacities, and the latter three on the relationships between the mathematical skills and visuo-spatial working memory skills of ninth-graders in comprehensive school. The series of studies sought to determine whether visuo-spatial working memory capacities are related to mathematical performance both at preschool age and in lower secondary school (Studies I, II, III, IV, V); whether the relationship is specific, confined to particular visuo-spatial capacities and mathematical performance, or general, covering mathematical skills and the whole of visuo-spatial working memory (Studies I, II, III, IV, V) or working memory more broadly (Studies II, III); and whether the relationship is specific to working memory or can be explained by a general reasoning capacity such as intelligence (Studies I, II, IV). The results show that the ability to temporarily store and process visuo-spatial information is related to mathematical performance, and that this relationship cannot be explained solely by fluid intelligence. Performance on tasks measuring visuo-spatial working memory is related both to preschool children's mastery of early mathematical skills and to ninth-graders' mathematical skills. However, the visuo-spatial working memory weaknesses of children and adolescents with poor mathematical skills appear to be rather specific, confined to the capacities required by particular types of memory tasks; not all tasks measuring visuo-spatial working memory capacities show a relationship with mathematical skills. The differences in working memory capacities between mathematically weak and normally achieving children, both at preschool and at school age, nevertheless appear to be related to some extent to verbal skills, suggesting a certain accumulation of difficulties: those weak in mathematics who also have verbal difficulties show, on average, broader working memory weaknesses. Some low achievers in mathematics thus have clearly below-average visuo-spatial working memory capacities, and this weakness may be one possible cause of, or an aggravating factor behind, poor mathematical performance. In concrete terms, a weakness in visuo-spatial working memory means less mental processing space, which constrains learning and performance situations. Weak information-processing capacities relate specifically to the speed of learning, not to whether things can be learned at all. If the learning environment takes the limited capacities into account, working memory weaknesses are unlikely to prevent learning as such.
Keywords: working memory, visuo-spatial working memory, mathematical skills, number concept, mathematical learning difficulties

Relevance:

80.00%

Publisher:

Abstract:

The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality is influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigations in four urban residential catchments on the Gold Coast, Australia, and computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling results confirmed that high intensity-short duration events produce 58.0% of the TS load while generating only 29.1% of the total runoff volume. Additionally, rainfall events smaller than the 6-month average recurrence interval (ARI) generate a greater cumulative runoff volume (68.4% of the total annual runoff volume) and TS load (68.6% of the TS load exported) than rainfall events larger than the 6-month ARI. The results suggest that for the study catchments, stormwater treatment design could be based on rainfall with a mean average intensity of 31 mm/h and a mean duration of 0.4 h. These outcomes also confirmed that selecting smaller ARI rainfall events with high intensity-short duration as the threshold for treatment system design is the most feasible approach, since these events cumulatively generate a major portion of the annual pollutant load compared to the other types of events, despite producing a relatively smaller runoff volume. This implies that designs based on small and more frequent rainfall events rather than larger rainfall events would be appropriate in the context of efficiency in treatment performance, cost-effectiveness and possible savings in the land area needed.
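A minimal sketch of the event-selection logic described in this and the earlier stormwater abstract: rainfall events are grouped (here simply by an ARI threshold and an intensity/duration rule), and the cumulative share of runoff volume and TS load per group is computed to see which class dominates the annual pollutant load. The event table and thresholds are invented placeholders, not the Gold Coast data.

```python
import pandas as pd

# Hypothetical event records: average intensity (mm/h), duration (h), ARI (years),
# runoff volume (m^3) and TS load (kg). Placeholder numbers only.
events = pd.DataFrame({
    "intensity": [35, 28, 12, 8, 60, 5, 40, 15],
    "duration":  [0.4, 0.5, 2.0, 4.0, 0.3, 6.0, 0.5, 1.5],
    "ari":       [0.2, 0.3, 0.5, 1.0, 0.4, 2.0, 0.25, 0.8],
    "runoff":    [120, 100, 300, 500, 90, 800, 150, 250],
    "ts_load":   [40, 35, 30, 25, 38, 20, 45, 28],
})

def classify(row):
    if row["ari"] <= 0.5 and row["intensity"] >= 25 and row["duration"] <= 1.0:
        return "high intensity-short duration (small ARI)"
    return "other events"

events["class"] = events.apply(classify, axis=1)
shares = events.groupby("class")[["runoff", "ts_load"]].sum()
print(shares / shares.sum())   # fraction of annual runoff volume and TS load per class
```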

Relevance:

80.00%

Publisher:

Abstract:

Purpose: Composition of the coronary artery plaque is known to have a critical role in heart attack. While calcified plaque can easily be diagnosed by conventional CT, it fails to distinguish between fibrous and lipid-rich plaques. In the present paper, the authors discuss the experimental techniques and obtain a numerical algorithm by which the electron density (ρ_e) and the effective atomic number (Z_eff) can be obtained from dual energy computed tomography (DECT) data. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. Methods: For the purpose of calibration of the CT machine, the authors prepare aqueous samples whose calculated values of (ρ_e, Z_eff) lie in the range 2.65 x 10^23 ≤ ρ_e ≤ 3.64 x 10^23 cm^-3 and 6.80 ≤ Z_eff ≤ 8.90. The authors fill the phantom with these known samples and experimentally determine HU(V1) and HU(V2), with V1, V2 = 100 and 140 kVp, for the same pixels, and thus determine the coefficients of inversion that allow (ρ_e, Z_eff) to be determined from the DECT data. The HU(100) and HU(140) for the coronary artery plaque are obtained by filling the channel of the coronary artery with a viscous solution of methyl cellulose in water, containing 2% contrast. These (ρ_e, Z_eff) values of the coronary artery plaque are used for their characterization on the basis of theoretical models of atomic compositions of the plaque materials. These results are compared with the histopathological report. Results: The authors find that the calibration gives ρ_e with an accuracy of 3.5% while Z_eff is found within 1% of the actual value, the confidence being 95%. The HU(100) and HU(140) are found to be considerably different for the same plaque at the same position, and there is a linear trend between these two HU values. It is noted that pure lipid type plaques are practically nonexistent, and microcalcification, as observed in histopathology, has to be taken into account to explain the nature of the observed (ρ_e, Z_eff) data. This also enables the authors to judge the composition of the plaque in terms of a basic model which considers the plaque to be composed of fibres, lipids, and microcalcification. Conclusions: This simple and reliable method has the potential as an effective modality to investigate the composition of noncalcified coronary artery plaques and thus help in their characterization. In this inversion method, (ρ_e, Z_eff) of the scanned sample can be found by eliminating the effects of the CT machine and also by ensuring that the determination of the two unknowns (ρ_e, Z_eff) does not interfere with each other, and the nature of the plaque can be identified in terms of a three-component model. (C) 2015 American Association of Physicists in Medicine.
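One common way to set up such a dual-energy inversion (not necessarily the exact parameterization used by the authors) is to model HU at each tube voltage as linear in ρ_e and in ρ_e·Z_eff^n (a photoelectric-like term), fit the coefficients from the calibration samples, and then solve the two linear equations per pixel. The exponent, coefficients and sample values below are illustrative placeholders.

```python
import numpy as np

N = 3.3  # assumed exponent for the photoelectric-like term (illustrative choice)

def fit_calibration(rho_e, z_eff, hu):
    """Least-squares fit of HU ~ a*rho_e + b*rho_e*Z_eff**N + c for one tube voltage."""
    A = np.column_stack([rho_e, rho_e * z_eff**N, np.ones_like(rho_e)])
    coef, *_ = np.linalg.lstsq(A, hu, rcond=None)
    return coef

def invert(hu100, hu140, c100, c140):
    """Recover (rho_e, Z_eff) for one pixel from HU at 100 and 140 kVp."""
    A = np.array([[c100[0], c100[1]],
                  [c140[0], c140[1]]])
    b = np.array([hu100 - c100[2], hu140 - c140[2]])
    x1, x2 = np.linalg.solve(A, b)      # x1 = rho_e, x2 = rho_e * Z_eff**N
    return x1, (x2 / x1) ** (1.0 / N)

# Invented "machine" coefficients and calibration samples (rho_e in units of 10^23 cm^-3).
TRUE_C100 = np.array([380.0, 0.045, -1400.0])
TRUE_C140 = np.array([420.0, 0.020, -1380.0])
rho_e = np.array([2.65, 3.00, 3.34, 3.64])
z_eff = np.array([6.80, 7.40, 8.10, 8.90])
hu100 = TRUE_C100[0] * rho_e + TRUE_C100[1] * rho_e * z_eff**N + TRUE_C100[2]
hu140 = TRUE_C140[0] * rho_e + TRUE_C140[1] * rho_e * z_eff**N + TRUE_C140[2]

c100 = fit_calibration(rho_e, z_eff, hu100)
c140 = fit_calibration(rho_e, z_eff, hu140)
print(invert(hu100[2], hu140[2], c100, c140))   # recovers approximately (3.34, 8.10)
```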

Relevance:

80.00%

Publisher:

Abstract:

This study examined the implications of competency-based management for organizational citizenship behaviors (OCB) and organizational commitment. We sought to determine to what extent competency-based people management models can influence OCB and commitment positively or negatively, that is, to what extent these practices act as an element that stimulates or contributes to disintegration, reducing the adoption of organizational citizenship behaviors as well as commitment to the organization. The research was carried out in a multinational manufacturer of optical products that has used competency-based management models for the selection, appraisal and variable compensation of its employees for three years. To investigate organizational commitment we adopted Meyer and Allen's Three-Component Model as the frame of reference. To investigate organizational citizenship behaviors we used as a reference the four dimensions identified in the studies of Armênio Rego and his collaborators. The methodology combined quantitative and qualitative methods. In the quantitative surveys we used the commitment scale developed by Meyer, Allen and Smith, translated and validated for the Brazilian context by Medeiros, and the organizational citizenship behavior scale developed by Armênio Rego and collaborators. The results indicate that the population studied shows a satisfactory degree of overall commitment, with the affective dimension most strongly present. OCB are also present to an adequate degree in this sample, with the interpersonal harmony and initiative dimensions standing out. Among the conclusions drawn from the study, we highlight that the results indicate the existence of organizational variables that appear to be helping to neutralize the competition and individualism that competency-based management practices can at times intensify.

Relevance:

80.00%

Publisher:

Abstract:

Although increasing the turbine inlet temperature has traditionally proved the surest way to increase cycle efficiency, recent work suggests that the performance of future gas turbines may be limited by increased cooling flows and losses. Another limiting scenario concerns the effect on cycle performance of real gas properties at high temperatures. Cycle calculations of uncooled gas turbines show that when gas properties are modelled accurately, the variation of cycle efficiency with turbine inlet temperature at constant pressure ratio exhibits a maximum at temperatures well below the stoichiometric limit. Furthermore, the temperature at the maximum decreases with increasing compressor and turbine polytropic efficiency. This behaviour is examined in the context of a two-component model of the working fluid. The dominant influences come from the change of composition of the combustion products with varying air/fuel ratio (particularly the contribution from the water vapour) together with the temperature variation of the specific heat capacity of air. There are implications for future industrial development programmes, particularly in the context of advanced mixed gas-steam cycles.
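The two-component working-fluid model referred to here can be sketched as follows: the combustion products at a given fuel-air ratio are treated as a mixture of excess air and stoichiometric products, each with its own temperature-dependent specific heat. The cp polynomials and the stoichiometric fuel-air ratio below are rough illustrative values, not the data used in the study.

```python
import numpy as np

F_STOICH = 0.068   # illustrative stoichiometric fuel-air ratio for a kerosene-like fuel

def cp_air(T):
    """Rough placeholder fit for cp of air, J/(kg K), T in kelvin."""
    return 1018.0 + 0.131 * (T - 300.0) - 2.4e-5 * (T - 300.0) ** 2

def cp_stoich_products(T):
    """Rough placeholder fit for cp of stoichiometric combustion products, J/(kg K)."""
    return 1100.0 + 0.210 * (T - 300.0) - 3.0e-5 * (T - 300.0) ** 2

def cp_mixture(T, f):
    """Two-component model: mixture of excess air and stoichiometric products.

    f is the actual fuel-air ratio (mass of fuel per unit mass of air), f <= F_STOICH.
    Fuel f burns with air f / F_STOICH, giving stoichiometric products of mass
    f * (1 + 1/F_STOICH) in a mixture of total mass (1 + f).
    """
    x = f * (1.0 + 1.0 / F_STOICH) / (1.0 + f)
    return (1.0 - x) * cp_air(T) + x * cp_stoich_products(T)

for T in (900.0, 1400.0, 1900.0):
    print(T, round(cp_mixture(T, f=0.02), 1), round(cp_mixture(T, f=0.05), 1))
```

A cycle calculation would evaluate these mixture properties along the compression, combustion and expansion paths; the property model above only illustrates how the composition dependence enters through the fuel-air ratio.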

Relevance:

80.00%

Publisher:

Abstract:

Ballistic spin polarized transport through diluted magnetic semiconductor single and double barrier structures is investigated theoretically using a two-component model. The tunneling magnetoresistance (TMR) of the system exhibits oscillating behavior when the magnetic field is varied. An interesting beat pattern in the TMR and spin polarization is found for different nonmagnetic semiconductor/diluted magnetic semiconductor double barrier structures which arises from an interplay between the spin-up and spin-down electron channels which are split by the s-d exchange interaction.
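A two-component (spin-resolved) ballistic calculation of this kind can be sketched with a standard transfer-matrix method: each spin channel sees the barrier potential shifted by the s-d exchange splitting (taken here simply proportional to the applied field), and the spin-resolved transmission follows from the product of interface and propagation matrices. The effective-mass units, barrier heights, widths and exchange coefficient below are placeholder values, not those of the paper.

```python
import numpy as np

HBAR2_OVER_2M = 1.0   # work in units where hbar^2 / (2 m*) = 1 (illustrative)

def transmission(E, potentials, widths):
    """Transfer-matrix transmission through piecewise-constant potential layers."""
    def k(V):
        return np.sqrt(complex(E - V) / HBAR2_OVER_2M)

    def D(kj):  # interface matrix between plane-wave amplitudes and (psi, psi')
        return np.array([[1.0, 1.0], [1j * kj, -1j * kj]], dtype=complex)

    k_lead = k(0.0)
    M = np.linalg.inv(D(k_lead))
    for V, d in zip(potentials, widths):
        kj = k(V)
        P = np.diag([np.exp(-1j * kj * d), np.exp(1j * kj * d)])
        M = M @ D(kj) @ P @ np.linalg.inv(D(kj))
    M = M @ D(k_lead)
    return abs(1.0 / M[0, 0]) ** 2

def spin_resolved(E, B, exchange=0.1, V_barrier=1.0, V_well=0.0, d_barrier=1.0, d_well=2.0):
    """Double-barrier DMS structure: barriers split by +/- exchange*B for the two spins."""
    results = {}
    for spin, sign in (("up", +1), ("down", -1)):
        V_b = V_barrier + sign * exchange * B      # s-d exchange shifts the barrier height
        results[spin] = transmission(E, [V_b, V_well, V_b], [d_barrier, d_well, d_barrier])
    return results

for B in (0.0, 1.0, 2.0):
    t = spin_resolved(E=0.5, B=B)
    pol = (t["up"] - t["down"]) / (t["up"] + t["down"])
    print(f"B={B}: T_up={t['up']:.3f}  T_down={t['down']:.3f}  polarization={pol:+.3f}")
```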