926 results for Bayesian Mixture Model, Cavalieri Method, Trapezoidal Rule


Abstract:

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that extract richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods specifically for radiotherapy assessment. The study is therefore divided into two parts. The first part focuses on DCE-MRI temporal resolution, one of the key DCE-MRI technical factors, and proposes improvements to it; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel iterative MR image reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In an IRB-approved retrospective study of DCE-MRI scans of brain radiosurgery patients, the clinically obtained image data were selected as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical tests were performed to evaluate the accuracy of the PK maps generated from the undersampled data against those generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
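As a hedged illustration of the first undersampling strategy, the sketch below builds a radial multi-ray k-space sampling mask and applies it retrospectively to a stand-in image, reporting the resulting acceleration factor. The ray count, matrix size and golden-angle spacing are hypothetical; the thesis's specific angular distribution and the TGV-regularized iterative reconstruction are not reproduced here.

```python
import numpy as np

def radial_mask(n=256, n_rays=64):
    """Binary k-space sampling mask made of n_rays radial lines
    through the center of an n x n grid (golden-angle spacing)."""
    mask = np.zeros((n, n), dtype=bool)
    center = (n - 1) / 2.0
    golden = np.pi * (3.0 - np.sqrt(5.0))          # golden-angle increment
    radii = np.linspace(-center, center, 2 * n)    # samples along each ray
    for k in range(n_rays):
        theta = k * golden
        x = np.clip(np.round(center + radii * np.cos(theta)).astype(int), 0, n - 1)
        y = np.clip(np.round(center + radii * np.sin(theta)).astype(int), 0, n - 1)
        mask[y, x] = True
    return mask

mask = radial_mask()
print("acceleration factor ~", mask.size / mask.sum())

# Retrospective undersampling of a reference image's full k-space:
image = np.random.rand(256, 256)                  # stand-in for one DCE frame
kspace = np.fft.fftshift(np.fft.fft2(image))
undersampled = np.where(mask, kspace, 0)          # input to the iterative recon
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
```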

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also applies an advanced Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by a factor on the order of 10^2. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
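A minimal sketch of the idea of fitting the Tofts model as a linear problem, assuming its derivative form dCt/dt = Ktrans*Cp(t) - kep*Ct(t). The smoothing step uses a plain moving average as a stand-in for the KZ filter, and all curves and parameter values are synthetic, not from the thesis.

```python
import numpy as np

def fit_tofts_linear(t, Cp, Ct, smooth=5):
    """Solve dCt/dt = Ktrans*Cp - kep*Ct for (Ktrans, kep) by least squares."""
    kernel = np.ones(smooth) / smooth            # crude stand-in for a KZ filter
    Ct_s = np.convolve(Ct, kernel, mode="same")
    dCt = np.gradient(Ct_s, t)
    A = np.column_stack([Cp, -Ct_s])             # linear system A @ [Ktrans, kep] = dCt
    (ktrans, kep), *_ = np.linalg.lstsq(A, dCt, rcond=None)
    return ktrans, kep

# Synthetic example: simulate Ct from known parameters, then recover them.
t = np.arange(0, 300, 0.5)                       # 0.5 s temporal resolution
Cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)     # toy arterial input function
ktrans_true, kep_true = 0.25 / 60, 0.60 / 60     # per-second values
dt = t[1] - t[0]
Ct = ktrans_true * dt * np.convolve(Cp, np.exp(-kep_true * t))[: t.size]
print(fit_tofts_linear(t, Cp, Ct))               # ~ (ktrans_true, kep_true)
```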

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall in treatment/control separation and classification than the corresponding curves derived from current GLCOM techniques. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned above, selected parameters from dynamic FSD analysis showed significant treatment/control differences as early as after one treatment fraction, whereas metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, treatment/control classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
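For context, the sketch below estimates Rényi (generalized) dimensions of a 2-D parameter map by box counting: the slope of the Rényi entropy of box measures against log box size. The map, box sizes and q values are illustrative, and this is the generic formulation rather than the thesis's exact procedure.

```python
import numpy as np

def renyi_dimension(m, q, sizes=(2, 4, 8, 16, 32)):
    """Estimate the Renyi dimension D_q of a nonnegative 2-D map by box counting."""
    m = m / m.sum()                                  # normalize to a measure
    n = m.shape[0]
    logs, logeps = [], []
    for s in sizes:
        # Sum the measure inside each s x s box.
        p = m[: n - n % s, : n - n % s].reshape(n // s, s, n // s, s).sum(axis=(1, 3))
        p = p[p > 0]
        if q == 1:                                   # information-dimension limit
            logs.append(np.sum(p * np.log(p)))
        else:
            logs.append(np.log(np.sum(p ** q)) / (q - 1))
        logeps.append(np.log(s / n))
    slope, _ = np.polyfit(logeps, logs, 1)           # D_q = slope of entropy vs log(eps)
    return slope

rate_map = np.random.rand(256, 256) ** 2             # stand-in for a PK rate-constant map
for q in (0, 1, 2):
    print(q, renyi_dimension(rate_map, q))
```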

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its variants are widely adopted for DCE-MRI analysis as a gold-standard approach to therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit the future routine clinical application of DCE-MRI in radiotherapy assessment.

Abstract:

Brain-computer interfaces (BCIs) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals, and to translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electrophysiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERPs) elicited as a function of a user's uncertainty regarding stimulus events in either an acoustic or a visual oddball recognition task. A P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, and thereby to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared with other commercial assistive communication devices, and this limits their adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events, and the parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed to maximise the information content presented to the user by tuning stimulus paradigm parameters that positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed; this approach is independent of the stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and that the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
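As a generic illustration of dynamic data collection in a P300 speller (not the thesis's specific algorithm), the sketch below keeps a posterior over candidate characters, updates it with per-flash classifier scores modeled as Gaussian evidence, and stops once the most likely character exceeds a confidence threshold. The 36-character grid, score model and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
chars = 36                     # e.g., a 6x6 speller grid
target = 7                     # hypothetical attended character
log_post = np.full(chars, -np.log(chars))           # uniform prior

# Classifier-score model: flashes containing the target score higher on average.
mu_t, mu_n, sigma = 1.0, 0.0, 1.0
def loglik(score, mu):
    return -0.5 * ((score - mu) / sigma) ** 2

for flash in range(500):
    flashed = rng.choice(chars, size=6, replace=False)   # one row/column flash
    score = rng.normal(mu_t if target in flashed else mu_n, sigma)
    for c in range(chars):
        log_post[c] += loglik(score, mu_t if c in flashed else mu_n)
    log_post -= np.logaddexp.reduce(log_post)            # renormalize
    if np.exp(log_post.max()) > 0.95:                    # dynamic stopping rule
        break

print("decided:", log_post.argmax(), "after", flash + 1, "flashes")
```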

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit for signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated but whose signals are determined by a small number of parameters; examples are images of human faces under different poses or expressions, and handwritten digits in varying styles. However, there will always be some degree of mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces, and this simple model can effectively capture the signal distribution when each class lies near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When the model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces; when the mismatch is more significant, it is determined by the sum of the squares of those sines. The reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes the principal angles. This linear transform, termed TRAIT, also preserves specific features of each class, making it complementary to the recently developed Low Rank Transform (LRT); moreover, when the model mismatch is more significant, TRAIT shows superior performance compared with LRT.
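A brief sketch of computing principal angles between two subspaces from the singular values of the product of their orthonormal bases, a standard construction; the dimensions and random bases below are arbitrary.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)   # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))          # basis for a 3-D subspace in R^50
B = rng.normal(size=(50, 3))
theta = principal_angles(A, B)
# Larger angles -> better separability; e.g., the product of sines
# appears in the low-mismatch misclassification analysis above.
print(theta, np.prod(np.sin(theta)))
```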

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which makes it prone to overfitting. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experimental results on benchmark datasets show a clear advantage of the proposed approaches when the training set is small.
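To make the first approach concrete, here is a minimal sketch of a manifold-smoothness penalty built from a k-nearest-neighbor graph: the regularizer sums squared differences of the model's outputs across graph edges, equivalently a graph-Laplacian quadratic form. The graph construction and weighting are generic assumptions, not the dissertation's exact recipe.

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric 0/1 adjacency of the k-nearest-neighbor graph on rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, nn.ravel()] = 1.0
    return np.maximum(W, W.T)

def smoothness_penalty(W, f):
    """f^T L f with L = D - W, i.e. 0.5 * sum_ij W_ij (f_i - f_j)^2."""
    L = np.diag(W.sum(1)) - W
    return f @ L @ f

X = np.random.default_rng(2).normal(size=(100, 10))   # data near a manifold
f = X[:, 0] ** 2                                      # stand-in for model outputs
W = knn_adjacency(X)
# Added to the training loss as: loss += lam * smoothness_penalty(W, f)
print(smoothness_penalty(W, f))
```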

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure; splitting and merging of tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
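As a hedged sketch of the underlying idea, the code below tracks a slowly rotating subspace with a stochastic gradient step followed by re-orthonormalization, and uses the residual norm of each datum as an anomaly statistic; the step size, dimensions and drift model are all illustrative, and the multiscale tree is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, eta = 20, 3, 0.1
U = np.linalg.qr(rng.normal(size=(n, d)))[0]      # tracked orthonormal basis
V = np.linalg.qr(rng.normal(size=(n, d)))[0]      # true (slowly drifting) basis

residuals = []
for t in range(2000):
    # Slowly rotate the true subspace, with an abrupt change at t = 1500.
    drift = 0.001 if t != 1500 else 1.0
    V = np.linalg.qr(V + drift * rng.normal(size=(n, d)))[0]
    x = V @ rng.normal(size=d) + 0.01 * rng.normal(size=n)   # streaming datum

    w = U.T @ x                                   # coefficients in tracked basis
    r = x - U @ w                                 # residual: deviation statistic
    residuals.append(np.linalg.norm(r))
    U = np.linalg.qr(U + eta * np.outer(r, w))[0] # gradient step + re-orthonormalize

print("mean residual before/after change:",
      np.mean(residuals[1000:1500]), np.mean(residuals[1500:1520]))
```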

Abstract:

'Theory', 'hypothesis', 'model' and 'method' in linguistics: semasiological and onomasiological perspectives.

The subject of this thesis is the use of generic scientific terms, in particular the four terms 'theory', 'hypothesis', 'model' and 'method', in linguistic research articles written in French and in Finnish. The thesis examines the types of scientific constructs to which these terms are applied, and seeks to explain the variation in the use of each term. A second objective is to analyze the relationships among these terms and the factors determining the choices made by writers. With its focus on the authentic use of generic scientific terms, the thesis complements the normative and theoretical descriptions of these terms in Science Studies and offers new information on actual writing practices. The thesis adheres to functional and usage-based linguistics, drawing its theoretical background from cognitive linguistics and from functional approaches to terminology. The research material consisted of 120 research articles (856,569 words) representing different domains of linguistics, written in French or Finnish (60 articles in each language). The articles were extracted from peer-reviewed scientific journals and were published between 2000 and 2010. The use of generic scientific terms in the material was examined from semasiological and onomasiological perspectives. In the first stage, the different usages of each of the four central terms were analyzed. In the second stage, the analysis was extended to other terms and expressions, such as 'theoretical framework', 'approach' and 'claim', which were used to name scientific constructs similar to the four terms analyzed in the first stage. Finally, in order to account for the writer's choice among the terms, a mixed-methods approach was adopted, based on the results of a previously conducted questionnaire concerning the differences between these terms as experienced by linguists themselves. Despite the general ideal that scientific terms should be carefully defined, the study shows that the use of these central terms is not without ambiguity. What is understood by these terms may vary according to conceptual and stylistic factors as well as epistemic and disciplinary traditions. In addition to their polysemy, the semantic potentials of these terms partly overlap. In most cases, the variation in the use of these terms is not likely to cause serious misunderstanding; rather, it allows the researcher to express a specific conceptualization of the scientific constructs mentioned in the article. The discipline of linguistics, however, would benefit from a more elaborate metatheoretical discussion.

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Abstract:

This work presents a fast, high-order model capable of representing a rotor configuration with a complete cage or a grid winding, reproducing the bar currents and accounting for space harmonics. The model uses a combined finite-element and coupled-circuit approach: the inductances are computed with finite elements, which gives the model high accuracy. For transient simulations, the method offers a substantial gain in computation time over finite elements. Two simulation tools are developed, one in the time domain for dynamic solutions and another in the phasor domain, for which an application to standstill frequency response (SSFR) tests is also presented. The model construction method is described in detail, as is the procedure for modeling the rotor cage. The model is validated through the study of synchronous machines: a 5.4 kVA laboratory machine and a large 109 MVA alternator, whose experimental measurements are compared with the model's simulation results for tests such as no-load tests, three-phase and two-phase short circuits, and a load test.

Abstract:

Markov chain analysis was recently proposed to assess time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes and the number of passages through a node. We propose to adapt an algorithm, already published for simple systems, to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. The flows and volumes allow the computation of transition probabilities between elements, the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that Markov chains are complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g. the depletion index and carrying capacity assessment. The Markov chain approach has several advantages with respect to estimating connectivity between pairs of sites: it makes it possible to estimate transfer rates and times at once, in a very quick and efficient way, without the need to perform long-term simulations of particle tracking or tracer concentration.
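A minimal sketch, under assumed flows and volumes, of how such a transition matrix and mean first passage times might be computed: P_ij ≈ F_ij Δt / V_i for a time step Δt, and the mean first passage time into a target element solves a small linear system. The three-element system below is entirely hypothetical.

```python
import numpy as np

# Hypothetical averaged Eulerian flows F[i, j] (m^3/s) and element volumes V (m^3).
F = np.array([[0.0, 30.0, 0.0],
              [10.0, 0.0, 40.0],
              [0.0, 20.0, 0.0]])
V = np.array([1e6, 2e6, 1.5e6])
dt = 3600.0                                   # one-hour time step

P = F * dt / V[:, None]                       # probability of moving i -> j in dt
P[np.diag_indices(3)] = 1.0 - P.sum(axis=1)   # probability of staying put

# Mean first passage time (in steps) into element 2 from each other element:
# tau_i = 1 + sum_{j != 2} P_ij tau_j  =>  (I - Q) tau = 1, Q = P without row/col 2.
Q = P[:2, :2]
tau = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("transition matrix:\n", P)
print("mean first passage times to element 2 (hours):", tau)
```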

Abstract:

The present work aims to develop a process for biodiesel production from high-acidity oils, applying a two-step homogeneous catalysis process. The first step is the ethyl esterification of the free fatty acids, catalyzed by H2SO4 and taking place in the triglyceride medium; the second is the transesterification of the remaining triglycerides, taking place in the medium of the alkyl esters produced in the first step and catalyzed by alkali (NaOH) with ethyl or methyl alcohol. The esterification reaction was studied with a model mixture consisting of neutral soybean oil artificially acidified with 15 wt% analytical-grade oleic acid. This value was adopted as a reference because certain regional fats (castor oil from family farming, slaughterhouse tallow, rice bran oil, etc.) contain 10-20 wt% free fatty acids. In both steps, ethanol is both a reagent and a solvent, and the mixture-to-alcohol molar ratio was one of the parameters investigated, at 1:3, 1:6 and 1:9. The others were the temperature (60 and 80°C) and the catalyst concentration (0.5, 1.0 and 1.5 wt% relative to the oil mass). The combination of these parameters resulted in 18 reactions. Among the reaction conditions studied, eight reached an acceptable acidity below 1.5 wt%, making it possible to define the conditions for the optimal application of the second step. The best condition in this step was obtained when the reaction was conducted at 60°C with 1 wt% H2SO4 and a 1:6 molar ratio. At the end of the first step, relevant treatments such as catalyst removal were carried out and their influence on the final acidity was studied, using washes with and without the addition of hexane, followed by evaporation or the addition of a drying agent. In the second step, oil-to-alcohol molar ratios of 1:6 and 1:9 with methyl and ethyl alcohol and 0.5 and 1 wt% NaOH were studied, as well as the treatment of the reaction product (washing or catalyst neutralization), all at 60°C, resulting in 16 experiments. The best condition in this second step was obtained with 0.5 wt% NaOH and an oil-to-ethanol molar ratio of 1:6, and only the reactions in which washes were applied reached acidity indices (<1.0 wt%) consistent with ANP specifications.

Abstract:

Different types of base fluids, such as water, engine oil, kerosene, ethanol, methanol and ethylene glycol, are commonly used to increase heat transfer performance in many engineering applications. However, these conventional heat transfer fluids have several limitations, a major one being their very low thermal conductivity, which results in a lower heat transfer rate in thermal engineering systems. This limitation also affects the performance of equipment used in heat transfer process industries. To overcome this drawback, researchers have over the years considered a new generation of heat transfer fluid, known as nanofluid, with higher thermal conductivity. This new generation of heat transfer fluid is a mixture of nanometre-size particles and a base fluid. Different researchers suggest that adding spherical or cylindrical uniform/non-uniform nanoparticles to a base fluid can remarkably increase its thermal conductivity, and such augmentation of thermal conductivity could play a more significant role in enhancing the heat transfer rate than the base fluid alone. Nanoparticle diameters used in nanofluids are usually less than or equal to 100 nm, and nanoparticle concentrations usually vary from 5% to 10%. Several researchers have reported that smaller nanoparticle concentrations, with particle diameters of 100 nm, can enhance the heat transfer rate more significantly than base fluids. But it is not obvious what effect nanofluids containing small nanoparticles of less than 100 nm at different concentrations will have on heat transfer performance, and the effect of static versus moving nanoparticles on nanofluid heat transfer is not known either. The idea of moving nanoparticles brings in the effect of the Brownian motion of the nanoparticles on heat transfer. The aim of this work is therefore to investigate the heat transfer performance of nanofluids using a combination of smaller nanoparticle sizes at different concentrations, taking the Brownian motion of the nanoparticles into account. A horizontal pipe is considered as the physical system within which these nanofluid performances are investigated under transition-to-turbulent flow conditions. Three different types of numerical models, namely a single-phase model, an Eulerian-Eulerian multi-phase mixture model and an Eulerian-Lagrangian discrete phase model, are used to investigate the performance of nanofluids. The most commonly used is the single-phase model, which is based on the assumption that nanofluids behave like a conventional fluid. The other two models are used when the interaction between solid and fluid particles is considered. The Eulerian-Eulerian multi-phase mixture model treats the fluid and solid phases as a fluid-solid mixture, whereas the two phases in the Eulerian-Lagrangian discrete phase model are independent: one is a solid phase and the other a fluid phase. In addition, RANS (Reynolds-Averaged Navier-Stokes) based Standard κ-ω and SST κ-ω transitional models are used for the simulation of transitional flow, while the RANS-based Standard κ-ϵ, Realizable κ-ϵ and RNG κ-ϵ turbulence models are used for the simulation of turbulent flow.
The hydrodynamic as well as the temperature behaviour of transition-to-turbulent nanofluid flows through the horizontal pipe is studied under a uniform wall heat flux boundary condition, with temperature-dependent thermo-physical properties for both water and nanofluids. Numerical results characterising the velocity and temperature fields are presented in terms of velocity and temperature contours, turbulent kinetic energy contours, surface temperature, local and average Nusselt numbers, Darcy friction factor, thermal performance factor and total entropy generation. New correlations are also proposed for the calculation of the average Nusselt number for both the single- and multi-phase models. The results reveal that the combination of small nanoparticle sizes and higher nanoparticle concentrations, together with the Brownian motion of the nanoparticles, yields higher heat transfer enhancement and a higher thermal performance factor than water. The literature suggests that nanofluid flow in an inclined pipe in the transition-to-turbulent regime has been ignored despite its significance in real-life applications. Therefore, a particular investigation is carried out in this thesis to understand the heat transfer behaviour and performance of an inclined pipe under transitional flow conditions. It is found that the heat transfer rate decreases as the pipe inclination angle increases, and that a higher heat transfer rate is obtained for a horizontal pipe under forced convection than for an inclined pipe under mixed convection.
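For orientation only, here is a sketch of one classical estimate of how nanoparticle loading raises thermal conductivity: the Maxwell effective-medium correlation. The thesis does not state that this particular correlation was used, and the property values below are illustrative.

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Maxwell effective thermal conductivity of a dilute suspension:
    k_eff = k_f * (k_p + 2 k_f + 2 phi (k_p - k_f)) / (k_p + 2 k_f - phi (k_p - k_f))."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water = 0.613          # W/(m K), near room temperature
k_al2o3 = 40.0           # W/(m K), a common nanoparticle material
for phi in (0.01, 0.05, 0.10):        # particle volume fractions
    print(phi, maxwell_k_eff(k_water, k_al2o3, phi))
```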

Abstract:

Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed map of the land use and land cover of the deforested areas of the Brazilian Legal Amazon up to 2008. Land uses in the deforested areas were mapped with Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing and visual interpretation, aided by temporal information extracted from NDVI MODIS time series. The result is a high-spatial-resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008, with the corresponding calculation of the area occupied by the different land use classes. The results showed that the four Pasture classes covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated by this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php).
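A minimal sketch of the linear spectral mixture model named above: each pixel spectrum is modeled as a nonnegative combination of endmember spectra (e.g., vegetation, soil, shade), solved here with nonnegative least squares and renormalized to sum to one. The endmember values and the synthetic pixel are made up for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows: 6 Landsat-5/TM bands; columns: endmembers).
E = np.array([[0.04, 0.12, 0.01],
              [0.06, 0.16, 0.01],
              [0.04, 0.22, 0.01],
              [0.45, 0.28, 0.02],
              [0.22, 0.38, 0.01],
              [0.10, 0.30, 0.01]])   # vegetation, soil, shade

pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel

fractions, residual = nnls(E, pixel)       # nonnegative abundance estimates
fractions /= fractions.sum()               # impose the sum-to-one constraint
print("vegetation/soil/shade fractions:", fractions, "residual:", residual)
```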

Abstract:

Background: Raised blood pressure is an important risk factor for cardiovascular diseases and chronic kidney disease. We estimated worldwide trends in mean systolic and mean diastolic blood pressure, and the prevalence of, and number of people with, raised blood pressure, defined as systolic blood pressure of 140 mm Hg or higher or diastolic blood pressure of 90 mm Hg or higher. Methods: For this analysis, we pooled national, subnational, or community population-based studies that had measured blood pressure in adults aged 18 years and older. We used a Bayesian hierarchical model to estimate trends from 1975 to 2015 in mean systolic and mean diastolic blood pressure, and the prevalence of raised blood pressure for 200 countries. We calculated the contributions of changes in prevalence versus population growth and ageing to the increase in the number of adults with raised blood pressure. Findings: We pooled 1479 studies that had measured the blood pressures of 19·1 million adults. Global age-standardised mean systolic blood pressure in 2015 was 127·0 mm Hg (95% credible interval 125·7–128·3) in men and 122·3 mm Hg (121·0–123·6) in women; age-standardised mean diastolic blood pressure was 78·7 mm Hg (77·9–79·5) for men and 76·7 mm Hg (75·9–77·6) for women. Global age-standardised prevalence of raised blood pressure was 24·1% (21·4–27·1) in men and 20·1% (17·8–22·5) in women in 2015. Mean systolic and mean diastolic blood pressure decreased substantially from 1975 to 2015 in high-income western and Asia Pacific countries, moving these countries from having some of the highest worldwide blood pressure in 1975 to the lowest in 2015. Mean blood pressure also decreased in women in central and eastern Europe, Latin America and the Caribbean, and, more recently, central Asia, Middle East, and north Africa, but the estimated trends in these super-regions had larger uncertainty than in high-income super-regions. By contrast, mean blood pressure might have increased in east and southeast Asia, south Asia, Oceania, and sub-Saharan Africa. In 2015, central and eastern Europe, sub-Saharan Africa, and south Asia had the highest blood pressure levels. Prevalence of raised blood pressure decreased in high-income and some middle-income countries; it remained unchanged elsewhere. The number of adults with raised blood pressure increased from 594 million in 1975 to 1·13 billion in 2015, with the increase largely in low-income and middle-income countries. The global increase in the number of adults with raised blood pressure is a net effect of increase due to population growth and ageing, and decrease due to declining age-specific prevalence. Interpretation: During the past four decades, the highest worldwide blood pressure levels have shifted from high-income countries to low-income countries in south Asia and sub-Saharan Africa due to opposite trends, while blood pressure has been persistently high in central and eastern Europe.

Abstract:

Background: One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. Methods: We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence - defined as fasting plasma glucose of 7·0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs - in 200 countries and territories in 21 regions, by sex and from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. Findings: We used data from 751 studies including 4 372 000 adults from 146 of the 200 countries we make estimates for. Global age-standardised diabetes prevalence increased from 4·3% (95% credible interval 2·4-7·0) in 1980 to 9·0% (7·2-11·1) in 2014 in men, and from 5·0% (2·9-7·9) to 7·9% (6·4-9·7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28·5% due to the rise in prevalence, 39·7% due to population growth and ageing, and 31·8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and is 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Interpretation: Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, in terms of both prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries.
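A minimal sketch of how a posterior probability of meeting such a target can be read off posterior draws: it is the share of joint draws in which the projected 2025 prevalence is at or below the 2010 level. The draws below are synthetic stand-ins, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for posterior draws of age-standardised prevalence (proportions)
# in 2010 and projected 2025 for one country-sex stratum.
prev_2010 = rng.normal(0.090, 0.008, size=5000)
prev_2025 = rng.normal(0.105, 0.012, size=5000)

# Posterior probability that the target (2025 prevalence <= 2010 level) is met.
p_target = np.mean(prev_2025 <= prev_2010)
print("posterior probability of meeting the target:", p_target)
```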

Abstract:

The paper presents an introduction to, a description of, and an implementation of a PV cell model under parameter variation. The analysis and observation of the variation of different PV cell parameters are discussed. The model is obtained by analyzing an equivalent circuit consisting of a photocurrent source, a series resistor, a shunt resistor and a diode. The fundamental equation of the PV cell is used to study the model, to analyze it, and to best fit the observation data. The model can be used to measure and understand the behaviour of photovoltaic cells under certain changes in PV cell parameters. A numerical method is used to analyze the parameter sensitivity of the model, to achieve the expected results, and to understand the deviations caused by changes in the different parameters under various conditions. Ideal parameter values are used to study the model's behaviour. The current-voltage and power-voltage behaviours are also compared against the produced maximum power point, although optimizing the output with real-time simulation remains a challenge. The whole working process is discussed, and experimental work is carried out to gain insight into the produced model and to decide upon its validity.
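As a hedged sketch of the equivalent circuit described above, the code below solves the implicit single-diode equation I = I_ph - I_0 (exp((V + I R_s)/(n V_T)) - 1) - (V + I R_s)/R_sh for the current at each voltage and locates the maximum power point numerically. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single-diode parameters (not from the paper).
I_ph, I_0 = 5.0, 1e-9        # photocurrent and diode saturation current (A)
R_s, R_sh, n = 0.02, 50.0, 1.3
V_T = 0.025852               # thermal voltage at ~300 K (V)

def current(V):
    """Solve the implicit I-V relation of the equivalent circuit for I."""
    f = lambda I: (I_ph - I_0 * (np.exp((V + I * R_s) / (n * V_T)) - 1)
                   - (V + I * R_s) / R_sh - I)
    return brentq(f, -1.0, I_ph + 1.0)   # f is monotone in I, so the root is unique

V = np.linspace(0.0, 0.72, 200)
I = np.array([current(v) for v in V])
P = V * I
print("max power point: V = %.3f V, P = %.3f W" % (V[P.argmax()], P.max()))
```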

Abstract:

The stars born in the satellite galaxies accreted by the MW during its evolution are today mixed into the Retrograde Halo of our Galaxy, but they retain a chemical and dynamical coherence that allows us to identify them and to reconstruct their origin. However, investigating chemistry or dynamics independently is not sufficient for this purpose. Associating stars with specific merging events based exclusively on their position in the space of the IoM may not be unique, and can therefore lead to wrong or ambiguous identifications. At the same time, the chemical composition of stars reflects the composition of the gas of the galaxy in which they formed, but galaxies that evolved in similar ways are hard to distinguish in the chemical planes alone. Combining chemical and dynamical information is therefore necessary to accurately reconstruct the formation and evolution history of the MW. In this thesis, a sample of 66 stars of the Retrograde Halo of the MW (located in the solar neighbourhood) was analysed by combining Gaia photometric data with spectroscopic data obtained with PEPSI@LBT. The main goal of this work is to uniquely associate the stars of this sample with their respective progenitor galaxies through the combined use of chemical and kinematic information. To do this, the orbit of each star was first reconstructed. The analysis of the target spectra then yielded the chemical abundances. The identification of substructures was carried out through a statistically robust chemo-dynamical analysis, obtained by applying a Gaussian Mixture Model method, and the final association with the respective progenitors, as well as their interpretation in terms of independent structures, was performed by coupling this information with the detailed chemical composition of each star.
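Since the abstract names a Gaussian Mixture Model as the clustering engine, here is a minimal sketch of GMM-based substructure identification in a chemo-dynamical feature space, using scikit-learn on synthetic data. The feature choice (e.g., orbital energy, angular momentum, [Fe/H]) and the component selection by BIC are assumptions for illustration, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic chemo-dynamical features for 66 stars: columns could stand for
# orbital energy, angular momentum L_z and [Fe/H] (all standardized).
X = np.vstack([
    rng.normal([-1.0, -0.5, -1.2], 0.3, size=(30, 3)),   # mock progenitor A
    rng.normal([0.8, -1.5, -0.4], 0.3, size=(36, 3)),    # mock progenitor B
])

# Choose the number of components by the Bayesian information criterion.
models = [GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))
labels = best.predict(X)
print("components:", best.n_components, "; stars per substructure:",
      np.bincount(labels))
```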