851 results for Correlation based analysis
Abstract:
Renal denervation can reduce blood pressure in patients with uncontrolled hypertension. The adherence to prescribed antihypertensive medication following renal denervation is unknown. This study investigated adherence to prescribed antihypertensive treatment by liquid chromatography-high resolution tandem mass spectrometry in plasma and urine at baseline and 6 months after renal denervation in 100 patients with resistant hypertension, defined as baseline office systolic blood pressure ≥140 mmHg despite treatment with ≥3 antihypertensive agents. At baseline, complete adherence to all prescribed antihypertensive agents was observed in 52 patients, 46 patients were partially adherent, and two patients were completely non-adherent. Baseline office blood pressure was 167/88 ± 19/16 mmHg with a corresponding 24-h blood pressure of 154/86 ± 15/13 mmHg. Renal denervation significantly reduced office and ambulatory blood pressure at 6-month follow-up by 15/5 mmHg (p < 0.001/p < 0.001) and 8/4 mmHg (p < 0.001/p = 0.001), respectively. Mean adherence to prescribed treatment was significantly reduced from 85.0 % at baseline to 80.7 %, 6 months after renal denervation (p = 0.005). The blood pressure decrease was not explained by improvements in adherence following the procedure. Patients not responding to treatment significantly reduced their drug intake following the procedure. Adherence was highest for angiotensin-converting enzyme inhibitors/angiotensin receptor blockers and beta blockers (>90 %) and lowest for vasodilators (21 %). In conclusion, renal denervation can reduce office and ambulatory blood pressure in patients with resistant hypertension despite a significant reduction in adherence to antihypertensive treatment after 6 months.
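Adherence here is the share of prescribed agents actually detected in the toxicological screen. A minimal sketch of that per-patient computation, with hypothetical drug names and detection results (the study's actual scoring rules are not given in the abstract):

```python
# Sketch: per-patient adherence as the fraction of prescribed
# antihypertensive agents detected by LC-HRMS in plasma/urine.
# Drug names and detection results below are invented illustrations.

def adherence(prescribed, detected):
    """Fraction of prescribed agents found in the toxicological screen."""
    if not prescribed:
        raise ValueError("patient must have at least one prescribed agent")
    return sum(1 for drug in prescribed if drug in detected) / len(prescribed)

prescribed = {"ramipril", "amlodipine", "hydrochlorothiazide", "bisoprolol"}
detected = {"ramipril", "bisoprolol", "amlodipine"}

score = adherence(prescribed, detected)
print(f"adherence: {score:.2f}")  # 3 of 4 agents detected -> 0.75
```

Averaging such per-patient fractions over the cohort would give a mean adherence figure comparable to the percentages reported above.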
Abstract:
Connectivity analysis on diffusion MRI data of the whole brain suffers from distortions caused by the standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal destruction that are an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a "theoretically correct" and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (namely fieldmap-based, reversed encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, the dropout compensation, and their impact on the resulting connectivity matrices.
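One simple way to score the geometrical accuracy of competing correction strategies is the RMSE between ground-truth phantom landmark positions and their positions after each correction. The sketch below uses the three method names from the abstract but invented coordinates, not the phantom's actual data:

```python
import math

# Sketch: rank distortion-correction methods by geometrical accuracy,
# here the RMSE between ground-truth phantom landmark positions and the
# positions recovered after each correction. Coordinates are invented.

def rmse(truth, recovered):
    sq = [sum((t - r) ** 2 for t, r in zip(p, q))
          for p, q in zip(truth, recovered)]
    return math.sqrt(sum(sq) / len(sq))

truth = [(10.0, 20.0), (35.0, 42.0), (60.0, 15.0)]
corrections = {
    "fieldmap": [(10.3, 20.1), (35.2, 41.8), (60.1, 15.2)],
    "reversed": [(10.1, 20.0), (35.1, 42.1), (59.9, 15.1)],
    "registration": [(10.6, 19.5), (35.5, 42.6), (60.4, 14.6)],
}
ranking = sorted(corrections, key=lambda m: rmse(truth, corrections[m]))
print(ranking)  # lowest RMSE (most accurate) first
```

Dropout compensation and connectivity-matrix impact would need separate metrics; this only illustrates the geometric part of the ranking.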
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, landuse, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. Thereby, the space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance, socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the Box-counting fractal method, the multifractal formalism and the Ripley's K-function) and local (e.g. Scan Statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider the complex spatial constraints, high variability and multivariate nature of the events.
Therefore, we proposed a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies for defining and mapping both the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and the prediction of fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong application part to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular, the forest fire occurrences. Thus, this Thesis provides a response to the increasing demand for both environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence-defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs-in 200 countries and territories in 21 regions, by sex and from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries we make estimates for. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. 
In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Funding: Wellcome Trust.
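The split of the growth in case numbers into prevalence, demography, and interaction components follows the standard decomposition dN = dP*D0 + P0*dD + dP*dD, where P is prevalence and D the adult population. A sketch with illustrative inputs (not the study's actual figures):

```python
# Standard decomposition of a change in case numbers into a prevalence
# component, a demographic (population growth/ageing) component, and
# their interaction: dN = dP*D0 + P0*dD + dP*dD.
# The inputs below are invented for illustration.

def decompose(p0, p1, d0, d1):
    dp, dd = p1 - p0, d1 - d0
    prevalence_part = dp * d0
    demography_part = p0 * dd
    interaction_part = dp * dd
    total = p1 * d1 - p0 * d0
    return prevalence_part, demography_part, interaction_part, total

prev, demo, inter, total = decompose(p0=0.047, p1=0.085, d0=2.3e9, d1=5.0e9)
assert abs(prev + demo + inter - total) < 1e-3  # parts sum to the change
shares = [round(100 * x / total, 1) for x in (prev, demo, inter)]
print(shares)  # approx [27.6, 40.0, 32.4] percent
```

The identity holds algebraically, so the three percentage shares always sum to 100% of the change, as in the figures quoted above.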
Abstract:
Abstract Objective: To analyze the prevalence of anatomical variations of celiac arterial trunk (CAT) branches and hepatic arterial system (HAS), as well as the CAT diameter, length and distance to the superior mesenteric artery. Materials and Methods: Retrospective, cross-sectional and predominantly descriptive study based on the analysis of multidetector computed tomography images of 60 patients. Results: The celiac trunk anatomy was normal in 90% of cases. Hepatosplenic trunk was found in 8.3% of patients, and hepatogastric trunk in 1.7%. Variation of the HAS was observed in 21.7% of cases, including anomalous location of the right hepatic artery in 8.3% of cases, and of the left hepatic artery in 5%. Also, cases of joint relocation of the right and left hepatic arteries, and trifurcation of the proper hepatic artery, were observed, respectively, in 3 (5%) and 2 (3.3%) patients. Mean length and caliber of the CAT were 2.3 cm and 0.8 cm, respectively. Mean distance between CAT and superior mesenteric artery was 1.2 cm (standard deviation = 4.08). A significant correlation was observed between CAT diameter and length, and between CAT diameter and distance to the superior mesenteric artery. Conclusion: The pattern of CAT variations and diameter corroborates the majority of the literature data; however, the same does not hold for the HAS.
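The reported diameter-length association is presumably a standard correlation coefficient; the abstract does not state which one, so here is a minimal Pearson correlation sketch on invented paired measurements:

```python
import math

# Minimal Pearson correlation sketch for paired anatomical measurements,
# e.g. celiac trunk diameter vs. length. The values below are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

diameter_cm = [0.6, 0.7, 0.8, 0.9, 1.0]
length_cm = [1.8, 2.1, 2.2, 2.6, 2.8]
r = pearson(diameter_cm, length_cm)
print(f"r = {r:.3f}")  # close to +1: larger diameters pair with longer trunks
```

Whether a correlation of a given magnitude is "significant" additionally depends on the sample size (n = 60 in the study) via the usual t-test on r.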
Abstract:
BACKGROUND: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. METHODS: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m² [underweight], 18·5 kg/m² to <20 kg/m², 20 kg/m² to <25 kg/m², 25 kg/m² to <30 kg/m², 30 kg/m² to <35 kg/m², 35 kg/m² to <40 kg/m², ≥40 kg/m² [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. FINDINGS: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m² (95% credible interval 21·3-22·1) in 1975 to 24·2 kg/m² (24·0-24·4) in 2014 in men, and from 22·1 kg/m² (21·7-22·5) in 1975 to 24·4 kg/m² (24·2-24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m² in central Africa and south Asia to 29·2 kg/m² (28·6-29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m² (21·4-22·3) in south Asia to 32·2 kg/m² (31·5-32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5-17·4) to 8·8% (7·4-10·3) in men and from 14·6% (11·6-17·9) to 9·7% (8·3-11·1) in women.
South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8-29·2) in men and 24·0% (18·9-29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4-4·1) in 1975 to 10·8% (9·7-12·0) in 2014 in men, and from 6·4% (5·1-7·8) to 14·9% (13·6-16·1) in women. 2·3% (2·0-2·7) of the world's men and 5·0% (4·4-5·6) of women were severely obese (ie, had BMI ≥35 kg/m²). Globally, prevalence of morbid obesity was 0·64% (0·46-0·86) in men and 1·6% (1·3-1·9) in women. INTERPRETATION: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia. FUNDING: Wellcome Trust, Grand Challenges Canada.
Abstract:
Background: The DNA repair protein O6-Methylguanine-DNA methyltransferase (MGMT) confers resistance to alkylating agents. Several methods have been applied to its analysis, with methylation-specific polymerase chain reaction (MSP) the most commonly used for promoter methylation study, while immunohistochemistry (IHC) has become the most frequently used for the detection of MGMT protein expression. Agreement on the best and most reliable technique for evaluating MGMT status remains unsettled. The aim of this study was to perform a systematic review and meta-analysis of the correlation between IHC and MSP. Methods: A computer-aided search of MEDLINE (1950-October 2009), EBSCO (1966-October 2009) and EMBASE (1974-October 2009) was performed for relevant publications. Studies meeting inclusion criteria were those comparing MGMT protein expression by IHC with MGMT promoter methylation by MSP in the same cohort of patients. Methodological quality was assessed by using the QUADAS and STARD instruments. Previously published guidelines were followed for meta-analysis performance. Results: Of 254 studies identified as eligible for full-text review, 52 (20.5%) met the inclusion criteria. The review showed that results of MGMT protein expression by IHC are not in close agreement with those obtained with MSP. Moreover, type of tumour (primary brain tumour vs others) was an independent covariate of accuracy estimates in the meta-regression analysis beyond the cut-off value. Conclusions: Protein expression assessed by IHC alone fails to reflect the promoter methylation status of MGMT. Thus, in attempts at clinical diagnosis the two methods seem to select different groups of patients and should not be used interchangeably.
Abstract:
Abstract Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5 mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the ART-210 head phantom was used.
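The error magnitudes above combine per-axis translational offsets; in the usual convention the 3-D magnitude is the Euclidean norm of the shifts. A sketch with invented offsets (the abstract does not give the combined-uncertainty formula, so only the magnitude is shown):

```python
import math

# Sketch: the 3-D translational error magnitude of a setup offset is the
# Euclidean norm of the per-axis shifts. The mm values below are invented.

def error_magnitude(dx, dy, dz):
    return math.sqrt(dx**2 + dy**2 + dz**2)

offsets = [(0.1, -0.15, 0.1), (0.05, 0.2, -0.1), (-0.12, 0.1, 0.15)]
magnitudes = [error_magnitude(*o) for o in offsets]
mean = sum(magnitudes) / len(magnitudes)
print(f"mean error magnitude: {mean:.2f} mm")
```

Averaging such magnitudes over repeated positionings gives the mean error reported per technique; the standard deviation of the same list gives the spread quoted after the ± sign.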
Abstract:
Alzheimer's disease (AD) is the most common type of dementia among the elderly. This work is part of a larger study that aims to identify novel technologies and biomarkers or features for the early detection of AD and its degree of severity. The diagnosis is made by analyzing several biomarkers and conducting a variety of tests (although only a post-mortem examination of the patients' brain tissue is considered to provide definitive confirmation). Non-invasive intelligent diagnosis techniques would be a very valuable diagnostic aid. This paper concerns the Automatic Analysis of Emotional Response (AAER) in spontaneous speech based on classical and new emotional speech features: Emotional Temperature (ET) and fractal dimension (FD). This is a pre-clinical study aiming to validate tests and biomarkers for future diagnostic use. The method has the great advantage of being non-invasive, low cost, and without any side effects. The AAER shows very promising results for the definition of features useful in the early diagnosis of AD.
Abstract:
This contribution analyzes the evolution of the perception of certain natural hazards over the past 25 years in a Mediterranean region. Articles from newspapers have been used as an indicator. To this end, a specific Spanish newspaper has been considered and an ACCESS database has been created with the summarized information from each news item. The database includes data such as the location of each specific article in the newspaper, its length, the number of pictures and figures, the headlines and a summary of the published information, including all the instrumental data. The study focused on hydrometeorological extremes, mainly floods and droughts, in the northeast of the Iberian Peninsula. The number of headlines per event, trends and other data have been analyzed and compared with "measured" information, in order to identify any bias that could lead to an erroneous perception of the phenomenon. The SPI index (a drought index based on standardized accumulated precipitation) has been calculated for the entire region and used for the drought analysis, while a geodatabase implemented on a GIS built for all the floods recorded in Catalonia since 1900 (INUNGAMA) has been used to analyze flood evolution. Results from a questionnaire about the impact of natural hazards in two specific places have also been used to discuss the differing perceptions between rural and urban settings. Results show a better correlation between the news about drought or water scarcity and the SPI than between news on floods in Catalonia and the INUNGAMA database. A positive trend has been found for non-catastrophic floods, which is explained by a decrease in perception thresholds, the increase of population density in the most flood-prone areas and changes in land use.
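The operational SPI fits a gamma distribution to accumulated precipitation over a chosen time scale and maps the cumulative probability to a standard normal deviate. The sketch below uses a plain z-score as a simplified stand-in for that transform, with invented precipitation totals:

```python
import statistics

# Simplified SPI-like index: standardize an accumulated precipitation
# total against its climatology. The operational SPI first fits a gamma
# distribution and maps probabilities to a standard normal; a plain
# z-score is used here purely for illustration. Totals (mm) are invented.

def spi_zscore(accumulated, climatology):
    mu = statistics.mean(climatology)
    sigma = statistics.stdev(climatology)
    return (accumulated - mu) / sigma

climatology = [210, 180, 250, 190, 230, 205, 175, 220, 240, 200]
value = spi_zscore(120, climatology)
print(f"SPI (z-score form): {value:.2f}")  # strongly negative -> drought
```

Under either formulation, values near 0 mean normal conditions and values below about -1.5 to -2 flag the droughts that the newspaper coverage is compared against.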
Abstract:
The counteranion exchange of quaternary 1,2,3-triazolium salts was examined using a simple method that permitted halide ions to be swapped for a variety of anions using an anion exchange resin (A¯ form). The method was applied to 1,2,3-triazolium-based ionic liquids and the iodide-to-anion exchange proceeded in excellent to quantitative yields, concomitantly removing halide impurities. Additionally, an anion exchange resin (N3¯ form) was used to obtain benzyl azide from benzyl halide under mild reaction conditions. Likewise, following a similar protocol, bis(azidomethyl)arenes were also synthesized in excellent yields. The results of a proton NMR spectroscopic study of simple azolium-based ion pairs are discussed, with attention focused on the significance of the charge-assisted (CH)+···anion hydrogen bonds of simple azolium systems such as 1-butyl-3-methylimidazolium and 1-benzyl-3-methyl-1,2,3-triazolium salts.
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit load failure criterion.
Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology. This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
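For orientation, the generalized mid-point family referred to above evaluates the rate function at a state interpolated between the start and end of the step: a = 0 recovers explicit (forward) Euler, a = 1 backward Euler, and a = 0.5 the true mid-point rule. A sketch of the one-parameter family on a scalar ODE (this illustrates the algorithm family only, not the Gurson-Tvergaard stress-update itself):

```python
# Generalized mid-point family for x' = f(x):
#   x_{n+1} = x_n + h * f((1 - a) * x_n + a * x_{n+1})
# a = 0: explicit Euler; a = 0.5: true mid-point; a = 1: backward Euler.
# The implicit equation is solved here by simple fixed-point iteration
# (a Newton solve, as in the thesis, would converge faster).

def generalized_midpoint_step(f, x, h, a, tol=1e-12, max_iter=100):
    x_new = x  # initial guess for the implicit equation
    for _ in range(max_iter):
        x_next = x + h * f((1 - a) * x + a * x_new)
        if abs(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

# Integrate x' = -x, x(0) = 1 over [0, 1]; the exact answer is exp(-1).
import math
for a in (0.0, 0.5, 1.0):
    x, h = 1.0, 0.01
    for _ in range(100):
        x = generalized_midpoint_step(lambda v: -v, x, h, a)
    print(f"a = {a}: x(1) = {x:.6f}")
```

The a = 0.5 member is second-order accurate while the endpoints of the family are first order, which is consistent with the thesis's finding that the true mid-point algorithm is the most accurate of the family.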