235 results for Assessment methods


Relevance: 30.00%

Abstract:

BACKGROUND: For free-breathing cardiovascular magnetic resonance (CMR), the recently introduced self-navigation technique is expected to deliver high-quality data with a high success rate. The purpose of this study was to test the hypothesis that self-navigated 3D-CMR enables reliable assessment of cardiovascular anatomy in patients with congenital heart disease (CHD), and to define factors that affect image quality. METHODS: CHD patients ≥2 years old referred for CMR for initial assessment or for a follow-up study were included and underwent free-breathing self-navigated 3D CMR at 1.5T. Performance criteria were: correct description of cardiac segmental anatomy, overall image quality, coronary artery visibility, and reproducibility of great vessel diameter measurements. Factors associated with insufficient image quality were identified using multivariate logistic regression. RESULTS: Self-navigated CMR was performed in 105 patients (55% male, 23 ± 12 y). Correct segmental description was achieved in 93% and 96% of cases for observers 1 and 2, respectively. Diagnostic quality was obtained in 90% of examinations, increasing to 94% in contrast-enhanced studies. The left anterior descending, circumflex, and right coronary arteries were visualized in 93%, 87% and 98% of cases, respectively. Younger age, higher heart rate, lower ejection fraction, and lack of contrast medium were independently associated with reduced image quality. However, a similar rate of diagnostic image quality was obtained in children and adults. CONCLUSION: In patients with CHD, self-navigated free-breathing CMR provides high-resolution 3D visualization of the heart and great vessels with excellent robustness.
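The factor analysis above relies on multivariate logistic regression. A minimal, self-contained sketch of how such a model could be fitted with plain gradient descent; the predictors and data are purely hypothetical placeholders for illustration, not the study's data:

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression by batch gradient descent.

    X: list of feature vectors (e.g. standardized age, heart rate,
       ejection fraction, contrast-use flag -- illustrative choices).
    y: binary outcomes (1 = insufficient image quality).
    Returns the weight vector with the intercept first.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi           # derivative of the log-loss w.r.t. the score
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w
```

In practice a statistics package would additionally report odds ratios and confidence intervals for each predictor; this sketch only shows the fitting mechanics behind the "independently associated" claim.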

Relevance: 30.00%

Abstract:

In this article, we show how the use of state-of-the-art methods in computer science based on machine perception and learning allows the unobtrusive capture and automated analysis of interpersonal behavior in real time (social sensing). Given the high ecological validity of the behavioral sensing, the ease of behavioral-cue extraction for large groups over long observation periods in the field, the possibility of investigating completely new research questions, and the ability to provide people with immediate feedback on behavior, social sensing will fundamentally impact psychology.

Relevance: 30.00%

Abstract:

Characterizing geological features and structures in three dimensions on inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Detailed 3D data such as LiDAR point clouds make it possible to study accurately the hazard processes and the structure of geological features, in particular on vertical and overhanging rock slopes. Thus, 3D geological models have great potential for application to a wide range of geological investigations, both in research and in applied projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral / hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remotely sensed data. During the past ten years, several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops, even where inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the forecasting of potential future landslides, it is crucial to understand the evolution of rock slope stability.
Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and generating hazard maps, which are the basis for land-use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute failure mechanisms on terrestrial point clouds in order to assess rockfall susceptibility at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility from aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods. Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling.
In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m3) by integrating field observations of joint conditions, characteristics of the fracturing pattern, and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for obtaining vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for preferential rockwall retreat. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model for computing failure mechanisms and rockfall susceptibility with 3D point clouds makes it possible to accurately define the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed fracture measurements into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geological structures. The integration of these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
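A core step in any point-cloud failure-mechanism model is recovering discontinuity orientations from the 3D data. A minimal sketch, assuming a fracture plane is locally approximated by three neighboring points and using a purely kinematic sliding test with an assumed friction angle; both are simplifications for illustration, not the thesis model itself:

```python
import math

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3-D points (cross product of two edges)."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def dip_deg(normal):
    """Dip angle of the plane in degrees (0 = horizontal, 90 = vertical),
    from the angle between the normal and the vertical axis."""
    return math.degrees(math.acos(abs(normal[2])))

def planar_sliding_possible(dip, friction_angle_deg=30.0):
    """Basic kinematic test: sliding is feasible when the discontinuity dips
    more steeply than the friction angle (daylighting is ignored in this sketch)."""
    return dip > friction_angle_deg
```

In a real workflow the normal would come from a least-squares fit over many neighboring points of the terrestrial point cloud, and daylighting of the discontinuity in the slope face would also be checked before flagging a source area.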

Relevance: 30.00%

Abstract:

Human biomonitoring (HBM) is an effective tool for assessing actual exposure to chemicals that takes into account all routes of intake. Although hair analysis is considered an optimal biomarker for assessing mercury exposure, the lack of harmonization of sampling and analytical procedures has often limited the comparison of data at national and international level. The European-funded projects COPHES and DEMOCOPHES developed and tested a harmonized European approach to human biomonitoring in response to the European Environment and Health Action Plan. Herein we describe the quality assurance program (QAP) for assessing mercury levels in hair samples from more than 1800 mother-child pairs recruited in 17 European countries. To ensure the comparability of results, standard operating procedures (SOPs) for sampling and for mercury analysis were drafted and distributed to participating laboratories. Training sessions were organized for field workers, and four external quality-assessment exercises (ICI/EQUAS), each followed by a web conference, were organized between March 2011 and February 2012. The ICI/EQUAS exercises used native hair samples at two mercury concentration ranges (0.20-0.71 and 0.80-1.63) per exercise. The results revealed relative standard deviations of 7.87-13.55% and 4.04-11.31% for the low and high mercury concentration ranges, respectively. A total of 16 out of 18 participating laboratories met the QAP requirements and were allowed to analyze samples from the DEMOCOPHES pilot study. The web conferences held after each ICI/EQUAS proved to be a new and effective tool for improving analytical performance and building capacity. The procedure developed and tested in COPHES/DEMOCOPHES would be optimal for application on a global scale for the implementation of the Minamata Convention on Mercury.
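The inter-laboratory agreement above is summarized with relative standard deviations. As a reminder of that computation, a minimal sketch (the sample values in the test are invented for illustration, not data from the exercises):

```python
import math

def relative_standard_deviation(values):
    """RSD (%) = 100 * sample standard deviation / mean, the figure of merit
    used to compare laboratories within one ICI/EQUAS round."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean
```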

Relevance: 30.00%

Abstract:

The number of qualitative research methods has grown substantially over the last twenty years, both in the social sciences and, more recently, in the health sciences. This growth has come with questions about the quality criteria needed to evaluate such work, and numerous guidelines have been published. These guidelines, however, contain many discrepancies, both in their vocabulary and in their construction. Many expert evaluators decry the absence of consensual and reliable evaluation tools. The authors present the results of an evaluation of 58 existing guidelines from 4 major health science fields (medicine and epidemiology; nursing and health education; social sciences and public health; psychology / psychiatry, research methods and organization) by expert users (article reviewers, experts allocating funds, editors, etc.). The results yield a toolbox of 12 consensual criteria with the definitions given by expert users. They also indicate in which disciplinary fields each criterion is considered more or less essential. Nevertheless, the authors highlight the limits of the criteria's comparability as soon as one focuses on their specific definitions. They conclude that each criterion in the toolbox must be clarified in order to reach a broader consensus and to identify definitions that are shared across all the fields examined and easily operationalized.

Relevance: 30.00%

Abstract:

This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
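The restricted approach described above, using Monte Carlo draws only to propagate the input uncertainties through the measurement model and taking the sample mean and standard deviation as estimators of the measurand and its standard uncertainty, can be sketched in a few lines. The measurement model below (an activity computed as a count rate divided by a detection efficiency, with invented Gaussian input values) is a hypothetical stand-in, not the 103Pd calibration from the chapter:

```python
import math
import random

def propagate(model, inputs, n=100_000, seed=1):
    """Monte Carlo propagation of uncertainties: draw each input quantity
    from its assigned distribution, push every draw through the measurement
    model, and return the sample mean and sample standard deviation as
    estimators of the measurand and its standard uncertainty."""
    rng = random.Random(seed)
    draws = [model(*(f(rng) for f in inputs)) for _ in range(n)]
    mean = sum(draws) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / (n - 1))
    return mean, sd

# Hypothetical example: activity A = R / eps, with a Gaussian count rate R
# and a Gaussian detection efficiency eps (values are illustrative only).
rate = lambda rng: rng.gauss(1000.0, 10.0)   # counts per second
eff = lambda rng: rng.gauss(0.95, 0.01)      # detection efficiency
A_mean, A_u = propagate(lambda r, e: r / e, [rate, eff])
# A_mean is close to 1000 / 0.95 ≈ 1052.6
```

With these inputs, the relative standard uncertainty of A combines those of R and eps in quadrature, and the simulation reproduces that result without any analytic propagation formula.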

Relevance: 30.00%

Abstract:

INTRODUCTION: Two important risk factors for abnormal neurodevelopment are preterm birth and neonatal hypoxic ischemic encephalopathy. The new revisions of the Griffiths Mental Development Scale (Griffiths-II, [1996]) and the Bayley Scales of Infant Development (BSID-II, [1993]) are two of the most frequently used developmental diagnostic tests. The Griffiths-II is divided into five subscales and a global development quotient (QD), and the BSID-II is divided into two scales, the Mental scale (MDI) and the Psychomotor scale (PDI). The main objective of this research was to establish the extent to which developmental diagnoses obtained using the new revisions of these two tests are comparable for a given child. MATERIAL AND METHODS: Retrospective study of 18-month-old high-risk children examined with both tests in the follow-up unit of the Clinic of Neonatology of our tertiary-care university hospital between 2011 and 2012. To determine the concurrent validity of the two tests, paired t-tests and Pearson product-moment correlation coefficients were computed. Using the BSID-II as a gold standard, the performance of the Griffiths-II was analyzed with receiver operating characteristic curves. RESULTS: 61 patients (80.3% preterm, 14.7% neonatal asphyxia) were examined. For the BSID-II, the MDI mean was 96.21 (range 67-133) and the PDI mean was 87.72 (range 49-114). For the Griffiths-II, the QD mean was 96.95 (range 60-124) and the locomotor subscale mean was 92.57 (range 49-119). The Griffiths-II locomotor subscale score was significantly higher than the PDI (p<0.001). No significant difference was found between the Griffiths-II QD and the BSID-II MDI, and the area under the curve was 0.93, showing good validity. All correlations were high and significant, with Pearson product-moment correlation coefficients >0.8. CONCLUSIONS: The two tests yielded the same interpretation of the results for a given child. Two scores, the Griffiths-II QD and the BSID-II MDI, were interchangeable.
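The concurrent-validity statistics used above, Pearson product-moment correlation and paired t-tests on per-child score differences, are straightforward to compute. A minimal dependency-free sketch; the inputs in the test are placeholders, where real inputs would be the paired Griffiths-II and BSID-II scores:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def paired_t(x, y):
    """Paired t statistic computed on the per-child differences x_i - y_i."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((v - md) ** 2 for v in d) / (n - 1))
    return md / (sd / math.sqrt(n))
```

The t statistic is then compared against the t distribution with n-1 degrees of freedom to obtain the p-value reported in the abstract.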

Relevance: 30.00%

Abstract:

PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study was to use a mathematical model to objectively evaluate low-contrast detectability in CT. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with CTDIvol equal to 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used was the Channelized Hotelling Observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. CHO performance was compared to the outcomes of six human observers who performed four-alternative forced-choice (4-AFC) tests. RESULTS: For the same CTDIvol level, and according to the CHO model, the MBIR algorithm gives the highest detectability index. The outcomes of the human observers and the results of the CHO are highly correlated regardless of dose level, signal and algorithm when some noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: Human observer performance can be predicted by the CHO model. This opens the way to reporting, alongside the standard dose report, the expected level of low-contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnostic performance.
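At its core, a Channelized Hotelling Observer reduces each image to a small vector of channel outputs and computes the Hotelling detectability index from the class means and pooled covariance. A minimal sketch operating directly on channel-output vectors (the synthetic vectors in the test are invented, and the D-DOG channelization step itself is omitted):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cho_detectability(v_signal, v_noise):
    """Hotelling detectability d' = sqrt(dv^T S^-1 dv) on channel outputs:
    dv is the mean signal-present minus signal-absent channel vector,
    S the covariance pooled over both classes."""
    n_ch = len(v_signal[0])
    mean = lambda vs: [sum(v[c] for v in vs) / len(vs) for c in range(n_ch)]
    ms, mn = mean(v_signal), mean(v_noise)
    dv = [a - b for a, b in zip(ms, mn)]
    S = [[0.0] * n_ch for _ in range(n_ch)]
    for vs, m in ((v_signal, ms), (v_noise, mn)):
        for v in vs:
            d = [a - b for a, b in zip(v, m)]
            for i in range(n_ch):
                for j in range(n_ch):
                    S[i][j] += d[i] * d[j]
    tot = len(v_signal) + len(v_noise) - 2
    S = [[x / tot for x in row] for row in S]
    w = solve(S, dv)  # Hotelling template in channel space
    return math.sqrt(sum(a * b for a, b in zip(dv, w)))
```

The detectability index relates to 4-AFC percent correct through the observer's decision rule, and adding internal noise to the channel outputs, as mentioned above, lowers d' toward human performance.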

Relevance: 30.00%

Abstract:

INTRODUCTION: This article is part of a research study on the organization of primary health care (PHC) for mental health in two of Quebec's remote regions. It introduces a methodological approach, based on information found in health records, for assessing the quality of PHC offered to people suffering from depression or anxiety disorders. METHODS: Quality indicators were identified from the evidence, and case studies were reconstructed using data collected from health records over a 2-year observation period. Data collection was developed using a three-step iterative process: (1) feasibility analysis, (2) development of a data collection tool, and (3) application of the data collection method. The adaptation of the quality-of-care indicators to remote regions was appraised according to their relevance, measurability and construct validity in this context. RESULTS: As a result of this process, 18 quality indicators were shown to be relevant, measurable and valid for a critical quality appraisal of four recommended dimensions of the PHC clinical process: recognition, assessment, treatment and follow-up. CONCLUSIONS: The use of health records to assess the quality of PHC for mental health in remote regions is not only of practical interest; the rigorous and meticulous methodological approach developed in this study also has scientific value. From the perspective of stakeholders in the PHC system of care in remote areas, the quality indicators are credible and potentially transferable to other contexts. This study provides information that can help identify gaps and implement solutions adapted to the context.

Relevance: 30.00%

Abstract:

Purpose: To assess the composition, and the compliance with legislation, of multivitamin/multimineral supplements (MVMs) in Switzerland. Methods: Information on the composition of vitamin/mineral supplements was obtained from the Swiss drug compendium, the Internet, pharmacies, parapharmacies and supermarkets. An MVM was defined as containing at least 5 vitamins and/or minerals. Results: 95 MVMs were considered. The most frequent vitamins were B6 (73.7%), C (71.6%), B2 (69.5%) and B1 (67.4%); the least frequent were K (17.9%), biotin (51.6%), pantothenic acid (55.8%) and E (56.8%). Around half of the MVMs provided >150% of the ADI for vitamins. The most frequent minerals were zinc (66.3%), calcium (55.8%), magnesium (54.7%) and copper (48.4%); the least frequent were fluoride (3.2%), phosphorus (17.9%) and chromium (22.1%). Only 25% of MVMs contained iodine. More than two thirds of the MVMs provided between 15 and 150% of the ADI for minerals, and few MVMs provided >150% of the ADI. While few MVMs provided <15% of the ADI for vitamins, a considerable fraction did so for minerals (32.7% for magnesium, 26.1% for copper and 22.6% for calcium). Conclusion: There is great variability in the composition and nutrient amounts of MVMs in Switzerland. Several MVMs do not comply with Swiss legislation.
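The survey classifies each nutrient dose against the ADI using the thresholds <15%, 15-150% and >150%. A trivial sketch of that classification (the function name and example values are illustrative, not from the survey data):

```python
def adi_category(amount, adi):
    """Classify a nutrient dose relative to its ADI using the survey's
    thresholds: <15%, 15-150%, and >150% of the ADI."""
    pct = 100.0 * amount / adi
    if pct < 15.0:
        return "low (<15% ADI)"
    if pct <= 150.0:
        return "adequate (15-150% ADI)"
    return "high (>150% ADI)"
```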