175 results for hierarchical rating method


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Straylight gives the appearance of a veil of light thrown over a person's retinal image when a strong light source is present. We examined the reproducibility of straylight measurements with the C-Quant and assessed their correlation with characteristics of the eye and subjects' age. PARTICIPANTS AND METHODS: Five repeated straylight measurements were taken in the dominant eye of 45 healthy subjects (age 21-59) with a BCVA of 20/20: 14 emmetropic, 16 myopic, eight hyperopic and seven with astigmatism. We assessed the reproducibility of the straylight measures using the intraclass correlation coefficient. RESULTS: The mean straylight value of all measurements was 1.01 (SD 0.23, median 0.97, interquartile range 0.85-1.1). Per 10 years of age, straylight increased on average by 0.10 (95%CI 0.04 to 0.16, p < 0.01). We found no independent association of refraction (range -5.25 dpt to +2 dpt) with straylight values (0.001; 95%CI -0.022 to 0.024, p = 0.92). Compared to emmetropic subjects, myopia reduced straylight (-0.011; -0.024 to 0.02, p = 0.11), whereas higher straylight values (0.09; -0.01 to 0.20, p = 0.09) were observed in subjects with blue irises as compared to dark-colored irises when correcting for age. The intraclass correlation coefficient (ICC) of repeated measurements was 0.83 (95%CI 0.76 to 0.90). CONCLUSIONS: Our study showed that straylight measurements with the C-Quant had high reproducibility, i.e. a lack of large intra-observer variability, making them appropriate for long-term follow-up studies assessing the long-term effect of surgical procedures on the quality of vision.
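The intraclass correlation coefficient reported above can be illustrated with a small sketch. The data here are simulated (45 subjects, 5 repeats, values loosely matching the reported mean and SD), not the study's measurements:

```python
import numpy as np

def icc_oneway(data):
    # One-way random-effects ICC(1,1) for a subjects x repeats matrix,
    # computed from the one-way ANOVA mean squares.
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Simulated example: 45 subjects, 5 repeats each (values are illustrative)
rng = np.random.default_rng(0)
true_scores = rng.normal(1.0, 0.25, size=45)                 # subject-level straylight
data = true_scores[:, None] + rng.normal(0.0, 0.1, (45, 5))  # measurement noise
icc = icc_oneway(data)
```

With the simulated noise level the ICC comes out high, mirroring the good reproducibility reported in the abstract.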

RATIONALE: The aim of this work was to develop and validate a method for the quantification of vitamin D metabolites in serum using ultra-high-pressure liquid chromatography coupled to mass spectrometry (LC/MS), and to validate a high-resolution mass spectrometry (LC/HRMS) approach against a tandem mass spectrometry (LC/MS/MS) approach using a large clinical sample set. METHODS: A fast, accurate and reliable method for the quantification of the vitamin D metabolites 25-hydroxyvitamin D2 (25OH-D2) and 25-hydroxyvitamin D3 (25OH-D3) in human serum was developed and validated. The C3 epimer of 25OH-D3 (3-epi-25OH-D3) was also separated from 25OH-D3. The samples were rapidly prepared via a protein precipitation step followed by solid-phase extraction (SPE) using an HLB μelution plate. Quantification was performed using both LC/MS/MS and LC/HRMS systems. RESULTS: Recovery, matrix effect, and inter- and intra-day reproducibility were assessed. Lower limits of quantification (LLOQs) were determined for both 25OH-D2 and 25OH-D3 for the LC/MS/MS approach (6.2 and 3.4 µg/L, respectively) and the LC/HRMS approach (2.1 and 1.7 µg/L, respectively). A Passing & Bablok fit was determined between the two approaches for 25OH-D3 on 662 clinical samples (y = 1.11 + 1.06x). It was also shown that results can be affected by the inclusion of the isomer 3-epi-25OH-D3. CONCLUSIONS: A method for the quantification of the relevant vitamin D metabolites was successfully developed and validated. It was shown that LC/HRMS is an accurate, powerful and easy-to-use approach for quantification within clinical laboratories. Finally, the results suggest that it is important to separate 3-epi-25OH-D3 from 25OH-D3. Copyright © 2012 John Wiley & Sons, Ltd.
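The reported Passing & Bablok fit between the two approaches can be sketched as follows. This is a simplified version of the procedure (the full method shifts the median and handles ties and slopes of -1), and the sample values are illustrative, not the study's 662 measurements:

```python
import numpy as np
from itertools import combinations

def passing_bablok(x, y):
    # Simplified Passing-Bablok regression: slope = median of all pairwise
    # slopes, intercept = median of y - slope * x.
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    slope = np.median(slopes)
    return np.median(y - slope * x), slope

# Illustrative, noise-free data following the fit reported above
x = np.linspace(10.0, 60.0, 25)   # hypothetical 25OH-D3 by LC/MS/MS (ug/L)
y = 1.11 + 1.06 * x               # hypothetical 25OH-D3 by LC/HRMS (ug/L)
intercept, slope = passing_bablok(x, y)
```

Unlike ordinary least squares, this estimator assumes measurement error in both methods, which is why it is standard for method-comparison studies.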

Objectives. The goal of this study was to evaluate a T2-mapping sequence by: (i) measuring the intra- and inter-observer reproducibility in healthy volunteers across two separate scanning sessions with a T2 reference phantom; (ii) measuring the mean T2 relaxation times by T2-mapping in infarcted myocardium in patients with subacute MI, and comparing them against the patient's gold-standard X-ray coronary angiography and against results from healthy volunteers. Background. Myocardial edema is a consequence of tissue inflammation, as seen in myocardial infarction (MI). It can be visualized by cardiovascular magnetic resonance (CMR) imaging using the T2 relaxation time. T2-mapping is a quantitative methodology that has the potential to address the limitations of conventional T2-weighted (T2W) imaging. Methods. The T2-mapping protocol used for all MRI scans consisted of a radial gradient echo acquisition with a lung-liver navigator for free-breathing acquisition and affine image registration. Mid-basal short-axis slices were acquired. For the T2-map analyses, two observers semi-automatically segmented the left ventricle into 6 segments according to the AHA standards. 8 healthy volunteers (age: 27 ± 4 years; 62.5% male) were scanned in 2 separate sessions. 17 patients (age: 61.9 ± 13.9 years; 82.4% male) with subacute STEMI (70.6%) or NSTEMI underwent a T2-mapping scanning session. Results. In healthy volunteers, the mean inter- and intra-observer variability over the entire short-axis slice (segments 1 to 6) was 0.1 ms (95% confidence interval (CI): -0.4 to 0.5, p = 0.62) and 0.2 ms (95% CI: -2.8 to 3.2, p = 0.94), respectively. T2 relaxation time measurements with and without the phantom correction yielded an average difference of 3.0 ± 1.1% and 3.1 ± 2.1% (p = 0.828), respectively. In patients, the inter-observer variability over the entire short-axis slice (S1-S6) was 0.3 ms (95% CI: -1.8 to 2.4, p = 0.85).
Edema location as determined by T2-mapping and the coronary artery occlusion as determined on X-ray coronary angiography agreed in 78.6% of cases overall, but in only 60% of apical infarcts. All but one of the maximal T2 values in infarct patients were greater than the upper limit of the 95% confidence interval for normal myocardium. Conclusions. The T2-mapping methodology is accurate in detecting infarcted, i.e. edematous, tissue in patients with subacute infarcts. This study further demonstrated that this T2-mapping technique is reproducible and robust enough to be used on a segmental basis for edema detection without the need for a phantom to yield a T2 correction factor. This new quantitative T2-mapping technique is promising and is likely to allow serial follow-up studies in patients to improve our knowledge of infarct pathophysiology and infarct healing, and to support the assessment of novel treatment strategies for acute infarctions.
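The inter-observer variability figures above are mean paired differences with 95% confidence intervals. A minimal sketch of that computation, on simulated T2 values rather than the study's data:

```python
import numpy as np

def observer_bias(obs1, obs2):
    # Mean paired difference with a normal-approximation 95% CI
    # (a t quantile would be more exact for small samples).
    d = np.asarray(obs1, float) - np.asarray(obs2, float)
    bias = d.mean()
    se = d.std(ddof=1) / np.sqrt(len(d)) if len(d) > 1 else 0.0
    return bias, (bias - 1.96 * se, bias + 1.96 * se)

# Simulated segmental T2 values (ms) for two observers -- not study data
rng = np.random.default_rng(1)
t2_obs1 = rng.normal(50.0, 3.0, 60)
t2_obs2 = t2_obs1 + rng.normal(0.1, 1.0, 60)   # second observer, small offset
bias, ci = observer_bias(t2_obs2, t2_obs1)
```

A CI that straddles zero, as in the abstract's 0.1 ms (-0.4 to 0.5), indicates no systematic difference between observers.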

Summary report: Objective: The aim of this work was to study multidetector CT angiography (CTA) in the evaluation of peripheral arterial occlusive disease (PAOD) of the abdominal aorta and lower limbs, using an adaptive acquisition method to optimize arterial enhancement, in particular for the distal arterial bed and the arteries of the feet. Materials and methods: Thirty-four patients presenting with PAOD underwent transcatheter angiography (TCA) and CTA within 15 days or less of each other. CTA was performed from the coeliac trunk down to the arteries of the feet in a single acquisition at high spatial resolution (16x0.625 mm). The table speed and rotation time for each examination were chosen according to the contrast transit time obtained from a test bolus. A total of 130 ml of contrast medium at 4 ml/s was used. The CTA images were analysed by two observers, and the TCA data were interpreted independently by two other observers. The analysis covered image quality and the detection of stenoses of 50% or more, per patient and per arterial segment. The sensitivity and specificity of CTA were calculated with TCA as the reference standard. Inter-observer variability was measured using a kappa statistic. Results: TCA was non-conclusive in 0.7% of segments, whereas CTA was conclusive in all segments. In the per-patient analysis, the overall sensitivity and specificity for detecting a significant stenosis of 50% or more were both 100%. The per-segment analysis showed sensitivities and specificities ranging from 91 to 100% and from 81 to 100%, respectively. Analysis of the distal arteries of the feet showed a sensitivity of 100% and a specificity of 90%.
Conclusion: Multidetector CT angiography with this adaptive acquisition method improves image quality and provides a non-invasive and reliable technique for evaluating PAOD, including the distal arteries of the feet.
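The per-segment sensitivity and specificity quoted above follow from a simple comparison of CTA stenosis calls against the catheter-angiography reference. A sketch, assuming binary (≥50% stenosis) calls per segment and that both classes occur in the reference:

```python
def sens_spec(test, reference):
    # Sensitivity/specificity of per-segment stenosis calls (1 = stenosis >= 50%)
    # against the reference standard; assumes both classes occur in `reference`.
    tp = sum(1 for t, r in zip(test, reference) if t == 1 and r == 1)
    fn = sum(1 for t, r in zip(test, reference) if t == 0 and r == 1)
    tn = sum(1 for t, r in zip(test, reference) if t == 0 and r == 0)
    fp = sum(1 for t, r in zip(test, reference) if t == 1 and r == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical calls for 8 arterial segments (not the study's data)
cta      = [1, 1, 0, 0, 1, 0, 0, 1]
catheter = [1, 1, 0, 0, 1, 0, 1, 1]
sensitivity, specificity = sens_spec(cta, catheter)
```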

Rare species have restricted geographic ranges, habitat specialization, and/or small population sizes. Datasets on rare species distributions usually have few observations, limited spatial accuracy and a lack of valid absences; conversely, they provide comprehensive views of species distributions, allowing most of the realized environmental niche to be captured realistically. Rare species are the most in need of predictive distribution modelling but also the most difficult to model. We refer to this contrast as the "rare species modelling paradox" and propose as a solution the development of modelling approaches that deal with a sufficiently large set of predictors while ensuring that statistical models are not overfitted. Our novel approach fulfils this condition by fitting a large number of bivariate models and averaging them with a weighted ensemble approach. We further propose that this ensemble forecasting be conducted within a hierarchical multi-scale framework. We present two ensemble models for a test species, one at regional and one at local scale, each based on the combination of 630 models. In both cases, we obtained excellent spatial projections, which is unusual when modelling rare species. Model results highlight, from a statistically sound approach, the effects of multiple drivers within a single modelling framework and at two distinct scales. From this added information, regional models can support accurate forecasts of range dynamics under climate change scenarios, whereas local models allow the assessment of isolated or synergistic impacts of changes in multiple predictors. This novel framework provides a baseline for adaptive conservation, management and monitoring of rare species at distinct spatial and temporal scales.
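The core idea, fitting many bivariate models and averaging them with weights, can be sketched as follows. This sketch uses plain least-squares models weighted by in-sample R², whereas the study's ensemble uses other model types and evaluation scores:

```python
import numpy as np
from itertools import combinations

def bivariate_ensemble(X, y):
    # Fit one linear model per pair of predictors, weight each model by its
    # in-sample R^2, and return the weighted-average prediction.
    n, p = X.shape
    preds, weights = [], []
    for i, j in combinations(range(p), 2):
        A = np.column_stack([np.ones(n), X[:, i], X[:, j]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        yhat = A @ coef
        r2 = max(0.0, 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))
        preds.append(yhat)
        weights.append(r2)
    w = np.asarray(weights)
    return np.average(preds, axis=0, weights=w if w.sum() > 0 else None)

# Toy data: four environmental predictors, response driven mainly by the first
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=60)
pred = bivariate_ensemble(X, y)
```

Because each component model sees only two predictors, no single model can overfit a large predictor set, which is exactly the property the paradox calls for.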

Objective: The Agency for Healthcare Research and Quality (AHRQ) developed Patient Safety Indicators (PSIs) for use with ICD-9-CM data. Many countries have adopted ICD-10 for coding hospital diagnoses. We conducted this study to develop an internationally harmonized ICD-10 coding algorithm for the AHRQ PSIs. Methods: AHRQ PSI Version 2.1 has been translated into ICD-10-AM (Australian Modification), and PSI Version 3.0a has been independently translated into ICD-10-GM (German Modification). We converted these two country-specific coding algorithms into ICD-10-WHO (World Health Organization version) and combined them to form one master list. Members of an international expert panel, including physicians, professional medical coders, disease classification specialists, health services researchers, epidemiologists, and users of the PSI, independently evaluated this master list and rated each code as "include," "exclude," or "uncertain," following the AHRQ PSI definitions. After summarizing the independent rating results, we held a face-to-face meeting to discuss codes for which there was no unanimous consensus, as well as newly proposed codes. A modified Delphi method was employed to generate a final ICD-10-WHO coding list. Results: Of the 20 PSIs, the 15 that are based mainly on diagnosis codes were selected for translation. At the meeting, panelists discussed 794 codes for which consensus had not been achieved and 2,541 additional codes that had been proposed by individual panelists for consideration prior to the meeting. Three documents were generated: a PSI ICD-10-WHO version coding list, a list of issues for consideration concerning certain AHRQ PSIs and ICD-9-CM codes, and a recommendation to WHO to improve the specification of some disease classifications. Conclusion: An ICD-10-WHO PSI coding list has been developed and structured in a manner similar to the AHRQ manual.
Although face validity of the list has been ensured through a rigorous expert panel assessment, its true validity and applicability should be assessed internationally.
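The panel workflow, where unanimous ratings settle a code and anything else goes to the face-to-face Delphi round, can be sketched like this; the codes and votes shown are placeholders, not entries from the actual list:

```python
def triage_codes(ratings):
    # Unanimous "include"/"exclude" settles a code; any disagreement or
    # "uncertain" vote sends it to the face-to-face Delphi discussion.
    included, excluded, discuss = [], [], []
    for code, votes in sorted(ratings.items()):
        kinds = set(votes)
        if kinds == {"include"}:
            included.append(code)
        elif kinds == {"exclude"}:
            excluded.append(code)
        else:
            discuss.append(code)
    return included, excluded, discuss

# Placeholder codes and votes -- not from the actual PSI master list
panel = {
    "T81.0": ["include", "include", "include"],
    "Y60.0": ["exclude", "exclude", "exclude"],
    "T88.8": ["include", "uncertain", "include"],
}
included, excluded, discuss = triage_codes(panel)
```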

BACKGROUND: The use of the family history method is recommended in family studies as a type of proxy interview of non-participating relatives. However, using different sources of information can result in bias, as direct interviews may provide a higher likelihood of assigning diagnoses than family history reports. The aims of the present study were to: 1) compare diagnoses for threshold and subthreshold mood syndromes from direct interviews to those relying on information from relatives; 2) test the appropriateness of lowering the diagnostic threshold and combining multiple reports from the family history method to obtain prevalence estimates comparable to those from the interviews; 3) identify factors that influence the likelihood of agreement and of reporting of disorders by informants. METHODS: Within a family study, 1621 informant-index subject pairs were identified. DSM-5 diagnoses from direct interviews of index subjects were compared to those derived from family history information provided by their first-degree relatives. RESULTS: 1) Inter-informant agreement was acceptable for Mania, but low for all other mood syndromes. 2) Except for Mania and subthreshold depression, the family history method provided significantly lower prevalence estimates. The gap narrowed for all other syndromes after lowering the threshold of the family history method. 3) Individuals who had a history of depression themselves were more likely to report depression in their relatives. LIMITATIONS: Low proportion of individuals affected by manic syndromes and lack of independence of data. CONCLUSIONS: The higher likelihood of reporting disorders by affected informants entails the risk of overestimating the extent of familial aggregation of depression.
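Inter-informant agreement of the kind reported here is commonly summarized with Cohen's kappa; a minimal sketch for binary diagnoses (the data below are illustrative, and the study's actual agreement statistics may differ):

```python
def cohens_kappa(a, b):
    # Cohen's kappa for two binary diagnosis series
    # (e.g. direct interview vs family history report).
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n       # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                    # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Illustrative data: 1 = diagnosis assigned, 0 = not assigned
interview      = [1, 1, 0, 0, 1, 0, 0, 0]
family_history = [1, 0, 0, 0, 1, 0, 0, 1]
kappa = cohens_kappa(interview, family_history)
```

Kappa corrects raw percentage agreement for agreement expected by chance, which matters when diagnoses are rare, as with the manic syndromes mentioned in the limitations.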

Computed tomography (CT) is an imaging technique in which interest has been growing since it first began to be used in the early 1970s. In the clinical environment, this imaging system has emerged as the gold standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have an extended life span in comparison to adults.
For this population, the risk of developing cancer, whose latency period can exceed 20 years, is significantly higher than for adults. Assuming that each patient examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. Over the past few years, CT advances have been developing at a rapid pace. Since 2009, new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce dose as much as possible in examinations of children and young adults without compromising diagnostic image quality.

The optimization step requires the evaluation of both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", computed physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this technique has some limitations because it does not take the radiologist's perspective into account, it enables the physical characterization of image properties in a simple and timely way. The second approach, called the "clinical approach", was based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists, involved in the assessment step, were asked to score the image quality of structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and also time-consuming.
Nevertheless, it has the advantage of being very close to the practice of radiologists and is considered the reference method.

Primarily, this work revealed that the statistical iterative reconstructions studied clinically (ASIR and VEO) have a strong potential to reduce CT dose (up to -90%). However, through their mechanisms, they lead to a modification of the image appearance, with a change in image texture which may then affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that a change in texture is related to a modification of the noise spectrum bandwidth; the NPS analysis makes it possible to anticipate or avoid a decrease in image quality. This project demonstrated that integrating these new statistical iterative reconstruction techniques can be complex and cannot be done on the basis of protocols using conventional reconstructions. The conclusions of this work and the image quality tools developed will be able to guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
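The noise power spectrum (NPS) used in the "physical approach" can be estimated from an ensemble of noise-only regions of interest. A minimal sketch, assuming mean-detrended ROIs and square pixels; real phantom measurements typically also subtract a low-order background fit:

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    # 2-D noise power spectrum: ensemble average of the squared modulus of the
    # Fourier transform of mean-detrended noise-only ROIs.
    rois = [r - r.mean() for r in noise_rois]
    ny, nx = rois[0].shape
    spectra = [np.abs(np.fft.fft2(r)) ** 2 for r in rois]
    return (pixel_mm ** 2 / (nx * ny)) * np.mean(spectra, axis=0)

# White-noise ROIs stand in for phantom noise images
rois = [np.random.default_rng(i).normal(0.0, 1.0, (64, 64)) for i in range(20)]
nps = nps_2d(rois, pixel_mm=1.0)   # flat spectrum expected for white noise
```

For iterative reconstructions, the interesting observation is precisely that this spectrum is not flat: its bandwidth shifts, which is the texture change described above.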

The use of observer-rated scales requires that raters be trained until they have become reliable in using the scales. However, few studies properly report how training in using a given rating scale is conducted or indeed how it should be conducted. This study examined progress in interrater reliability over 6 months of training with two observer-rated scales, the Cognitive Errors Rating Scale and the Coping Action Patterns Rating Scale. The evolution of the intraclass correlation coefficients was modeled using hierarchical linear modeling. Results showed an overall training effect as well as effects of the basic training phase and of the rater calibration phase, the latter being smaller than the former. The results are discussed in terms of implications for rater training in psychotherapy research.
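The evolution of interrater reliability over training can be illustrated by regressing ICC on session number. This simple least-squares trend is only a stand-in for the hierarchical linear model actually used in the study, and the ICC values below are hypothetical:

```python
import numpy as np

def training_slope(sessions, iccs):
    # Least-squares linear trend of ICC against training session -- a much
    # simplified stand-in for the hierarchical linear model used in the study.
    A = np.column_stack([np.ones(len(sessions)), sessions])
    (intercept, slope), *_ = np.linalg.lstsq(A, iccs, rcond=None)
    return intercept, slope

# Hypothetical monthly ICCs over 6 months of training
months = np.arange(6.0)
iccs = np.array([0.48, 0.55, 0.61, 0.64, 0.70, 0.73])
intercept, slope = training_slope(months, iccs)   # slope > 0: training effect
```

A hierarchical model extends this idea by letting intercepts and slopes vary per phase (basic training vs calibration), which is how the study could show the calibration effect was the smaller of the two.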

Chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) experiments are widely used to determine, within entire genomes, the occupancy sites of any protein of interest, including, for example, transcription factors, RNA polymerases, or histones with or without various modifications. In addition to allowing the determination of occupancy sites within one cell type and under one condition, this method allows, in principle, the establishment and comparison of occupancy maps in various cell types, tissues, and conditions. Such comparisons require, however, that samples be normalized. Widely used normalization methods that include a quantile normalization step perform well when factor occupancy varies at a subset of sites, but may miss uniform genome-wide increases or decreases in site occupancy. We describe a spike adjustment procedure (SAP) that, unlike commonly used normalization methods intervening at the analysis stage, entails an experimental step prior to immunoprecipitation. A constant, low amount from a single batch of chromatin of a foreign genome is added to the experimental chromatin. This "spike" chromatin then serves as an internal control to which the experimental signals can be adjusted. We show that the method improves similarity between replicates and reveals biological differences including global and largely uniform changes.
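The spike adjustment reduces to a per-sample scaling: because the same amount of foreign chromatin is added to every sample, differences in spike-derived read counts reflect technical variation and can be divided out. A minimal sketch (read counts are illustrative):

```python
def spike_adjust(sample_signal, sample_spike_reads, reference_spike_reads):
    # Since every sample received the same amount of spike-in chromatin,
    # rescale so the spike yields identical read counts across samples.
    factor = reference_spike_reads / sample_spike_reads
    return [s * factor for s in sample_signal], factor

# Illustrative per-site signal and spike read counts (not from the paper)
signal_a = [120.0, 80.0, 45.0]
adjusted_a, f = spike_adjust(signal_a, sample_spike_reads=2.0e6,
                             reference_spike_reads=1.0e6)
```

Unlike quantile normalization, this scaling preserves a genuine genome-wide increase or decrease in occupancy, which is the point made in the abstract.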

Diagnosis of several neurological disorders is based on the detection of typical pathological patterns in the electroencephalogram (EEG). This is a time-consuming task requiring significant training and experience. Automatic detection of these EEG patterns would greatly assist in quantitative analysis and interpretation. We present a method that allows automatic detection of epileptiform events and their discrimination from eye blinks, based on features derived using a novel application of independent component analysis. The algorithm was trained and cross-validated using seven EEGs with epileptiform activity. For epileptiform events with compensation for eye blinks, the sensitivity was 65 +/- 22% at a specificity of 86 +/- 7% (mean +/- SD). With feature extraction by PCA or classification of raw data, specificity was reduced to 76% and 74%, respectively, at the same sensitivity. On exactly the same data, the commercially available software Reveal had a maximum sensitivity of 30% with a concurrent specificity of 77%. Our algorithm performed well at detecting epileptiform events in this preliminary test and offers a flexible tool that is intended to be generalized to the simultaneous classification of many waveforms in the EEG.
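The ICA step at the heart of the feature derivation can be sketched with a minimal FastICA implementation. This toy example separates two synthetic sources from their mixtures; it is not the paper's trained detector, and real EEG work would use a full library implementation:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    # Minimal symmetric FastICA with a tanh nonlinearity; X is channels x samples.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))          # whiten via the channel covariance
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    n, m = Z.shape
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W = (G @ Z.T) / m - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)           # symmetric decorrelation
        W = U @ Vt
    return W @ Z                              # estimated independent components

# Two synthetic sources (sine and square wave), linearly mixed
t = np.linspace(0.0, 8.0 * np.pi, 2000)
sources = np.vstack([np.sin(t), np.sign(np.sin(1.7 * t))])
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources
estimated = fastica(mixed)
```

The same decomposition applied to multichannel EEG yields components from which event features can be derived, and eye-blink components can be identified and compensated for.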

Platelet-rich plasma (PRP) is a volume of the plasma fraction of autologous blood with a platelet concentration above baseline whole-blood values, achieved through processing and concentration. PRP is used in various surgical fields to enhance soft-tissue and bone healing by delivering supra-physiological concentrations of autologous platelets at the site of tissue damage. These preparations may provide a good cellular source of various growth factors and cytokines, and modulate the tissue response to injury. Commonly available clinical materials for blood preparation, combined with a two-step centrifugation protocol at 280 g per step to preserve cellular component integrity, provided platelet preparations concentrated 2-3 fold over whole-blood values. Costs were shown to be lower than those of other methods, which require specific equipment and high-cost disposables, while safety and traceability can be increased. PRP can be used for the treatment of wounds of all types, including burns, and also of split-thickness skin graft donor sites, which are frequently used in burn management. The procedure can be standardized and is easy to adapt in clinical settings with minimal infrastructure, thus enabling large numbers of patients to benefit from a form of cellular therapy.