161 results for "measurement scale"
Abstract:
Purpose: To evaluate the sensitivity of the perfusion parameters derived from Intravoxel Incoherent Motion (IVIM) MR imaging to hypercapnia-induced vasodilatation and hyperoxygenation-induced vasoconstriction in the human brain. Materials and Methods: This study was approved by the local ethics committee and informed consent was obtained from all participants. Images were acquired with a standard pulsed-gradient spin-echo sequence (Stejskal-Tanner) in a clinical 3-T system by using 16 b values ranging from 0 to 900 sec/mm². Seven healthy volunteers were examined while they inhaled four different gas mixtures known to modify brain perfusion (pure oxygen, ambient air, 5% CO₂ in ambient air, and 8% CO₂ in ambient air). Diffusion coefficient (D), pseudodiffusion coefficient (D*), perfusion fraction (f), and blood flow-related parameter (fD*) maps were calculated on the basis of the IVIM biexponential model, and the parametric maps were compared among the four different gas mixtures. Paired, one-tailed Student t tests were performed to assess for statistically significant differences. Results: Signal decay curves were biexponential in the brain parenchyma of all volunteers. When compared with inhaled ambient air, the IVIM perfusion parameters D*, f, and fD* increased as the concentration of inhaled CO₂ was increased (for the entire brain, P = .01 for f, D*, and fD* for 5% CO₂; P = .02 for f, and P = .01 for D* and fD* for 8% CO₂), and a trend toward a reduction was observed when participants inhaled pure oxygen (although P > .05). D remained globally stable. Conclusion: The IVIM perfusion parameters were reactive to hyperoxygenation-induced vasoconstriction and hypercapnia-induced vasodilatation. Accordingly, IVIM imaging was found to be a valid and promising method to quantify brain perfusion in humans. © RSNA, 2012.
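The biexponential IVIM model referred to above is commonly written S(b)/S0 = f·exp(−b·D*) + (1 − f)·exp(−b·D). Below is a minimal fitting sketch in Python on synthetic, normalized signal data; the b-value list, noise level, starting values and bounds are illustrative assumptions, not the study's acquisition protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, Dstar, D):
    """Simplified IVIM biexponential signal model:
    S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D)

# Hypothetical set of 16 b values (sec/mm^2) spanning 0-900
b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200, 300,
              400, 500, 600, 700, 800, 900], dtype=float)

# Synthetic normalized signal for illustration only
true_f, true_Dstar, true_D = 0.10, 10e-3, 0.8e-3
signal = ivim(b, true_f, true_Dstar, true_D)
signal += np.random.default_rng(0).normal(0.0, 0.005, b.size)

# Fit with loose physical bounds; initial guess from typical brain values
popt, _ = curve_fit(ivim, b, signal,
                    p0=[0.1, 10e-3, 1e-3],
                    bounds=([0.0, 1e-3, 1e-4], [0.5, 1e-1, 3e-3]))
f_hat, Dstar_hat, D_hat = popt
print(f"f={f_hat:.3f}  D*={Dstar_hat:.2e}  D={D_hat:.2e}  fD*={f_hat * Dstar_hat:.2e}")
```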
Abstract:
AIM: To discuss the use of new ultrasonic techniques that make it possible to visualize elastic (carotid) and muscular (radial) capacitance arteries non-invasively. RESULTS OF DATA REVIEW: Measurements of carotid wall thickness and the detection of atheromas are related to arterial pressure, to other risk factors and to the risk of subsequent complications. The use of high-frequency ultrasound (7.5-10 MHz), measurements of far wall thicknesses in areas free of atheromas at end-diastole (by ECG gating or pressure waveform recording) and descriptions of the size and characteristics of atherosclerotic plaques allow a non-invasive assessment of vascular hypertrophy and atherosclerosis in hypertensive patients. CONCLUSIONS: Careful attention to methodologic and physiologic factors is needed to provide accurate information about the anatomy of the dynamically pulsating arterial tree.
Abstract:
AIM: Although acute pain is frequently reported by patients admitted to the emergency room, it is often insufficiently evaluated by physicians and is thus undertreated. With the aim of improving the care of adult patients with acute pain, we developed and implemented abbreviated clinical practice guidelines (CG) for the staff of nurses and physicians in our hospital's emergency room. METHODS: Our algorithm is based upon the practices described in the international literature and uses a simultaneous approach of treating acute pain in a rapid and efficacious manner along with diagnostic and therapeutic procedures. RESULTS: Pain was assessed using either a visual analogue scale (VAS) or a numerical rating scale (NRS) at ER admission and again during the hospital stay. Patients were treated with paracetamol and/or an NSAID (VAS/NRS < 4) or intravenous morphine (VAS/NRS ≥ 4). The algorithm also outlines a specific approach for patients with headaches to minimise the risks inherent in a non-specific treatment. In addition, our algorithm addresses the treatment of paroxysmal pain in patients with chronic pain as well as acute pain in drug addicts. It also outlines measures for pain prevention prior to minor diagnostic or therapeutic procedures. CONCLUSIONS: Based on published guidelines, an abbreviated clinical algorithm (AA) was developed, and its simple format permitted widespread implementation. In contrast to international guidelines, our algorithm favours giving nursing staff responsibility for decision-making aspects of pain assessment and treatment in emergency room patients.
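The threshold rule described above is simple enough to state as code. A minimal sketch, assuming a 0-10 VAS/NRS scale; the function name is hypothetical, and contraindication checks, the headache pathway and morphine titration are deliberately omitted.

```python
def initial_analgesia(pain_score: float) -> str:
    """Map a 0-10 VAS/NRS pain score to the first-line treatment
    described in the algorithm (threshold at 4)."""
    if not 0 <= pain_score <= 10:
        raise ValueError("VAS/NRS scores run from 0 to 10")
    if pain_score < 4:
        return "paracetamol and/or NSAID"
    return "intravenous morphine (titrated)"

assert initial_analgesia(2) == "paracetamol and/or NSAID"
assert initial_analgesia(7).startswith("intravenous morphine")
```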
Abstract:
The ancient Greek medical theory based on the balance or imbalance of humors disappeared in the western world, but survives elsewhere. Is this survival related to a certain degree of health care efficiency? We explored this hypothesis through a study of classical Greco-Arab medicine in Mauritania. Modern general practitioners evaluated the safety and effectiveness of classical Arabic medicine in a Mauritanian traditional clinic, with a prognosis/follow-up method allowing the following comparisons: (i) actual patient progress (clinical outcome) compared with what the traditional 'tabib' had anticipated (prognostic ability) and (ii) patient progress compared with what could be hoped for if the patient were treated by a modern physician in the same neighborhood. The practice appeared fairly safe and, on average, clinical outcome was similar to what could be expected with modern medicine. In some cases, patient progress was better than expected. The ability to correctly predict an individual's clinical outcome did not seem to be better under modern than under Greco-Arab theories. Weekly joint meetings (modern and traditional practitioners) were spontaneously organized with a modern health centre in the neighborhood. Practitioners of a different medical system can predict patient progress. For the patient, avoiding false expectations of health care and ensuring appropriate referral may be what matters most. Prognosis and outcome studies such as the one presented here may help to develop institutions where patients find support in making their choices, not only among several treatment options, but also among several medical systems.
Abstract:
OBJECTIVE: To assess the theoretical and practical knowledge of the Glasgow Coma Scale (GCS) among trained air-rescue physicians in Switzerland. METHODS: Prospective anonymous observational study with a specially designed questionnaire. General knowledge of the GCS and its use in a clinical case were assessed. RESULTS: Of 130 questionnaires sent out, 103 were returned (response rate of 79.2%) and analyzed. Theoretical knowledge of the GCS was consistent among registrars, fellows, consultants and private practitioners active in physician-staffed helicopters. The clinical case was wrongly scored by 38 participants (36.9%). Wrong evaluation of the motor component occurred in 28 questionnaires (27.2%), and 19 errors were made for the verbal score (18.5%). Errors were made most frequently by registrars (47.5%, p = 0.09), followed by fellows (31.6%, p = 0.67) and private practitioners (18.4%, p = 1.00). Consultants made significantly fewer errors than the rest of the participating physicians (0%, p < 0.05). No statistically significant differences were shown between anesthetists, general practitioners, internal medicine trainees or others. CONCLUSION: Although the theoretical knowledge of the GCS among out-of-hospital physicians is correct, significant errors were made in scoring a clinical case. Less experienced physicians had a higher rate of errors. Further emphasis on teaching the GCS is mandatory.
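For reference, the GCS sums three components: eye opening (1-4), verbal response (1-5) and motor response (1-6), for a total of 3-15. A minimal scoring helper with a worked example; the clinical values shown are hypothetical, not the questionnaire's actual case.

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three Glasgow Coma Scale components.
    Standard ranges: eye 1-4, verbal 1-5, motor 1-6 (total 3-15)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

# Example: eyes open to voice (E3), confused speech (V4), localizes pain (M5)
print(gcs_total(eye=3, verbal=4, motor=5))  # -> 12
```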
Abstract:
Hypoglycemia, if recurrent, may have severe consequences for the cognitive and psychomotor development of neonates. Therefore, screening for hypoglycemia is a daily routine in every facility taking care of newborn infants. Point-of-care testing (POCT) devices are attractive for neonatal use, as their handling is easy, measurements can be performed at the bedside, the required blood volume is small and results are readily available. However, such whole-blood measurements are challenged by the wide variation of hematocrit in neonates and a spectrum of normal glucose concentrations at the lower end of the test range. We conducted a prospective trial to check the precision and accuracy of the most suitable POCT device for neonatal use from each of three leading companies in Europe. Of the three devices tested (Precision Xceed, Abbott; Elite XL, Bayer; Aviva Nano, Roche), the Aviva Nano exhibited the best precision. None completely fulfilled the ISO 15197 accuracy criteria (2003 or 2011). The Aviva Nano fulfilled these criteria in 92% of cases, while the others remained below 87%. The Precision Xceed reached the 95% limit of the 2003 ISO criteria for values ≤ 4.2 mmol/L, but not for the higher range (71%). Although validated for adults, new POCT devices need to be specifically evaluated in newborn infants before adopting their routine use in neonatology.
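The ISO 15197:2003 criteria cited above require at least 95% of meter results to fall within ±0.83 mmol/L (15 mg/dL) of the reference when the reference is below 4.2 mmol/L, and within ±20% otherwise. A sketch of the check, with made-up paired readings; the tighter bands of the later revision are omitted here.

```python
import numpy as np

def iso15197_2003_within(meter_mmol, ref_mmol):
    """Flag results inside the ISO 15197:2003 accuracy bands:
    +/-0.83 mmol/L when the reference is < 4.2 mmol/L, else +/-20%.
    The standard requires >= 95% of results inside these bands."""
    meter = np.asarray(meter_mmol, dtype=float)
    ref = np.asarray(ref_mmol, dtype=float)
    ok_low = np.abs(meter - ref) <= 0.83
    ok_high = np.abs(meter - ref) <= 0.20 * ref
    return np.where(ref < 4.2, ok_low, ok_high)

# Illustrative paired readings (meter vs. laboratory reference), mmol/L
meter = [2.9, 3.8, 4.6, 5.5, 8.0]
ref = [3.3, 3.9, 4.4, 6.9, 7.6]
within = iso15197_2003_within(meter, ref)
print(f"{100 * within.mean():.0f}% of results within the 2003 bands")
```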
Abstract:
To date, there is no widely accepted clinical scale to monitor the evolution of depressive symptoms in demented patients. We assessed the sensitivity to treatment of a validated French version of the Health of the Nation Outcome Scale (HoNOS) 65+ compared with five routinely used scales. Thirty elderly inpatients with an ICD-10 diagnosis of dementia and depression were evaluated at admission and discharge, and scores were compared using paired t-tests. Using the Brief Psychiatric Rating Scale (BPRS) "depressive mood" item as the gold standard, a receiver operating characteristic (ROC) curve analysis assessed the validity of changes in the HoNOS65+F "depressive symptoms" item score. Unlike the Geriatric Depression Scale, Mini Mental State Examination and Activities of Daily Living scores, BPRS scores decreased and the Global Assessment of Functioning Scale score increased significantly from admission to discharge. Amongst the HoNOS65+F items, the "behavioural disturbance", "depressive symptoms", "activities of daily life" and "drug management" items showed highly significant changes between the first and last day of hospitalization. The ROC analysis revealed that changes in the HoNOS65+F "depressive symptoms" item correctly classified 93% of the cases, with good sensitivity (0.95) and specificity (0.88). These data suggest that the HoNOS65+F "depressive symptoms" item may provide a valid assessment of the evolution of depressive symptoms in demented patients.
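A sketch of the kind of ROC analysis described, using scikit-learn on entirely invented item-change scores and gold-standard labels; Youden's J is one common way to choose the cutoff behind reported sensitivity/specificity pairs, though the paper's exact procedure is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Invented data: change in the "depressive symptoms" item score
# (admission minus discharge) and a binary gold standard derived
# from the BPRS "depressive mood" item (1 = improved).
item_change = np.array([3, 2, 0, 1, 4, 0, 2, 3, 1, 0])
improved = np.array([1, 1, 0, 0, 1, 0, 1, 1, 1, 0])

fpr, tpr, thresholds = roc_curve(improved, item_change)
auc = roc_auc_score(improved, item_change)

best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC={auc:.2f}  cutoff={thresholds[best]}  "
      f"sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")
```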
Abstract:
Summary report. Introduction: The Glasgow Coma Scale (GCS) is a recognized tool for assessing patients after traumatic brain injury. It is known for its simplicity and reproducibility, allowing caregivers an appropriate and continuous evaluation of a patient's neurological status. The GCS comprises three categories assessing the eye, verbal and motor responses. In Switzerland, prehospital care of patients with severe head trauma is provided by physicians, mainly aboard medicalized helicopters. Before the general anaesthesia these patients require, a GCS assessment is essential to indicate to hospital staff the severity of the brain injury. To evaluate knowledge of the GCS among physicians working aboard medicalized helicopters in Switzerland, we designed a questionnaire containing a first part on general knowledge of the GCS followed by a clinical case. Objective: To evaluate the practical and theoretical knowledge of the GCS among physicians working aboard medicalized helicopters in Switzerland. Method: Prospective, anonymized observational study using a questionnaire. Assessment of general knowledge of the GCS and of its clinical use through a case presentation. Results: 16 of the 18 Swiss medicalized helicopter bases participated in our study. 130 questionnaires were sent out and the response rate was 79.2%. Theoretical knowledge of the GCS was comparable for all physicians regardless of their level of training. Errors in the assessment of the clinical case were present in 36.9% of participants: 27.2% made errors in the motor score and 18.5% in the verbal score. Errors were most frequent among registrars (47.5%, p = 0.09), followed by fellows (31.6%, p = 0.67) and physicians in private practice (18.4%, p = 1.00). Consultants made significantly fewer errors than the other participants (0%, p < 0.05). No significant difference was observed between specialties (anaesthesia, internal medicine, general medicine and "others"). Conclusion: Even though theoretical knowledge of the GCS is adequate among physicians working aboard medicalized helicopters, errors in its clinical application are present in more than a third of cases. The least experienced physicians make the most errors. Given the importance of a correct initial Glasgow score assessment, improved training is essential.
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions, the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions. The first step estimates the correlated measurement error from a reverse regression; the second step estimates the usual coefficients of interest using adjusted instruments.
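A minimal simulation of the core problem, not the paper's estimator: when the measurement error e is correlated with the model error u, OLS on the mismeasured regressor is biased beyond the usual attenuation, while an instrument independent of both errors restores consistency. In panel settings, internal instruments built from the mismeasured regressor inherit e, which is what motivates the paper's adjustment; the clean external instrument below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta = 2.0

x_true = rng.normal(size=n)
u = rng.normal(size=n)                       # model error
e = 0.5 * u + rng.normal(scale=0.5, size=n)  # measurement error, corr(e, u) != 0
x_obs = x_true + e                           # mismeasured regressor
y = beta * x_true + u

# OLS on x_obs: attenuation bias plus an extra term from corr(e, u)
beta_ols = (x_obs @ y) / (x_obs @ x_obs)

# Instrument correlated with x_true but independent of u and e
z = x_true + rng.normal(size=n)
beta_iv = (z @ y) / (z @ x_obs)

print(f"OLS: {beta_ols:.3f}  IV: {beta_iv:.3f}  (true beta = {beta})")
```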
Abstract:
This study aimed to design and validate the measurement of ankle kinetics (force, moment, and power) during consecutive gait cycles and in the field using an ambulatory system. The system consisted of a plantar pressure insole and inertial sensors (3D gyroscopes and 3D accelerometers) on the foot and shank. To test this system, 12 patients and 10 healthy elderly subjects wore shoes embedding the system and walked repeatedly across a gait laboratory that included a force plate surrounded by seven cameras, considered the reference system. The participants then walked two 50-meter trials in which only the ambulatory system was used. The ankle force components and the sagittal ankle moment measured by the ambulatory system showed a correlation coefficient (R) of more than 0.94 and a normalized RMS error (NRMSE) of less than 13% in comparison with the reference system, for both patients and healthy subjects. The transverse ankle moment and ankle power showed R > 0.85 and NRMSE < 23%. These parameters also showed high repeatability (CMC > 0.7). In contrast, the coronal ankle moment demonstrated high error and lower repeatability. Except for the coronal moment, the kinetic features obtained by the ambulatory system could distinguish the patients with ankle osteoarthritis from the healthy subjects when measured in the 50-meter trials. The proposed ambulatory system is easily accessible for most clinics and can assess the main ankle kinetic quantities with acceptable error and repeatability for clinical evaluations. This system is therefore suggested for field measurement in clinical applications.
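The two agreement measures reported above can be computed as follows; normalizing the RMS error by the reference range is one common convention (the paper's exact normalization is not stated here), and the sample values are invented.

```python
import numpy as np

def agreement(ambulatory, reference):
    """Pearson correlation (R) and RMS error normalized by the
    reference range (NRMSE, %) between two measurement systems."""
    a = np.asarray(ambulatory, dtype=float)
    r = np.asarray(reference, dtype=float)
    R = np.corrcoef(a, r)[0, 1]
    rmse = np.sqrt(np.mean((a - r) ** 2))
    return R, 100 * rmse / (r.max() - r.min())

# Illustrative sagittal ankle moment samples (invented numbers)
ref = [0.10, 0.40, 0.90, 1.30, 1.50, 1.20, 0.60, 0.20]
amb = [0.12, 0.38, 0.95, 1.25, 1.45, 1.18, 0.64, 0.18]
R, nrmse = agreement(amb, ref)
print(f"R = {R:.3f}, NRMSE = {nrmse:.1f}%")
```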
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and we conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data of the NCI-60 cell lines. Using information from the DrugBank and Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
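A schematic sketch of the alternating ("ping-pong") projection idea between two matrices that share samples, not the published algorithm: the z-scoring, thresholds, seeding and stopping rule are all illustrative assumptions.

```python
import numpy as np

def zscore(v):
    sd = v.std()
    return (v - v.mean()) / sd if sd > 0 else np.zeros_like(v)

def ping_pong(E, Dr, n_iter=50, t_gene=2.0, t_drug=2.0, seed=0):
    """Toy co-module search over E (genes x samples) and
    Dr (drugs x samples): alternately project a sample profile
    through each data set and threshold the scores until the
    selected gene set stabilizes."""
    rng = np.random.default_rng(seed)
    genes = rng.standard_normal(E.shape[0]) > 1.0  # random seed gene set
    drugs = np.zeros(Dr.shape[0], dtype=bool)
    for _ in range(n_iter):
        sample_profile = genes @ E                 # ping: genes -> samples
        drugs = zscore(Dr @ sample_profile) > t_drug
        sample_profile = drugs @ Dr                # pong: drugs -> samples
        new_genes = zscore(E @ sample_profile) > t_gene
        if np.array_equal(new_genes, genes):
            break
        genes = new_genes
    return genes, drugs

# Pure-noise toy matrices: no real co-module should survive
rng = np.random.default_rng(1)
E = rng.standard_normal((500, 60))
Dr = rng.standard_normal((100, 60))
genes, drugs = ping_pong(E, Dr)
print(genes.sum(), "genes and", drugs.sum(), "drugs selected")
```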
Abstract:
Hypoglycaemia is a major cause of neonatal morbidity and may induce long-term developmental sequelae. Clinical signs of hypoglycaemia in neonates are unspecific or even absent; therefore, precise and accurate methods for the assessment of glycaemia are needed. Glycaemia measurement in newborns has some particularities, such as a very low limit of normal glucose concentration compared to adults and a large range of normal haematocrit values. Many bedside point-of-care testing (POCT) systems are available, but the literature on their accuracy in newborn infants is scarce and not very convincing. In this retrospective study, we identified, over a 1-year study period, 1,324 paired glycaemia results, one obtained at the bedside with one of three different POCT systems (Elite XL, Ascensia Contour and ABL 735) and the other in the central laboratory of the hospital with the hexokinase reference method. All three POCT systems tended to overestimate glycaemia values, and none of them fulfilled the ISO 15197 accuracy criteria. The Elite XL appeared more appropriate than the Contour for detecting hypoglycaemia, although with low specificity. The Contour additionally showed substantial inaccuracy with increasing haematocrit. The bench analyzer ABL 735 was the most accurate of the three tested POCT systems. Both of the tested handheld glucometers have important drawbacks as screening tools for hypoglycaemia in newborn infants. The ABL 735 could be a valuable alternative, but the blood volume needed is more than 15 times that of the handheld glucometers. Before daily use in the newborn population, careful clinical evaluation of each new POCT system for glucose measurement is of utmost importance.
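The systematic overestimation reported above is typically quantified with a Bland-Altman analysis (bias and 95% limits of agreement); a minimal sketch with invented paired values.

```python
import numpy as np

def bland_altman(poct, lab):
    """Mean difference (bias) and 95% limits of agreement between a
    POCT device and the laboratory reference."""
    d = np.asarray(poct, dtype=float) - np.asarray(lab, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired glycaemia values (mmol/L)
poct = [3.1, 3.6, 4.4, 5.2, 2.8, 6.1]
lab = [2.8, 3.5, 4.1, 4.9, 2.5, 5.8]
bias, loa = bland_altman(poct, lab)
print(f"bias = {bias:+.2f} mmol/L, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```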
Abstract:
This article presents the post-delivery perceived stress inventory (PDPSI) and its psychometric properties. The inventory is unique in that it links the measurement of perceived stress to events experienced during and after delivery. A total of 235 French-speaking, primiparous mothers completed the PDPSI two days after their delivery. To evaluate the predictive validity of the PDPSI for anxiety and depression, participants also completed the EPDS and the STAI two days and six weeks postpartum. The exploratory analysis revealed a 16-item structure divided into five factors: F1, relationship with the child; F2, delivery; F3, fatigue after delivery; F4, breastfeeding; and F5, relationship with the caregivers. The PDPSI demonstrated good internal consistency. Moreover, confirmatory factor analysis produced excellent fit indices, indicating that the complexity of the PDPSI was taken into account and that the model fitted the sample well. Discriminant analysis showed that the PDPSI was not sensitive to specific changes in the sample, making the inventory generalizable to other populations. The PDPSI significantly predicted depression and anxiety in the early postpartum period, as well as anxiety six weeks postpartum. Overall, the PDPSI showed excellent psychometric qualities, making it a useful tool for future research evaluating interventions related to perceived stress during the postpartum period.
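A claim of good internal consistency is usually backed by Cronbach's alpha, alpha = k/(k-1) * (1 - sum(item variances)/variance of total score). A minimal computation sketch; the score matrix is invented, not PDPSI data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy 5-respondent x 4-item example (e.g., one PDPSI factor)
scores = [[3, 4, 3, 4],
          [1, 2, 1, 2],
          [4, 4, 5, 4],
          [2, 2, 2, 3],
          [5, 4, 4, 5]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```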
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet it is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably in the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
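A minimal sketch of the gradual-deformation idea behind the proposed perturbation: combining the current Gaussian realization with an independent one using weights cos(theta) and sin(theta) yields a field that again follows the prior, so theta tunes the MCMC step size. This assumes standardized Gaussian fields and omits the likelihood evaluation and accept/reject step; it is not the thesis's actual parameterization.

```python
import numpy as np

def gradual_deformation(m_current, m_independent, theta):
    """Combine two zero-mean, unit-variance Gaussian realizations so the
    result keeps the prior statistics for any step size theta
    (theta -> 0: tiny perturbation; theta = pi/2: independent redraw)."""
    return m_current * np.cos(theta) + m_independent * np.sin(theta)

rng = np.random.default_rng(3)
m = rng.standard_normal(10_000)  # current model realization
for _ in range(5):               # small random-walk steps in model space
    m = gradual_deformation(m, rng.standard_normal(m.size), theta=0.1)
    # ...evaluate likelihood of m and accept/reject here (omitted)
print(f"mean = {m.mean():.3f}, var = {m.var():.3f}")  # stays ~ N(0, 1)
```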