161 results for "7-point Subjective Global Assessment"


Relevance: 40.00%

Abstract:

BACKGROUND AND OBJECTIVES: Evaluation of glomerular hyperfiltration (GH) is difficult; the variable reported definitions impede comparisons between studies. A clear and universal definition of GH would help in comparing results of trials aimed at reducing GH. This study assessed how GH is measured and defined in the literature. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Three databases (Embase, MEDLINE, CINAHL) were systematically searched using the terms "hyperfiltration" or "glomerular hyperfiltration". All studies reporting a GH threshold or studying the effect of a high GFR in a continuous manner against another outcome of interest were included. RESULTS: The literature search was performed from November 2012 to February 2013 and updated in August 2014. Of the 2013 retrieved studies, 405 were included. A threshold to define GH was reported in 55.6% of studies. Of these, 88.4% used a single threshold and 11.6% used several thresholds adapted to participant sex or age. In 29.8% of the studies, the choice of a GH threshold was not based on a control group or literature references. After 2004, the use of a GH threshold increased (P<0.001), but the use of a control group to precisely define that threshold decreased significantly (P<0.001); the threshold did not differ among pediatric, adult, or mixed-age studies. The GH threshold ranged from 90.7 to 175 ml/min per 1.73 m² (median, 135 ml/min per 1.73 m²). CONCLUSION: Thirty percent of studies did not justify the choice of threshold values. The decrease of GFR in the elderly was rarely considered in defining GH. From a methodologic point of view, an age- and sex-matched control group should be used to define a GH threshold.
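
To make the recommended approach concrete, here is a minimal Python sketch of deriving a GH threshold from an age- and sex-matched control group; the GFR values are hypothetical and the mean + 2 SD convention is an assumption for illustration, not a definition taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GFR values (ml/min per 1.73 m2) from an age- and
# sex-matched control group; a real study would stratify by age and sex.
control_gfr = rng.normal(loc=105, scale=15, size=200)

# One common convention (an assumption here, not taken from the abstract):
# define glomerular hyperfiltration as a GFR above mean + 2 SD of controls.
gh_threshold = control_gfr.mean() + 2 * control_gfr.std(ddof=1)
print(f"GH threshold: {gh_threshold:.1f} ml/min per 1.73 m2")

# Classify a study participant against the matched-control threshold.
participant_gfr = 142.0
print("hyperfiltration" if participant_gfr > gh_threshold else "normal")
```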

Relevance: 40.00%

Abstract:

We investigated whether a single blood measurement using the minimally invasive technique of a finger prick, yielding a 5 µl dried blood spot (DBS), is suitable for assessing the flurbiprofen (FLB) metabolic ratio (MR). Ten healthy volunteers who had been genotyped for CYP2C9 were recruited as subjects. They received FLB alone in session 1 and FLB with fluconazole in session 2. In session 3, the subjects were pretreated for 4 days with rifampicin and received FLB with the last dose of rifampicin on day 5. Plasma and DBS samples were obtained between 0 and 8 h after FLB administration, and urine was collected during the 8 h after administration. The pharmacokinetic profiles of the drugs were comparable in DBS and plasma. FLB's apparent clearance decreased by 35% in plasma and DBS during session 2 and increased by 75% in plasma and by 30% in DBS during session 3. Good correlations were observed between MRs calculated from urine, plasma, and DBS samples.
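
A brief illustration of the pharmacokinetic quantities discussed above, using invented concentration data: apparent clearance estimated as dose over AUC, and a metabolic ratio as metabolite over parent drug. The numbers and the exact MR definition are assumptions for illustration, not the study's.

```python
import numpy as np

# Hypothetical flurbiprofen (FLB) concentration-time data; the dose and
# concentrations are invented for illustration, not taken from the study.
dose_mg = 50.0
times_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
plasma_conc = np.array([0.0, 6.5, 9.0, 7.2, 3.8, 1.9, 0.9])  # mg/L

# AUC(0-8 h) by the linear trapezoidal rule, then apparent clearance CL/F.
auc = float(np.sum(np.diff(times_h) * (plasma_conc[:-1] + plasma_conc[1:]) / 2))
cl_f = dose_mg / auc  # L/h
print(f"AUC(0-8h) = {auc:.1f} mg*h/L, CL/F = {cl_f:.2f} L/h")

# A metabolic ratio (MR) is often taken as metabolite over parent drug at a
# given time point; the exact definition used in the study may differ.
metabolite_conc = 4.1  # mg/L, hypothetical metabolite concentration
parent_conc = 7.2      # mg/L
print(f"MR = {metabolite_conc / parent_conc:.2f}")
```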

Relevance: 40.00%

Abstract:

PURPOSE: To evaluate gadocoletic acid (B-22956), a gadolinium-based paramagnetic blood pool agent, for contrast-enhanced coronary magnetic resonance angiography (MRA) in a Phase I clinical trial, and to compare the findings with those obtained using a standard noncontrast T2 preparation sequence. MATERIALS AND METHODS: The left coronary system was imaged in 12 healthy volunteers before B-22956 application and 5 (N = 11) and 45 (N = 7) minutes after application of 0.075 mmol/kg of body weight (BW) of B-22956. Additionally, imaging of the right coronary system was performed 23 minutes after B-22956 application (N = 6). A three-dimensional gradient echo sequence with T2 preparation (precontrast) or inversion recovery (IR) pulse (postcontrast) with real-time navigator correction was used. Assessment of the left and right coronary systems was performed qualitatively (a 4-point visual score for image quality) and quantitatively in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, visible vessel length, maximal luminal diameter, and the number of visible side branches. RESULTS: Significant (P < 0.01) increases in SNR (+42%) and CNR (+86%) were noted five minutes after B-22956 application, compared to precontrast T2 preparation values. A significant increase in CNR (+40%, P < 0.05) was also noted 45 minutes postcontrast. Vessels (left anterior descending artery (LAD), left coronary circumflex (LCx), and right coronary artery (RCA)) were also significantly (P < 0.05) sharper on postcontrast images. Significant increases in vessel length were noted for the LAD (P < 0.05) and LCx and RCA (both P < 0.01), while significantly more side branches were noted for the LAD and RCA (both P < 0.05) when compared to precontrast T2 preparation values. CONCLUSION: The use of the intravascular contrast agent B-22956 substantially improves both objective and subjective parameters of image quality on high-resolution three-dimensional coronary MRA. The increase in SNR, CNR, and vessel sharpness minimizes current limitations of coronary artery visualization with high-resolution coronary MRA.
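
For reference, SNR and CNR are typically computed from region-of-interest statistics as sketched below; the ROI values are hypothetical and the exact formulas used in the study may differ.

```python
# Hypothetical region-of-interest (ROI) statistics; values are illustrative.
blood_signal = 420.0       # mean signal in the coronary lumen ROI
myocardium_signal = 230.0  # mean signal in an adjacent myocardium ROI
noise_sd = 18.0            # standard deviation of a background (air) ROI

# Common working definitions (the paper's exact formulas may differ):
snr = blood_signal / noise_sd
cnr = (blood_signal - myocardium_signal) / noise_sd
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")

# Percent change between pre- and post-contrast acquisitions, as reported.
snr_pre, snr_post = 30.0, 42.6
print(f"SNR change: {100 * (snr_post - snr_pre) / snr_pre:+.0f}%")
```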

Relevance: 40.00%

Abstract:

Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences of inappropriate actions and decisions, as well as of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Given that each of these components may change, risk is dynamic and should be reviewed periodically by governments, insurance companies or development agencies. At the global level, such analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, both to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 Tb covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify underlying risk factors; these analyses were performed for several hazards (e.g. floods, tropical cyclones, earthquakes and landslides). Two different multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. It appears that vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as intensity increases.

Locally, the method was tested to highlight the influence of climate change and ecosystem decline on the hazard. In northern Pakistan, deforestation exacerbates landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed a rapid glacier retreat and provided an assessment of the remaining ice volume as well as scenarios of possible evolution.

These results were presented to different audiences, including 160 governments. The results and data generated are made available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to different scales and issues, with good prospects for adaptation to other research areas. Risk characterization at the global level and identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain as limitations. However, it is clear that the level of development, and moreover unsustainable development, shapes a large part of disaster risk, and that the dynamics of risk are primarily governed by global change.
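
The abstract describes risk as the intersection of hazard, exposure and vulnerability; a common simplification is a multiplicative index per grid cell, sketched below with invented numbers (this is not the thesis' actual model).

```python
import numpy as np

# Toy grid-cell model of disaster risk as the product of hazard frequency,
# exposed population and a vulnerability proxy; a common simplification,
# not the exact model used in this work. All numbers are invented.
hazard_freq = np.array([0.02, 0.10, 0.30])     # expected events per year
exposed_pop = np.array([50000, 5000, 200])     # people in the hazard zone
vulnerability = np.array([0.001, 0.01, 0.05])  # share of exposed at risk per event

expected_losses = hazard_freq * exposed_pop * vulnerability
for i, loss in enumerate(expected_losses):
    print(f"cell {i}: expected human losses per year = {loss:.2f}")
```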

Relevance: 40.00%

Abstract:

Pseudoachondroplasia (PSACH) and multiple epiphyseal dysplasia (MED) are relatively common skeletal dysplasias resulting in short-limbed dwarfism, joint pain, and stiffness. PSACH and the largest proportion of autosomal dominant MED (AD-MED) result from mutations in cartilage oligomeric matrix protein (COMP); however, AD-MED is genetically heterogeneous and can also result from mutations in matrilin-3 (MATN3) and type IX collagen (COL9A1, COL9A2, and COL9A3). In contrast, autosomal recessive MED (rMED) appears to result exclusively from mutations in the sulphate transporter solute carrier family 26 (SLC26A2). The diagnosis of PSACH and MED can be difficult for the nonexpert due to various complications and similarities with other related diseases, and mutation analysis is often requested to either confirm or exclude the diagnosis. Since 2003, the European Skeletal Dysplasia Network (ESDN) has used an on-line review system to efficiently diagnose cases referred to the network prior to mutation analysis. In this study, we present the molecular findings in 130 patients referred to ESDN, which include the identification of novel and recurrent mutations in over 100 patients. Furthermore, this study provides the first indication of the relative contribution of each gene and confirms that they account for the majority of PSACH and MED.

Relevance: 40.00%

Abstract:

Characterizing geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Detailed 3D data, such as LiDAR point clouds, allow accurate study of hazard processes and of the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations in both research and applied geology projects, such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral / hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies as well as of failure mechanisms and stability conditions by integrating detailed remote data.

During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops even when inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to increase the possibility of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful to simulate trajectory profiles and to generate hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls?

I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods.
Limit equilibrium models have been applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m³), which removed the entire south-west pillar, by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru.

Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments.

Finally, the following points summarize the main outputs of my research. The new model to compute failure mechanisms and rockfall susceptibility with 3D point clouds allows accurate definition of the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures in geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geologic structures. The integration of these results, on rock mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
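
As an illustration of computing failure mechanisms from point clouds, the sketch below fits a plane to a patch of points, derives its dip and dip direction, and applies a simplified Markland-type kinematic test for planar sliding. The fitting approach, the test, and all parameter values are assumptions for illustration, not the model developed in this work.

```python
import numpy as np

def plane_dip_and_dipdir(points):
    """Fit a plane to 3D points (least squares via SVD) and return
    dip and dip direction in degrees (z up, y = north, x = east)."""
    centered = points - points.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:  # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    dipdir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dipdir

def planar_sliding_feasible(joint_dip, joint_dipdir, slope_dip, slope_dipdir,
                            friction_angle=35.0, tol=20.0):
    """Simplified Markland-type test: sliding is kinematically feasible if the
    joint dips in roughly the same direction as the slope face and dips more
    steeply than the friction angle but less than the face. Parameter values
    here are assumptions."""
    same_direction = abs((joint_dipdir - slope_dipdir + 180) % 360 - 180) < tol
    return same_direction and friction_angle < joint_dip < slope_dip

# Hypothetical patch of LiDAR points lying roughly on a discontinuity plane.
rng = np.random.default_rng(1)
u, v = rng.uniform(-1, 1, (2, 100))
pts = np.column_stack([u, v, 1.2 * u + 0.3 * v]) + rng.normal(0, 0.01, (100, 3))

dip, dipdir = plane_dip_and_dipdir(pts)
print(f"dip = {dip:.1f} deg, dip direction = {dipdir:.1f} deg")
print("planar sliding feasible:",
      planar_sliding_feasible(dip, dipdir, slope_dip=75, slope_dipdir=dipdir))
```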

Relevance: 40.00%

Abstract:

Human biomonitoring (HBM) is an effective tool for assessing actual exposure to chemicals that takes into account all routes of intake. Although hair analysis is considered to be an optimal biomarker for assessing mercury exposure, the lack of harmonization as regards sampling and analytical procedures has often limited the comparison of data at national and international level. The European-funded projects COPHES and DEMOCOPHES developed and tested a harmonized European approach to Human Biomonitoring in response to the European Environment and Health Action Plan. Herein we describe the quality assurance program (QAP) for assessing mercury levels in hair samples from more than 1800 mother-child pairs recruited in 17 European countries. To ensure the comparability of the results, standard operating procedures (SOPs) for sampling and for mercury analysis were drafted and distributed to participating laboratories. Training sessions were organized for field workers, and four external quality-assessment exercises (ICI/EQUAS), followed by the corresponding web conferences, were organized between March 2011 and February 2012. ICI/EQUAS used native hair samples at two mercury concentration ranges (0.20-0.71 and 0.80-1.63) per exercise. The results revealed relative standard deviations of 7.87-13.55% and 4.04-11.31% for the low and high mercury concentration ranges, respectively. A total of 16 out of 18 participating laboratories met the QAP requirements and were allowed to analyze samples from the DEMOCOPHES pilot study. Web conferences after each ICI/EQUAS round proved to be a new and effective tool for improving analytical performance and increasing capacity building. The procedure developed and tested in COPHES/DEMOCOPHES would be optimal for application on a global scale as regards implementation of the Minamata Convention on Mercury.
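
The inter-laboratory relative standard deviations reported above can be computed as in the short sketch below, using hypothetical laboratory results for one ICI/EQUAS sample.

```python
import numpy as np

# Hypothetical mercury results reported by participating laboratories for one
# ICI/EQUAS hair sample; values are invented for illustration.
lab_results = np.array([0.62, 0.66, 0.71, 0.58, 0.69, 0.64, 0.67, 0.60])

mean = lab_results.mean()
sd = lab_results.std(ddof=1)
rsd_percent = 100 * sd / mean  # relative standard deviation (CV)
print(f"mean = {mean:.2f}, RSD = {rsd_percent:.1f}%")
```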

Relevance: 40.00%

Abstract:

INTRODUCTION: Two important risk factors for abnormal neurodevelopment are preterm birth and neonatal hypoxic ischemic encephalopathy. The new revisions of the Griffiths Mental Development Scales (Griffiths-II, [1996]) and the Bayley Scales of Infant Development (BSID-II, [1993]) are two of the most frequently used developmental diagnostic tests. The Griffiths-II is divided into five subscales and a global development quotient (QD), and the BSID-II is divided into two scales, the Mental scale (MDI) and the Psychomotor scale (PDI). The main objective of this research was to establish the extent to which developmental diagnoses obtained using the new revisions of these two tests are comparable for a given child. MATERIAL AND METHODS: Retrospective study of 18-month-old high-risk children examined with both tests in the follow-up unit of the Clinic of Neonatology of our tertiary care university hospital between 2011 and 2012. To determine the concurrent validity of the two tests, paired t-tests and Pearson product-moment correlation coefficients were computed. Using the BSID-II as a gold standard, the performance of the Griffiths-II was analyzed with receiver operating characteristic curves. RESULTS: 61 patients (80.3% preterm, 14.7% neonatal asphyxia) were examined. For the BSID-II, the MDI mean was 96.21 (range 67-133) and the PDI mean was 87.72 (range 49-114). For the Griffiths-II, the QD mean was 96.95 (range 60-124) and the locomotor subscale mean was 92.57 (range 49-119). The score of the Griffiths locomotor subscale was significantly higher than the PDI (p<0.001). No significant difference was found between the Griffiths-II QD and the BSID-II MDI, and the area under the curve was 0.93, showing good validity. All correlations were high and significant, with Pearson product-moment correlation coefficients >0.8. CONCLUSIONS: The meaning of the results for a given child was the same for the two tests. Two scores, the Griffiths-II QD and the BSID-II MDI, were interchangeable.
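
A minimal sketch of the concurrent-validity analysis described above (paired t-test, Pearson correlation, and ROC analysis against the BSID-II as gold standard), using simulated scores; the MDI < 85 cut-off for "delay" is an assumption for illustration, not the study's criterion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical paired scores for the same 61 children (not the study data).
bsid_mdi = rng.normal(96, 12, 61).round()
griffiths_qd = (bsid_mdi + rng.normal(0, 5, 61)).round()

# Concurrent validity: paired t-test and Pearson correlation.
t, p = stats.ttest_rel(griffiths_qd, bsid_mdi)
r, _ = stats.pearsonr(griffiths_qd, bsid_mdi)
print(f"paired t-test p = {p:.3f}, Pearson r = {r:.2f}")

# ROC analysis with the BSID-II as gold standard (delay defined here, by
# assumption, as MDI < 85); AUC derived from the Mann-Whitney U statistic.
delayed = bsid_mdi < 85
u, _ = stats.mannwhitneyu(griffiths_qd[~delayed], griffiths_qd[delayed])
auc = u / (np.sum(~delayed) * np.sum(delayed))
print(f"AUC of Griffiths-II QD for detecting BSID-II delay: {auc:.2f}")
```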

Relevance: 40.00%

Abstract:

OBJECTIVES: Due to the high prevalence of renal failure in transcatheter aortic valve replacement (TAVR) candidates, a non-contrast MR technique is desirable for pre-procedural planning. We sought to evaluate the feasibility of a novel, non-contrast, free-breathing, self-navigated three-dimensional (SN3D) MR sequence for imaging the aorta from its root to the iliofemoral run-off in comparison to non-contrast two-dimensional-balanced steady-state free-precession (2D-bSSFP) imaging. METHODS: SN3D [field of view (FOV), 220-370 mm(3); slice thickness, 1.15 mm; repetition/echo time (TR/TE), 3.1/1.5 ms; and flip angle, 115°] and 2D-bSSFP acquisitions (FOV, 340 mm; slice thickness, 6 mm; TR/TE, 2.3/1.1 ms; flip angle, 77°) were performed in 10 healthy subjects (all male; mean age, 30.3 ± 4.3 yrs) using a 1.5-T MRI system. Aortic root measurements and qualitative image ratings (four-point Likert-scale) were compared. RESULTS: The mean effective aortic annulus diameter was similar for 2D-bSSFP and SN3D (26.7 ± 0.7 vs. 26.1 ± 0.9 mm, p = 0.23). The mean image quality of 2D-bSSFP (4; IQR 3-4) was rated slightly higher (p = 0.03) than SN3D (3; IQR 2-4). The mean total acquisition time for SN3D imaging was 12.8 ± 2.4 min. CONCLUSIONS: Our results suggest that a novel SN3D sequence allows rapid, free-breathing assessment of the aortic root and the aortoiliofemoral system without administration of contrast medium. KEY POINTS: • The prevalence of renal failure is high among TAVR candidates. • Non-contrast 3D MR angiography allows for TAVR procedure planning. • The self-navigated sequence provides a significantly reduced scanning time.

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate a diagnostic strategy for pulmonary embolism that combined clinical assessment, plasma D-dimer measurement, lower limb venous ultrasonography, and helical computed tomography (CT). METHODS: A cohort of 965 consecutive patients presenting to the emergency departments of three general and teaching hospitals with clinically suspected pulmonary embolism underwent sequential noninvasive testing. Clinical probability was assessed by a prediction rule combined with implicit judgment. All patients were followed for 3 months. RESULTS: A normal D-dimer level (<500 µg/L by a rapid enzyme-linked immunosorbent assay) ruled out venous thromboembolism in 280 patients (29%), and finding a deep vein thrombosis by ultrasonography established the diagnosis in 92 patients (9.5%). Helical CT was required in only 593 patients (61%) and showed pulmonary embolism in 124 patients (12.8%). Pulmonary embolism was considered ruled out in the 450 patients (46.6%) with a negative ultrasound and CT scan and a low-to-intermediate clinical probability. The 8 patients with a negative ultrasound and CT scan despite a high clinical probability proceeded to pulmonary angiography (positive: 2; negative: 6). Helical CT was inconclusive in 11 patients (pulmonary embolism: 4; no pulmonary embolism: 7). The overall prevalence of pulmonary embolism was 23%. Patients classified as not having pulmonary embolism were not anticoagulated during follow-up and had a 3-month thromboembolic risk of 1.0% (95% confidence interval: 0.5% to 2.1%). CONCLUSION: A noninvasive diagnostic strategy combining clinical assessment, D-dimer measurement, ultrasonography, and helical CT yielded a diagnosis in 99% of outpatients suspected of pulmonary embolism, and appeared to be safe, provided that CT was combined with ultrasonography to rule out the disease.
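
The sequential strategy reads naturally as a decision algorithm; the sketch below encodes it in simplified form (illustrative only, not a clinical decision tool).

```python
def pe_workup(clinical_probability, d_dimer_ug_l, ultrasound_dvt, ct_result):
    """Sketch of the sequential strategy described in the abstract (simplified).
    clinical_probability: 'low', 'intermediate' or 'high'
    ct_result: 'positive', 'negative', 'inconclusive' or None
    """
    # Step 1: a normal rapid ELISA D-dimer rules out venous thromboembolism.
    if d_dimer_ug_l < 500:
        return "PE ruled out (normal D-dimer)"
    # Step 2: a DVT on lower-limb ultrasonography establishes the diagnosis.
    if ultrasound_dvt:
        return "PE diagnosed (DVT on ultrasound)"
    # Step 3: helical CT.
    if ct_result == "positive":
        return "PE diagnosed (helical CT)"
    if ct_result == "negative" and clinical_probability in ("low", "intermediate"):
        return "PE ruled out (negative CT and ultrasound, low/intermediate probability)"
    # Discordant or inconclusive results proceed to pulmonary angiography.
    return "pulmonary angiography required"

print(pe_workup("low", 350, False, None))
print(pe_workup("high", 900, False, "negative"))
```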

Relevance: 30.00%

Abstract:

BACKGROUND: The number of nonagenarians and centenarians is rising dramatically, and many of them live in nursing homes. Very little is known about psychiatric symptoms and cognitive abilities other than memory in this population. This exploratory study focuses on anosognosia and its relationship with common psychiatric and cognitive symptoms. METHODS: Fifty-eight subjects aged 90 years or older were recruited from geriatric nursing homes and divided into five groups according to Mini-Mental State Examination scores. Assessment included the five-word test, executive clock-drawing task, lexical and categorical fluencies, Anosognosia Questionnaire-Dementia, Neuropsychiatric Inventory, and Charlson Comorbidity Index. RESULTS: Subjects had moderate cognitive impairment, with a mean ± SD Mini-Mental State Examination score of 15.41 ± 7.04. Anosognosia increased with cognitive impairment and was associated with all cognitive domains, as well as with apathy and agitation. Subjects with mild global cognitive decline seemed less anosognosic than subjects with the least or no impairment. Neither anosognosia nor psychopathological features were related to physical conditions. CONCLUSIONS: Anosognosia in oldest-old nursing home residents was mostly mild. It was associated with both cognitive and psychopathological changes, but whether anosognosia is causal to the observed psychopathological features requires further investigation.

Relevance: 30.00%

Abstract:

BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH STRATEGY: We systematically searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register, Cochrane Central Register of Controlled Trials 2008 Issue 4, MEDLINE (1966 to January 2009), and EMBASE (1980 to January 2009). We combined methodological terms with terms related to smoking cessation counselling and biomedical measurements. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. Results were expressed as a relative risk (RR) for smoking cessation with 95% confidence intervals (CI). Where appropriate a pooled effect was estimated using a Mantel-Haenszel fixed effect method. MAIN RESULTS: We included eleven trials using a variety of biomedical tests. Two pairs of trials had sufficiently similar recruitment, setting and interventions to calculate a pooled effect; there was no evidence that CO measurement in primary care (RR 1.06, 95% CI 0.85 to 1.32) or spirometry in primary care (RR 1.18, 95% CI 0.77 to 1.81) increased cessation rates. We did not pool the other seven trials. One trial in primary care detected a significant benefit of lung age feedback after spirometry (RR 2.12; 95% CI 1.24 to 3.62). One trial that used ultrasonography of carotid and femoral arteries and photographs of plaques detected a benefit (RR 2.77; 95% CI 1.04 to 7.41) but enrolled a population of light smokers. Five trials failed to detect evidence of a significant effect. One of these tested CO feedback alone and CO + genetic susceptibility as two different interventions; none of the three possible comparisons detected significant effects. Three others used a combination of CO and spirometry feedback in different settings, and one tested for a genetic marker. AUTHORS' CONCLUSIONS: There is little evidence about the effects of most types of biomedical tests for risk assessment. Spirometry combined with an interpretation of the results in terms of 'lung age' had a significant effect in a single good quality trial. Mixed quality evidence does not support the hypothesis that other types of biomedical risk assessment increase smoking cessation in comparison to standard treatment. Only two pairs of studies were similar enough in terms of recruitment, setting, and intervention to allow meta-analysis.
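
For readers unfamiliar with the pooling method mentioned above, the Mantel-Haenszel fixed-effect risk ratio can be computed as sketched below, using invented trial counts rather than the review's data.

```python
# Hypothetical quit counts for two similar trials (events / total in the
# intervention and control arms); not the Cochrane review's data.
trials = [
    # (events_intervention, n_intervention, events_control, n_control)
    (30, 250, 28, 248),
    (22, 180, 19, 175),
]

# Mantel-Haenszel fixed-effect pooled risk ratio:
# RR_MH = sum(a_i * n_control_i / N_i) / sum(c_i * n_intervention_i / N_i)
num = sum(a * nc / (ni + nc) for a, ni, c, nc in trials)
den = sum(c * ni / (ni + nc) for a, ni, c, nc in trials)
rr_mh = num / den
print(f"Pooled RR (Mantel-Haenszel, fixed effect) = {rr_mh:.2f}")
```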

Relevance: 30.00%

Abstract:

For many drugs, finding the balance between efficacy and toxicity requires monitoring their concentrations in the patient's blood. Quantifying drug levels at the bedside or at home would have advantages in terms of therapeutic outcome and convenience, but current techniques require the facilities of a diagnostic laboratory. We have developed semisynthetic bioluminescent sensors that permit precise measurements of drug concentrations in patient samples by spotting minimal volumes on paper and recording the signal using a simple point-and-shoot camera. Our sensors have a modular design consisting of a protein-based part and a synthetic part and can be engineered to selectively recognize a wide range of drugs, including immunosuppressants, antiepileptics, anticancer agents and antiarrhythmics. This low-cost point-of-care method could make therapies safer, increase convenience for doctors and patients, and make therapeutic drug monitoring available in regions with poor infrastructure.

Relevance: 30.00%

Abstract:

BACKGROUND: Clinical scores may help physicians to better assess the individual risk/benefit of oral anticoagulant therapy. We aimed to externally validate and compare the prognostic performance of 7 clinical prediction scores for major bleeding events during oral anticoagulation therapy. METHODS: We followed 515 adult patients taking oral anticoagulants to measure the first major bleeding event over a 12-month follow-up period. The performance of each score to predict the risk of major bleeding and the physician's subjective assessment of bleeding risk were compared with the C statistic. RESULTS: The cumulative incidence of a first major bleeding event during follow-up was 6.8% (35/515). According to the 7 scoring systems, the proportions of major bleeding ranged from 3.0% to 5.7% for low-risk, 6.7% to 9.9% for intermediate-risk, and 7.4% to 15.4% for high-risk patients. The overall predictive accuracy of the scores was poor, with the C statistic ranging from 0.54 to 0.61 and not significantly different from each other (P=.84). Only the Anticoagulation and Risk Factors in Atrial Fibrillation score performed slightly better than would be expected by chance (C statistic, 0.61; 95% confidence interval, 0.52-0.70). The performance of the scores was not statistically better than physicians' subjective risk assessments (C statistic, 0.55; P=.94). CONCLUSION: The performance of 7 clinical scoring systems in predicting major bleeding events in patients receiving oral anticoagulation therapy was poor and not better than physicians' subjective assessments.
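
The C statistic used to compare the scores is the probability that a randomly chosen patient who bled had a higher predicted risk than one who did not (ties counted as one half); a small sketch with simulated data is shown below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: a risk-score category per patient (0 = low,
# 1 = intermediate, 2 = high) and whether a major bleed occurred;
# these are simulated values, not the study data.
n = 515
score = rng.integers(0, 3, size=n)
bleed_prob = np.array([0.04, 0.07, 0.10])[score]  # weakly informative score
bleed = rng.random(n) < bleed_prob

# C statistic = P(score of a bleeder > score of a non-bleeder), ties count 1/2;
# this equals the area under the ROC curve.
cases, controls = score[bleed], score[~bleed]
wins = (cases[:, None] > controls[None, :]).sum()
ties = (cases[:, None] == controls[None, :]).sum()
c_stat = (wins + 0.5 * ties) / (len(cases) * len(controls))
print(f"C statistic = {c_stat:.2f}")
```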