40 results for Berman, Marshall
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were examined for the effects of data weighting and of squared fit coefficients during curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA), and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While the polynomial model has limitations, when used with care and with appropriate data weighting it offers a simple and robust means of examining the detector noise components as a function of detector exposure.
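A minimal sketch of the quadratic decomposition described above (in Python; the variable names and the relative weighting scheme are our assumptions, not the Guidelines' exact prescription): the electronic term is constant with DAK, the quantum term grows linearly, and the fixed-pattern term grows quadratically.

    import numpy as np

    def decompose_noise(dak, var):
        """Fit var(DAK) = e + q*DAK + f*DAK**2, i.e. electronic (e),
        quantum (q) and fixed-pattern (f) variance coefficients."""
        dak = np.asarray(dak, dtype=float)
        var = np.asarray(var, dtype=float)
        A = np.vstack([np.ones_like(dak), dak, dak ** 2]).T
        # Relative weighting: divide residuals by the local variance, since
        # the uncertainty of a sampled variance scales with its magnitude;
        # this keeps high-DAK points from dominating the fit.
        sw = 1.0 / var
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], var * sw, rcond=None)
        e, q, f = coeffs
        return {"electronic": e, "quantum": q, "fixed_pattern": f}

    # Fractional contributions at a given exposure follow directly, e.g.
    # quantum fraction = q*dak / (e + q*dak + f*dak**2).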
Abstract:
Canadian healthcare is changing. Over the course of the past decade, the Health Care in Canada Survey (HCIC) has annually measured the reactions of the public and professional stakeholders to many of these forces of change. In HCIC 2008, for the first time, the public's perception of their health status and all stakeholders' views of the burden and effective management of chronic diseases were sought. Overall, Canadians perceive themselves as healthy, with 84% of adults reporting good-to-excellent health. However, good health decreased with age as the occurrence of chronic illness rose, from 12% in the age group 18-24 to 65% in the population ≥65 years. More than 70% of all stakeholders were strongly or somewhat supportive of the implementation of coordinated care, or disease management programs, to improve the care of patients with chronic illnesses. Concordant support was also expressed for key disease management components, including coordinated interventions to improve home, community and self-care; increased wellness promotion; and increased use of clinical measurements and feedback to all stakeholders. However, there were also important areas of non-concordance. For example, the public and doctors consistently expressed less support than other stakeholders for the value of team care, including the use of non-physician professionals to provide patient care; increased patient involvement in decision-making; and the use of electronic health records to facilitate communication. Actual participation in disease management programs averaged 34% for professionals and 25% for the public. We conclude that chronic diseases are common, age-related and burdensome in Canada. Disease management, or coordinated intervention often delivered by teams, is also relatively common, despite its less-than-universal acceptance by all stakeholders. Further insights are needed, particularly into the variable perceptions of the value and efficacy of team-delivered healthcare and its important components.
Abstract:
Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson, Nature 466, 1057-1062 (2010); Nowak et al. reply. Nowak et al. argue that inclusive fitness theory has been of little value in explaining the natural world, and that it has led to negligible progress in explaining the evolution of eusociality. However, we believe that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature. We will focus our comments on three general issues.
Abstract:
Introduction: Low brain tissue oxygen pressure (PbtO2) is associated with worse outcome in patients with severe traumatic brain injury (TBI). However, it is unclear whether brain tissue hypoxia is merely a marker of injury severity or a predictor of prognosis, independent of intracranial pressure (ICP) and injury severity. Hypothesis: We hypothesized that brain tissue hypoxia was an independent predictor of outcome in patients with severe TBI, irrespective of elevated ICP and of the severity of cerebral and systemic injury. Methods: This observational study was conducted at the Neurological ICU, Hospital of the University of Pennsylvania, an academic level I trauma center. Patients admitted with severe TBI who had PbtO2 and ICP monitoring were included in the study. PbtO2, ICP, mean arterial pressure (MAP) and cerebral perfusion pressure (CPP = MAP - ICP) were monitored continuously and recorded prospectively every 30 min. Using linear interpolation, the duration and cumulative dose (area under the curve, AUC) of brain tissue hypoxia (PbtO2 < 15 mm Hg), elevated ICP (> 20 mm Hg) and low CPP (< 60 mm Hg) were calculated, and the association with outcome at hospital discharge, dichotomized as good (Glasgow Outcome Score [GOS] 4-5) vs. poor (GOS 1-3), was analyzed. Results: A total of 103 consecutive patients, monitored for an average of 5 days, were studied. Brain tissue hypoxia was observed in 66 (64%) patients and occurred despite ICP < 20 mm Hg and CPP > 60 mm Hg (for 72 +/- 39% and 49 +/- 41% of the brain hypoxic time, respectively). Compared with patients with good outcome, those with poor outcome had a longer duration of brain hypoxia (1.7 +/- 3.7 vs. 8.3 +/- 15.9 hrs, P < 0.01), as well as a longer duration (11.5 +/- 16.5 vs. 21.6 +/- 29.6 hrs, P = 0.03) and a greater cumulative dose (56 +/- 93 vs. 143 +/- 218 mm Hg*hrs, P < 0.01) of elevated ICP. By multivariable logistic regression, admission Glasgow Coma Scale (OR 0.83, 95% CI: 0.70-0.99, P = 0.04), Marshall CT score (OR 2.42, 95% CI: 1.42-4.11, P < 0.01), APACHE II (OR 1.20, 95% CI: 1.03-1.43, P = 0.03), and the duration of brain tissue hypoxia (OR 1.13, 95% CI: 1.01-1.27, P = 0.04) were all significantly associated with poor outcome. No independent association was found between the AUC for elevated ICP and outcome (OR 1.01, 95% CI: 0.97-1.02, P = 0.11) in our prospective cohort. Conclusions: In patients with severe TBI, brain tissue hypoxia is frequent, despite normal ICP and CPP, and is associated with poor outcome, independent of intracranial hypertension and the severity of cerebral and systemic injury. Our findings indicate that PbtO2 is a strong physiologic prognostic marker after TBI. Further study is warranted to examine whether PbtO2-directed therapy improves outcome in severely head-injured patients.
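The duration and cumulative-dose (AUC) computation described in the Methods can be illustrated with a short sketch (Python; the 1-minute resampling grid and the variable names are our assumptions, while the 30-min sampling and linear interpolation follow the abstract):

    import numpy as np

    def hypoxia_burden(t_hr, pbto2, threshold=15.0):
        """Return hours below `threshold` and the cumulative dose
        (mmHg*h) below it, interpolating between 30-min samples."""
        t = np.asarray(t_hr, dtype=float)
        p = np.asarray(pbto2, dtype=float)
        # Dense resampling approximates the study's linear interpolation.
        t_fine = np.arange(t[0], t[-1], 1.0 / 60.0)      # 1-min grid, hours
        p_fine = np.interp(t_fine, t, p)
        below = p_fine < threshold
        duration = below.sum() / 60.0                    # hours below threshold
        dose = (threshold - p_fine)[below].sum() / 60.0  # AUC, mmHg*h
        return duration, dose

    # Example: five samples taken every 30 min over 2 h.
    d, auc = hypoxia_burden([0.0, 0.5, 1.0, 1.5, 2.0], [18, 14, 12, 16, 19])

The same routine applies to elevated ICP (> 20 mm Hg) and low CPP (< 60 mm Hg) by flipping the comparison.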
Abstract:
PURPOSE: To investigate the relationship between hemoglobin (Hgb) and brain tissue oxygen tension (PbtO(2)) after severe traumatic brain injury (TBI) and to examine its impact on outcome. METHODS: This was a retrospective analysis of a prospective cohort of severe TBI patients whose PbtO(2) was monitored. The relationship between Hgb, categorized into four quartiles (≤9; 9-10; 10.1-11; >11 g/dl), and PbtO(2) was analyzed using mixed-effects models. Anemia with compromised PbtO(2) was defined as episodes of Hgb ≤ 9 g/dl with simultaneous PbtO(2) < 20 mmHg. Outcome was assessed at 30 days using the Glasgow outcome score (GOS), dichotomized as favorable (GOS 4-5) vs. unfavorable (GOS 1-3). RESULTS: We analyzed 474 simultaneous Hgb and PbtO(2) samples from 80 patients (mean age 44 ± 20 years, median GCS 4 (3-7)). Using Hgb > 11 g/dl as the reference level, and controlling for important physiologic covariates (CPP, PaO(2), PaCO(2)), Hgb ≤ 9 g/dl was the only Hgb level associated with lower PbtO(2) (coefficient -6.53, 95% CI: -9.13 to -3.94, p < 0.001). Anemia with simultaneous PbtO(2) < 20 mmHg, but not anemia alone, increased the risk of unfavorable outcome (odds ratio 6.24, 95% CI: 1.61-24.22, p = 0.008), controlling for age, GCS, Marshall CT grade, and APACHE II score. CONCLUSIONS: In this cohort of severe TBI patients whose PbtO(2) was monitored, an Hgb level no greater than 9 g/dl was associated with compromised PbtO(2). Anemia with simultaneously compromised PbtO(2), but not anemia alone, was a risk factor for unfavorable outcome, irrespective of injury severity.
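A hedged sketch of this kind of mixed-effects analysis (repeated Hgb/PbtO2 samples nested within patients) is given below; the file and column names and the random-intercept specification are illustrative assumptions, not the authors' code:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per simultaneous Hgb/PbtO2 sample (columns are assumptions).
    df = pd.read_csv("samples.csv")
    df["hgb_cat"] = pd.Categorical(
        df["hgb_cat"], categories=[">11", "10.1-11", "9-10", "<=9"])

    # Random intercept per patient accounts for repeated measures;
    # ">11 g/dl" is the reference level, as in the abstract.
    model = smf.mixedlm("pbto2 ~ C(hgb_cat) + cpp + pao2 + paco2",
                        data=df, groups=df["patient_id"])
    result = model.fit()
    print(result.summary())  # the '<=9' term corresponds to the reported -6.53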
Abstract:
Both obesity and being underweight have been associated with increased mortality. Underweight, defined as a body mass index (BMI) ≤ 18.5 kg per m(2) in adults and ≤ -2 standard deviations from the mean in children, is the main sign of a series of heterogeneous clinical conditions including failure to thrive, feeding and eating disorder and/or anorexia nervosa. In contrast to obesity, few genetic variants underlying these clinical conditions have been reported. We previously showed that hemizygosity of a ∼600-kilobase (kb) region on the short arm of chromosome 16 causes a highly penetrant form of obesity that is often associated with hyperphagia and intellectual disabilities. Here we show that the corresponding reciprocal duplication is associated with being underweight. We identified 138 duplication carriers (including 132 novel cases and 108 unrelated carriers) from individuals clinically referred for developmental or intellectual disabilities (DD/ID) or psychiatric disorders, or recruited from population-based cohorts. These carriers show significantly reduced postnatal weight and BMI. Half of the boys younger than five years are underweight with a probable diagnosis of failure to thrive, whereas adult duplication carriers have an 8.3-fold increased risk of being clinically underweight. We observe a trend towards increased severity in males, as well as a depletion of male carriers among non-medically ascertained cases. These features are associated with an unusually high frequency of selective and restrictive eating behaviours and a significant reduction in head circumference. Each of the observed phenotypes is the converse of one reported in carriers of deletions at this locus. The phenotypes correlate with changes in transcript levels for genes mapping within the duplication but not in flanking regions. The reciprocal impact of these 16p11.2 copy-number variants indicates that severe obesity and being underweight could have mirror aetiologies, possibly through contrasting effects on energy balance.
Abstract:
Objectives: To compare the image quality performance of the two types of CR systems. Materials and methods: Performance was measured by means of the modulation transfer function (MTF), the noise power spectrum, the detective quantum efficiency (DQE), the threshold contrast detectability expressed as gold thickness, and the mean glandular dose. The needle-based CR systems Agfa HM5.0 and Carestream SNP-M1 were compared with the powder-based systems Agfa MM3.0, Fuji Profect CS and Carestream EHR-M3. Results: The MTF at 5 mm⁻¹ was 0.21 and 0.27 for Agfa HM5.0 and Carestream SNP-M1, and between 0.14 and 0.16 for the powder systems. A maximum DQE of 0.51 and 0.5 was obtained for Agfa HM5.0 and Carestream SNP-M1, versus 0.35, 0.50 and 0.34 for Agfa MM3.0, Fuji Profect and Carestream EHR-M3. DQE values at 5 mm⁻¹ of 0.18 and 0.13 were obtained for Agfa HM5.0 and Carestream SNP-M1, and between 0.04 and 0.065 for the powder systems. The threshold contrast detectability of Agfa HM5.0 and Carestream SNP-M1 was 1.33 µm and 1.29 µm, versus 1.45 µm and 1.63 µm for Agfa MM3.0 and Fuji Profect. Conclusion: Needle-based systems offer better MTF and DQE and a lower contrast visibility threshold than powder-based systems.
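For context, the DQE quoted above is conventionally computed from the presampling MTF and the normalized noise power spectrum (NNPS); a minimal sketch follows, in which the fluence-per-kerma constant q_per_uGy and the array inputs are illustrative assumptions for the beam quality used:

    import numpy as np

    def dqe(mtf, nnps, air_kerma_uGy, q_per_uGy=5500.0):
        """DQE(f) = MTF(f)**2 / (q * NNPS(f)), with q the incident
        photon fluence (photons/mm^2) at the detector."""
        q = air_kerma_uGy * q_per_uGy   # photons/mm^2 for this exposure
        return np.asarray(mtf, float) ** 2 / (q * np.asarray(nnps, float))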
Abstract:
Medulloblastoma, the most common malignant paediatric brain tumour, is currently treated with nonspecific cytotoxic therapies including surgery, whole-brain radiation, and aggressive chemotherapy. As medulloblastoma exhibits marked intertumoural heterogeneity, with at least four distinct molecular variants, previous attempts to identify targets for therapy have been underpowered because of small sample sizes. Here we report somatic copy number aberrations (SCNAs) in 1,087 unique medulloblastomas. SCNAs are common in medulloblastoma, and are predominantly subgroup-enriched. The most common region of focal copy number gain is a tandem duplication of SNCAIP, a gene associated with Parkinson's disease, which is exquisitely restricted to Group 4α. Recurrent translocations of PVT1, including PVT1-MYC and PVT1-NDRG1, that arise through chromothripsis are restricted to Group 3. Numerous targetable SCNAs, including recurrent events targeting TGF-β signalling in Group 3 and NF-κB signalling in Group 4, suggest future avenues for rational, targeted therapy.
Abstract:
The exhaustion of fossil energy sources is a topical theme whose beginnings, according to common opinion, date from the 1970s and the first oil shock. In reality, it is an older concern, intimately linked to the industrial era. In the second half of the nineteenth century, economists took up the question of the exhaustion of mineral ores, until then an 'unidentified object' requiring the development of new analytical tools (notably the rebound effect in Jevons and the mining rent in Marshall and Einaudi). With technical progress and the appearance of new energy sources (oil, hydroelectricity), their fears of industrial decline gradually dissipated in the 1910s and 1920s. But these developments in the history of events are not the only ones to consider. Factors internal to the discipline of economics, such as the emergence of marginalism in the 1870s and of the theory of saving and capital in the 1890s, also changed the way economists viewed the question of resource exhaustion. Why? How? What lessons can be drawn for today's environmental challenges? These are the questions addressed in this thesis.
Abstract:
The northeastern portion of the Mont Blanc massif in western Switzerland is composed predominantly of the granitic rocks of the Mont Blanc intrusive suite and the Mont Blanc basement gneisses. Within these metamorphic rocks are a variety of sub-economic Fe skarns. The mineral assemblages and fluid inclusions from these rocks have been used to derive age, pressure, temperature and fluid composition constraints for two Variscan events. Metamorphic hornblendes within the assemblages from the basement amphibolites and iron skarns have been dated using Ar-40/Ar-39, and indicate that these metamorphic events have a minimum age of approximately 334 Ma. Garnet-hornblende-plagioclase thermobarometry and stable isotope data obtained from the basement amphibolites are consistent with metamorphic temperatures in the range 515 to 580 degrees C and pressures ranging from 5 to 8 kbar. Garnet-hornblende-magnetite thermobarometry and fluid inclusion studies indicate that the iron skarns formed at slightly lower temperatures, ranging from 400 to 500 degrees C, in the presence of saline fluids at formational pressures similar to those experienced by the basement amphibolites. Late Paleozoic minimum uplift rates and geothermal gradients calculated using these data and the presence of Ladinian ichnofossils are on the order of 0.32 mm/year and 20 degrees C/km, respectively. These uplift rates and geothermal gradients differ from those obtained from the neighbouring Aiguilles Rouges massif and indicate that these two massifs experienced different metamorphic conditions during the Carboniferous and Permian periods. During the early to late Carboniferous period the relative depths of the two massifs were reversed, with the Aiguilles Rouges being initially unroofed at a much greater rate than the Mont Blanc but experiencing relatively slower uplift rates near the termination of the Variscan orogeny.
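As a back-of-the-envelope check of how uplift rates of this order arise (the lithostatic assumption, the crustal density and a Ladinian age of roughly 240 Ma are our illustrative inputs, not values taken from the study):

    # Burial depth from lithostatic pressure P = rho * g * z.
    rho, g = 2700.0, 9.81                # crustal density (kg/m^3), gravity
    P = 8.0e8                            # 8 kbar (upper bound quoted above), Pa
    z_km = P / (rho * g) / 1000.0        # about 30 km

    # From ~334 Ma metamorphism to surface exposure by Ladinian time (~240 Ma).
    dt_myr = 334.0 - 240.0
    uplift_mm_per_yr = (z_km * 1.0e6) / (dt_myr * 1.0e6)   # about 0.32 mm/yr

    # ~550 degC peak temperature over ~30 km gives roughly 18-20 degC/km.
    gradient_degC_per_km = 550.0 / z_km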
Abstract:
BACKGROUND: Abdominal infections are frequent causes of sepsis and septic shock in the intensive care unit (ICU) and are associated with adverse outcomes. We analyzed the characteristics, treatments and outcome of ICU patients with abdominal infections using data extracted from a one-day point prevalence study, the Extended Prevalence of Infection in the ICU (EPIC) II. METHODS: EPIC II included 13,796 adult patients from 1,265 ICUs in 75 countries. Infection was defined using the International Sepsis Forum criteria. Microbiological analyses were performed locally. Participating ICUs provided patient follow-up until hospital discharge or for 60 days. RESULTS: Of the 7,087 infected patients, 1,392 (19.6%) had an abdominal infection on the study day (60% male, mean age 62 ± 16 years, SAPS II score 39 ± 16, SOFA score 7.6 ± 4.6). Microbiological cultures were positive in 931 (67%) patients, most commonly Gram-negative bacteria (48.0%). Antibiotics were administered to 1,366 (98.1%) patients. Patients who had been in the ICU for ≤ 2 days prior to the study day had more Escherichia coli, methicillin-sensitive Staphylococcus aureus and anaerobic isolates, and fewer enterococci, than patients who had been in the ICU longer. ICU and hospital mortality rates were 29.4% and 36.3%, respectively. ICU mortality was higher in patients with abdominal infections than in those with other infections (29.4% vs. 24.4%, p < 0.001). In multivariable analysis, hematological malignancy, mechanical ventilation, cirrhosis, need for renal replacement therapy and SAPS II score were independently associated with increased mortality. CONCLUSIONS: The characteristics, microbiology and antibiotic treatment of abdominal infections in critically ill patients are diverse. Mortality in patients with isolated abdominal infections was higher than in those who had other infections.
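A hedged sketch of the multivariable mortality model described above (Python/statsmodels; the file and column names are illustrative assumptions, not the study's dataset):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per ICU patient with abdominal infection (columns assumed).
    df = pd.read_csv("epic2_abdominal.csv")
    model = smf.logit(
        "died ~ hematological_malignancy + mechanical_ventilation"
        " + cirrhosis + renal_replacement + saps2", data=df)
    result = model.fit()
    print(np.exp(result.params))   # odds ratios for each covariate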
Abstract:
Raman spectroscopy has been used by fluid inclusionists to: 1) identify and quantitatively determine the relative abundances of gaseous species within fluid inclusions; 2) identify solid phases precipitating from, or accidentally trapped within, fluid inclusions; and 3) determine the detection limits of the C-13/C-12 ratio in the CO2-bearing phase of fluid inclusions.
Abstract:
This study applies the two-pronged analytical approach developed by Antoine Berman in Pour une critique des traductions: John Donne (Gallimard, 1994). Berman's method attends both to a history of translations, event-based and individual, and to an analysis of the texts in the light of their surroundings (paratexts, translation projects, etc.). In a first part, with the help of a historical overview, we attempt to describe and understand the importation of Rilke's poetry into French translation, from the first versions of the early twentieth century to the most recent translations of the Duino Elegies (2008, 2010). Taking up Berman's formula, we 'go to the translator', to his way of translating and to the translation he delivers. We thus consider the identity of these translators (first or new), their socio-cultural status, and the circumstances in which they came to translate and saw their work published. The aim is to establish, in synthetic fashion, what Berman, under the influence of H. R. Jauss, calls the 'horizon' of a translation, which, at a given date, takes into account a plurality of criteria ranging from traits specific to the translator to the poetic codes in force in the broad field of letters and in society. We thus place translation within the wider context of cultural transfer and importation, and we examine the translators involved: academics, poets, full-time translators and, to a lesser extent, philosophers. From this historical overview emerges the idea of a competition among the many translators of Rilke's poetry, especially between academics and poets. In a second part, reflecting the other facet of Berman's thought, we undertake a comparison and critical evaluation of several French versions of the first Duino Elegy, the Rilkean poetic opus most retranslated into French. Our corpus is limited to this first Elegy and to some ten French versions, which we bring into dialogue or opposition. Starting from initial considerations on the prosodic and typographic envelope of the poem, which allow us to grasp the diversity of the undertakings and to identify both common lines of force and the singularity of more marginal experiments, we then 'confront' the first four French versions of the first Elegy, produced almost simultaneously in the 1930s by translators from varied horizons (a Germanist, J.F. Angelloz; a painter, L. Albert-Lasard; a professional translator, M. Betz; and a poet, A. Guerne). The aim is to grasp the contribution of each, as well as the kind of link that unites (or opposes) them. The study of the fourth version, that of Armel Guerne, leads almost naturally to the question of the perception of poetic deviation and of the extra-ordinary character of the original. In the light of this cardinal problem in poetry, we compare versions from the last fifty years and, by considering several elements of meaning, attempt to see how each translator, who is also and above all a reader, perceived and rendered in his French text the poetic matter proper to the original. At the end of this contrastive journey through different versions of the first Elegy, we compare the results of our textual analysis with the finding of competition that emerged from the first part. It will then be a matter of seeing whether translation practice, as it manifests itself concretely at the level of the text, reflects a particular antagonism between Poetry and the University, or whether, on the contrary, this dichotomy should be relativized, or even demystified.
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d' as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to calculated threshold detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
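For reference, the detectability index of the NPWE observer described above is conventionally computed as follows (notation ours; W is the task function set by the disc size and contrast, and E is the eye filter):

    d'_{NPWE} = \frac{\iint W^{2}(u,v)\,\mathrm{MTF}^{2}(u,v)\,E^{2}(u,v)\,du\,dv}
                     {\left[ \iint W^{2}(u,v)\,\mathrm{MTF}^{2}(u,v)\,E^{4}(u,v)\,\mathrm{NNPS}(u,v)\,du\,dv \right]^{1/2}}

The measured NNPS enters the denominator, which is how the dependence of d' on detector air kerma reported above arises.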