76 results for Ward-MLM
Abstract:
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
Abstract:
The purpose of this paper is to describe the development, and to test the reliability, of a new method for health service needs assessment called INTERMED. The INTERMED integrates the biopsychosocial aspects of disease and the relationship between patient and health care system into a comprehensive scheme, and reflects an operationalized conceptual approach to case mix or case complexity. The method was developed to enhance interdisciplinary communication between medical and paramedical specialists and to provide a way to describe case complexity for clinical, scientific, and educational purposes. First, a feasibility study (N = 21 patients) was conducted, which included double scoring and discussion of the results. This led to a version of the instrument on which two interrater reliability studies were performed. In study 1, the INTERMED was double scored for 14 patients admitted to an internal medicine ward by a psychiatrist and an internist, on the basis of a joint interview conducted by both. In study 2, two clinicians separately double scored the INTERMED, on the basis of medical charts, for 16 patients referred to the outpatient psychiatric consultation service. Averaged over both studies, in 94.2% of all ratings there was no important difference between the raters (defined as a difference of more than 1 point). As a research interview, the INTERMED takes about 20 minutes; as part of the whole process of history taking, it takes about 15 minutes. In both studies, the results suggested improvements. Analyses of study 1 revealed considerable agreement on most items; some items were improved. Also, the reference point for the prognoses was changed so that it reflected both short- and long-term prognoses. Analyses of study 2 showed that in this setting less agreement between the raters was obtained, because the raters were less experienced and the scoring procedure was more susceptible to differences.
Some improvements, mainly of the anchor points, were specified that may further enhance interrater reliability. The INTERMED proves to be a reliable method for classifying patients' care needs, especially when used by experienced raters scoring by patient interview. It can be a useful tool in assessing patients' care needs, as well as the level of needed adjustment between general and mental health service delivery. The INTERMED is easily applicable in the clinical setting at low time cost.
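The agreement criterion used in this abstract (raters counted as concordant when their item scores differ by no more than 1 point) amounts to a simple proportion. A minimal sketch, with hypothetical rating values that are not the study's data:

```python
def within_one_point_agreement(ratings_a, ratings_b):
    """Fraction of paired item ratings in which two raters differ
    by at most 1 point (the abstract's 'no important difference')."""
    pairs = list(zip(ratings_a, ratings_b))
    concordant = sum(1 for a, b in pairs if abs(a - b) <= 1)
    return concordant / len(pairs)

# Hypothetical ratings on a 0-3 scale for ten items:
rater_1 = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
rater_2 = [0, 2, 2, 1, 1, 3, 0, 3, 0, 1]
print(f"{within_one_point_agreement(rater_1, rater_2):.0%}")  # 80%
```

Note that this proportion does not correct for chance agreement; weighted kappa would be the usual next step for ordinal scales.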
Abstract:
As an approved vaccine adjuvant for use in humans, alum has vast health implications, but, as it is a crystal, questions remain regarding its mechanism. Furthermore, little is known about the target cells, receptors, and signaling pathways engaged by alum. Here we report that, independent of inflammasome and membrane proteins, alum binds dendritic cell (DC) plasma membrane lipids with substantial force. Subsequent lipid sorting activates an abortive phagocytic response that leads to antigen uptake. Such activated DCs, without further association with alum, show high affinity and stable binding with CD4(+) T cells via the adhesion molecules intercellular adhesion molecule-1 (ICAM-1) and lymphocyte function-associated antigen-1 (LFA-1). We propose that alum triggers DC responses by altering membrane lipid structures. This study therefore suggests an unexpected mechanism for how this crystalline structure interacts with the immune system and how the DC plasma membrane may behave as a general sensor for solid structures.
New genetic loci implicated in fasting glucose homeostasis and their impact on type 2 diabetes risk.
Abstract:
Levels of circulating glucose are tightly regulated. To identify new loci influencing glycemic traits, we performed meta-analyses of 21 genome-wide association studies informative for fasting glucose, fasting insulin and indices of beta-cell function (HOMA-B) and insulin resistance (HOMA-IR) in up to 46,186 nondiabetic participants. Follow-up of 25 loci in up to 76,558 additional subjects identified 16 loci associated with fasting glucose and HOMA-B and two loci associated with fasting insulin and HOMA-IR. These include nine loci newly associated with fasting glucose (in or near ADCY5, MADD, ADRA2A, CRY2, FADS1, GLIS3, SLC2A2, PROX1 and C2CD4B) and one influencing fasting insulin and HOMA-IR (near IGF1). We also demonstrated association of ADCY5, PROX1, GCK, GCKR and DGKB-TMEM195 with type 2 diabetes. Within these loci, likely biological candidate genes influence signal transduction, cell proliferation, development, glucose-sensing and circadian regulation. Our results demonstrate that genetic studies of glycemic traits can identify type 2 diabetes risk loci, as well as loci containing gene variants that are associated with a modest elevation in glucose levels but are not associated with overt diabetes.
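The HOMA-B and HOMA-IR indices referenced above are the standard homeostasis-model formulas (fasting glucose in mmol/l, fasting insulin in μU/ml). A minimal sketch with illustrative input values, not data from the study:

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA index of insulin resistance:
    (fasting glucose [mmol/l] * fasting insulin [uU/ml]) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def homa_b(glucose_mmol_l, insulin_uU_ml):
    """HOMA index of beta-cell function:
    (20 * fasting insulin [uU/ml]) / (fasting glucose [mmol/l] - 3.5)."""
    return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

# Illustrative fasting values: glucose 5.0 mmol/l, insulin 9.0 uU/ml
print(homa_ir(5.0, 9.0))  # 2.0
print(homa_b(5.0, 9.0))   # 120.0
```

Both indices are surrogate measures derived from a single fasting sample; HOMA-B is undefined at glucose ≤ 3.5 mmol/l.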
Abstract:
To target pharmacological prevention, instruments giving an approximation of an individual patient's risk of developing postoperative delirium are available. In view of the variable clinical presentation, identifying patients in whom prophylaxis has failed (that is, who develop delirium) remains a challenge. Several bedside instruments are available for the routine ward and ICU setting. Several have been shown to have a high specificity and sensitivity when compared with the standard definitions according to DSM-IV-TR and ICD-10. The Confusion Assessment Method (CAM) and a version specifically developed for the intensive care setting (CAM-ICU) have emerged as a standard. However, alternatives allowing grading of the severity of delirium are also available. In many units, the approach to delirium follows a three-step strategy. Initially, non-pharmacological multicomponent strategies are used for primary prevention. As a second step, pharmacological prophylaxis may be added. Perioperative administration of haloperidol has been shown to reduce the severity, but not the incidence, of delirium. Perioperative administration of atypical antipsychotics has been shown to reduce the incidence of delirium in specific groups of patients. In patients with delirium, both symptomatic and causal treatment of delirium need to be considered. So far symptomatic treatment of delirium is primarily based on antipsychotics. Currently, cholinesterase inhibitors cannot be recommended and the data on dexmedetomidine are inconclusive. With the exception of alcohol-withdrawal delirium, there is no role for benzodiazepines in the treatment of delirium. It is unclear whether treating delirium prevents long-term sequelae.
Abstract:
BACKGROUND: Adrenal insufficiency is a rare disease, potentially lethal if untreated. Several clinical signs and biological markers are associated with glucocorticoid failure, but the value of these factors for diagnosing adrenal insufficiency is not known. In this study, we aimed to assess the prevalence of, and the factors associated with, adrenal insufficiency among patients admitted to an acute internal medicine ward. METHODS: Retrospective case-control study including all patients with high-dose (250 μg) ACTH-stimulation tests for suspected adrenal insufficiency performed between 2008 and 2010 in an acute internal medicine ward (n = 281). Cortisol values <550 nmol/l upon ACTH stimulation were considered diagnostic for adrenal insufficiency. The area under the ROC curve (AROC), sensitivity, specificity, and negative and positive predictive values for adrenal insufficiency were assessed for thirteen symptoms, signs and biological variables. RESULTS: 32 patients (11.4%) presented with adrenal insufficiency; the others served as controls. Among all clinical and biological parameters studied, a history of glucocorticoid withdrawal was the only independent factor significantly associated with adrenal insufficiency (odds ratio: 6.71, 95% CI: 3.08-14.62). Using logistic regression, a model with four significant and independent variables was obtained, comprising history of glucocorticoid withdrawal (OR 7.38, 95% CI [3.18; 17.11], p < 0.001), nausea (OR 3.37, 95% CI [1.03; 11.00], p = 0.044), eosinophilia (OR 17.6, 95% CI [1.02; 302.3], p = 0.048) and hyperkalemia (OR 2.41, 95% CI [0.87; 6.69], p = 0.092). For this model, the AROC (95% CI) was 0.75 (0.70; 0.80), with a sensitivity of 6.3% (0.8-20.8) and a specificity of 99.2% (97.1-99.9).
CONCLUSIONS: 11.4% of patients with suspected adrenal insufficiency admitted to an acute medical ward actually present with adrenal insufficiency, defined by an abnormal response to a high-dose (250 μg) ACTH-stimulation test. A history of glucocorticoid withdrawal was the strongest predictor of adrenal failure. The combination of a history of glucocorticoid withdrawal, nausea, eosinophilia and hyperkalemia may help raise suspicion of adrenal insufficiency.
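The diagnostic metrics reported in this abstract (sensitivity, specificity, predictive values) all derive from a 2x2 table of test result vs. disease status. A minimal sketch; the counts below are hypothetical, chosen only so the ratios land near the reported sensitivity (~6%) and specificity (~99%), and are not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 32 cases and 249 controls in total.
sens, spec, ppv, npv = diagnostic_metrics(tp=2, fp=2, fn=30, tn=247)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```

A model like this one, with very high specificity but very low sensitivity, is useful for ruling in rather than ruling out the diagnosis.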
Abstract:
OBJECTIVES: There are some common occupational agents and exposure circumstances where evidence of carcinogenicity is substantial but not yet conclusive for humans. The objectives are to identify research gaps and needs for twenty agents prioritized for review based on evidence of widespread human exposures and potential carcinogenicity in animals or humans. DATA SOURCES: A systematic review was conducted of new data published since the most recent pertinent IARC monograph meeting. DATA EXTRACTION: Reviewers were charged with identifying data gaps and general and specific approaches to address them, focusing on research that would be important in resolving classification uncertainties. An expert meeting brought reviewers together to discuss each agent and the identified data gaps and approaches. DATA SYNTHESIS: Several overarching issues were identified that pertained to multiple agents; these included the importance of recognizing that carcinogenic agents can act through multiple toxicity pathways and mechanisms, including epigenetic mechanisms, oxidative stress and immuno- and hormonal modulation. CONCLUSIONS: Studies in occupational populations provide important opportunities to understand the mechanisms through which exogenous agents cause cancer and intervene to prevent human exposure and/or prevent or detect cancer among those already exposed. Scientific developments are likely to increase the challenges and complexities of carcinogen testing and evaluation in the future, and epidemiologic studies will be particularly critical to inform carcinogen classification and risk assessment processes.
Abstract:
Infectious and inflammatory diseases have repeatedly shown strong genetic associations within the major histocompatibility complex (MHC); however, the basis for these associations remains elusive. To define host genetic effects on the outcome of a chronic viral infection, we performed genome-wide association analysis in a multiethnic cohort of HIV-1 controllers and progressors, and we analyzed the effects of individual amino acids within the classical human leukocyte antigen (HLA) proteins. We identified >300 genome-wide significant single-nucleotide polymorphisms (SNPs) within the MHC and none elsewhere. Specific amino acids in the HLA-B peptide binding groove, as well as an independent HLA-C effect, explain the SNP associations and reconcile both protective and risk HLA alleles. These results implicate the nature of the HLA-viral peptide interaction as the major factor modulating durable control of HIV infection.
Abstract:
The neurobiological basis of psychogenic movement disorders remains poorly understood, and the management of these conditions remains difficult. Functional neuroimaging studies have provided some insight into the pathophysiology of these disorders, particularly implicating the prefrontal cortex, but there are no studies on psychogenic dystonia, and comparisons with findings in organic counterparts are rare. To better understand the pathophysiology of these disorders, we compared the similarities and differences in functional neuroimaging of patients with psychogenic dystonia and genetically determined dystonia, and tested hypotheses on the role of the prefrontal cortex in functional neurological disorders. Patients with psychogenic (n = 6) or organic (n = 5, DYT1 gene mutation positive) dystonia of the right leg, and matched healthy control subjects (n = 6), underwent positron emission tomography of regional cerebral blood flow. Participants were studied during rest, during fixed posturing of the right leg and during paced ankle movements. Continuous surface electromyography and footplate manometry monitored task performance. Averaging regional cerebral blood flow across all tasks, the organic dystonia group showed abnormal increases in the primary motor cortex and thalamus compared with controls, with decreases in the cerebellum. In contrast, the psychogenic dystonia group showed the opposite pattern, with abnormally increased blood flow in the cerebellum and basal ganglia and decreases in the primary motor cortex. Comparing the two patient groups directly, organic dystonia was associated with significantly greater regional blood flow in the primary motor cortex, whereas psychogenic dystonia was associated with significantly greater blood flow in the cerebellum and basal ganglia (all P < 0.05, family-wise whole-brain corrected). Group × task interactions were also examined.
During movement, compared with rest, there was abnormal activation in the right dorsolateral prefrontal cortex that was common to both organic and psychogenic dystonia groups (compared with control subjects, P < 0.05, family-wise small-volume correction). These data show a cortical-subcortical differentiation between organic and psychogenic dystonia in terms of regional blood flow, both at rest and during active motor tasks. The pathological prefrontal cortical activation was confirmed in, but was not specific to, psychogenic dystonia. This suggests that psychogenic and organic dystonia have different cortical and subcortical pathophysiology, while a derangement in mechanisms of motor attention may be a feature of both conditions.
Abstract:
Permian to Late Cretaceous allochthonous sedimentary and volcanic rocks exposed in the Batain area (eastern Oman margin) have received comparatively little attention in the past. They were largely considered part of the Hamrat Duru Group (Hawasina Complex) of the northern Oman Mountains. Structural, kinematic and biostratigraphic results from our mapping campaign in the Batain area have now revealed that emplacement of these units occurred in a WNW direction during latest Cretaceous/Early Paleogene time. This clearly contrasts with previous models that postulated a southward-directed obduction in Campanian times, such as recorded from the Hawasina Complex and Semail Ophiolite in the Oman Mountains. We herewith establish the "Batain Group", comprising all Permian to Late Cretaceous allochthonous units in the Batain area. These are: 1) the Permian Qarari Formation, deposited in a toe-of-slope setting; 2) the Late Permian to late Liassic Al Jil Formation, comprising periplatform detritus and very coarse breccias; 3) the Scythian to Norian Matbat Formation, formed by slope deposits; 4) the Early Jurassic to early Oxfordian Guwayza Formation, with high-energy platform detritus; 5) the Mid-Jurassic to earliest Cretaceous Ruwaydah Formation, a seamount; 6) the Oxfordian to Santonian Wahrah Formation, mainly radiolarites; and 7) the Santonian to latest Maastrichtian Fayah Formation, built by flysch-type sediments. These sedimentary and volcanic rocks represent deposits of the former "Batain basin" off eastern Oman, destroyed by compressional tectonics at the Cretaceous/Paleogene transition. For tectono-stratigraphic reasons, the Batain Group does not form part of the Hawasina Complex.
Abstract:
BACKGROUND: Inpatient case fatality from severe malaria remains high in much of sub-Saharan Africa. The majority of these deaths occur within 24 hours of admission, suggesting that pre-hospital management may have an impact on the risk of case fatality. METHODS: Prospective cohort study, including a questionnaire about pre-hospital treatment, of all 437 patients admitted with severe febrile illness (presumed to be severe malaria) to the paediatric ward of Sikasso Regional Hospital, Mali, over a two-month period. FINDINGS: The case fatality rate was 17.4%. Coma, hypoglycaemia and respiratory distress at admission were associated with significantly higher mortality. In multiple logistic regression models and in a survival analysis of pre-admission risk factors for case fatality, the only consistent and significant risk factor was sex. Girls were twice as likely to die as boys (AOR 2.00, 95% CI 1.08-3.70). A wide variety of pre-hospital treatments was used, both modern and traditional. None had a consistent impact on the risk of death across different analyses. Reported use of traditional treatments was not associated with post-admission outcome. INTERPRETATION: Aside from well-recognised markers of severity, the main risk factor for death in this study was female sex, although the reason for this could not be determined. Differences in pre-hospital treatments were not associated with case fatality.
Abstract:
A nationwide survey was conducted in Switzerland to assess the quality of osteoporosis management in patients aged 50 years or older presenting with a fragility fracture to the emergency ward of the participating hospitals. Eight centres recruited 4966 consecutive patients who presented with one or more fractures between 2004 and 2006. Of these, 3667 (2797 women, mean age 73.8 years, and 870 men, mean age 73.0 years) were considered as having a fragility fracture and were included in the survey. Included patients presented with a fracture of the upper limbs (30.7%), lower limbs (26.4%), axial skeleton (19.5%) or another localisation, including malleolar fractures (23.4%). Thirty-two percent reported one or more previous fractures during adulthood. Of the 2941 (80.2%) hospitalised women and men, only half returned home after discharge. During diagnostic workup, dual x-ray absorptiometry (DXA) measurement was performed in only 31.4% of the patients. Of those, 46.0% had a T-score ≤ -2.5 SD and 81.1% a T-score ≤ -1.0 SD. The osteoporosis treatment rate increased from 26.3% before fracture to 46.9% after fracture in women, and from 13.0% to 30.3% in men. However, only 24.0% of the women and 13.8% of the men were finally adequately treated with a bone-active substance, generally an oral bisphosphonate, with or without calcium/vitamin D supplements. A positive history of previous fracture vs none increased the likelihood of receiving treatment with a bone-active substance (36.6% vs 17.9%, Δ 18.7%, 95% CI 15.1 to 22.3, in women, and 22.6% vs 9.9%, Δ 12.7%, 95% CI 7.3 to 18.5, in men). In Switzerland, osteoporosis remains underdiagnosed and undertreated in patients aged 50 years and older presenting with a fragility fracture.
Abstract:
BACKGROUND: An association between alcohol consumption and injury is clearly established for volume of drinking, heavy episodic drinking (HED), and consumption before injury. Little is known, however, about how their interaction raises the risk of injury and which combination of factors carries the highest risk. This study explores which of 11 specified groups of drinkers (a) are at high risk and (b) contribute most to alcohol-attributable injuries. METHODS: In all, 8,736 patients, of whom 5,077 were injured, admitted to the surgical ward of the emergency department of Lausanne University Hospital between January 1, 2003, and June 30, 2004, were screened for alcohol use. Eleven groups were constructed on the basis of usual patterns of intake and preattendance drinking. Odds ratios (ORs) comparing injured and noninjured patients were derived, and alcohol-attributable fractions of injuries were calculated from the ORs and the prevalence of the exposure groups. RESULTS: Risk of injury increased with volume of drinking, HED, and preattendance drinking. For both sexes, the highest risk was associated with low usual intake, HED, and 4 (women) or 5 (men) or more drinks before injury. At the same level of preattendance drinking, high-volume drinkers were at lower risk than low-volume drinkers. In women, the group of low-risk non-HED drinkers taking fewer than 4 drinks suffered 47.5% of the alcohol-attributable injuries, in contrast to only 20.4% for men. Low-volume male drinkers with HED had more alcohol-attributable injuries than low-volume female drinkers with HED (46.9% vs 23.2%). CONCLUSIONS: Although all groups of drinkers are at increased risk of alcohol-related injury, those who usually drink little but on occasion heavily are at particular risk. The lower risk of chronic heavy drinkers may be due to higher tolerance of alcohol. Prevention should thus target heavy-drinking occasions.
Low-volume drinking women without HED and with only little preattendance drinking experienced a high proportion of injuries; such women would be well advised to drink very little or to take other special precautions in risky circumstances.
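The alcohol-attributable fractions in this abstract are calculated from odds ratios and exposure-group prevalences. The standard calculation for this is Levin's population-attributable-fraction formula, sketched below with hypothetical prevalences and ORs that are not the study's groups:

```python
def attributable_fraction(prevalence_or_pairs):
    """Population attributable fraction (Levin's formula) across
    several exposure groups, each given as (prevalence, odds_ratio):
        AAF = sum(p_i * (OR_i - 1)) / (1 + sum(p_i * (OR_i - 1)))
    The OR is used here as an approximation of the relative risk."""
    excess = sum(p * (or_ - 1.0) for p, or_ in prevalence_or_pairs)
    return excess / (1.0 + excess)

# Hypothetical exposure groups: 10% of patients with OR 2.0,
# 5% with OR 4.0 (illustrative values only).
groups = [(0.10, 2.0), (0.05, 4.0)]
print(f"AAF = {attributable_fraction(groups):.1%}")  # AAF = 20.0%
```

Because the fraction scales with both prevalence and excess risk, a large low-risk group can contribute more attributable injuries than a small high-risk group, which is the pattern the abstract describes.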
Abstract:
OBJECTIVE: Most studies on alcohol as a risk factor for injuries have been mechanism specific; few have considered several mechanisms simultaneously or reported alcohol-attributable fractions (AAFs), which was the aim of the current study. METHOD: Data from 3,592 injured and 3,489 noninjured patients collected between January 2003 and June 2004 in the surgical ward of the emergency department of the Lausanne University Hospital (Switzerland) were analyzed. Four injury mechanisms derived from the International Classification of Diseases, 10th Revision, were considered: transportation-related injuries, falls, exposure to forces and other events, and interpersonal violence. Multinomial logistic regression models were calculated to estimate the risk relationships of different levels of alcohol consumption, using noninjured patients as quasi-controls. The AAFs were then calculated. RESULTS: Risk relationships between injury and acute consumption were found across all mechanisms, commonly resulting in dose-response relationships. Marked differences between mechanisms were observed for relative risks and AAFs, which varied between 15.2% and 33.1% and between 10.1% and 35.9%, depending on the time window of consumption (6 hours or 24 hours before injury, respectively). Low and medium levels of alcohol consumption were generally associated with the highest AAFs. CONCLUSIONS: This study underscores the implications of even low levels of alcohol consumption for the risk of sustaining injuries through any of the mechanisms considered. Substantial AAFs are reported for each mechanism, particularly for injuries resulting from interpersonal violence. The observation of a so-called preventive paradox phenomenon is discussed, and prevention and intervention measures are described.