33 results for Royal College of Surgeons (London)
Abstract:
OBJECTIVES To evaluate the impact of preoperative sepsis on risk of postoperative arterial and venous thromboses. DESIGN Prospective cohort study using the National Surgical Quality Improvement Program database of the American College of Surgeons (ACS-NSQIP). SETTING Inpatient and outpatient procedures in 374 hospitals of all types across the United States, 2005-12. PARTICIPANTS 2,305,380 adults who underwent surgical procedures. MAIN OUTCOME MEASURES Arterial thrombosis (myocardial infarction or stroke) and venous thrombosis (deep venous thrombosis or pulmonary embolism) in the 30 days after surgery. RESULTS Among all surgical procedures, patients with preoperative systemic inflammatory response syndrome or any sepsis had three times the odds of having an arterial or venous postoperative thrombosis (odds ratio 3.1, 95% confidence interval 3.0 to 3.1). The adjusted odds ratios were 2.7 (2.5 to 2.8) for arterial thrombosis and 3.3 (3.2 to 3.4) for venous thrombosis. The adjusted odds ratios for thrombosis were 2.5 (2.4 to 2.6) in patients with systemic inflammatory response syndrome, 3.3 (3.1 to 3.4) in patients with sepsis, and 5.7 (5.4 to 6.1) in patients with severe sepsis, compared with patients without any systemic inflammation. In patients with preoperative sepsis, both emergency and elective surgical procedures had a twofold increased odds of thrombosis. CONCLUSIONS Preoperative sepsis represents an important independent risk factor for both arterial and venous thromboses. The risk of thrombosis increases with the severity of the inflammatory response and is higher in both emergent and elective surgical procedures. Suspicion of thrombosis should be higher in patients with sepsis who undergo surgery.
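The unadjusted odds ratio reported above compares the odds of postoperative thrombosis between exposed (preoperative sepsis) and unexposed patients. As a minimal sketch of how such an odds ratio and its Wald 95% confidence interval are computed from a 2x2 table (the counts below are hypothetical, not taken from the NSQIP data, and the function name is my own):

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the root of the summed reciprocal counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_with_ci(a=120, b=880, c=400, d=8600)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The adjusted odds ratios in the abstract come from multivariable logistic regression rather than a single 2x2 table, but the interpretation of the interval is the same.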
Abstract:
BACKGROUND Many orthopaedic surgical procedures can be performed with either regional or general anesthesia. We hypothesized that total hip arthroplasty with regional anesthesia is associated with less postoperative morbidity and mortality than total hip arthroplasty with general anesthesia. METHODS This retrospective propensity-matched cohort study utilizing the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database included patients who had undergone total hip arthroplasty from 2007 through 2011. After matching, logistic regression was used to determine the association between the type of anesthesia and deep surgical site infections, hospital length of stay, thirty-day mortality, and cardiovascular and pulmonary complications. RESULTS Of 12,929 surgical procedures, 5103 (39.5%) were performed with regional anesthesia. The adjusted odds for deep surgical site infections were significantly lower in the regional anesthesia group than in the general anesthesia group (odds ratio [OR] = 0.38; 95% confidence interval [CI] = 0.20 to 0.72; p < 0.01). The hospital length of stay (geometric mean) was decreased by 5% (95% CI = 3% to 7%; p < 0.001) with regional anesthesia, which translates to 0.17 day for each total hip arthroplasty. Regional anesthesia was also associated with a 27% decrease in the odds of prolonged hospitalization (OR = 0.73; 95% CI = 0.68 to 0.89; p < 0.001). The mortality rate was not significantly lower with regional anesthesia (OR = 0.78; 95% CI = 0.43 to 1.42; p > 0.05). The adjusted odds for cardiovascular complications (OR = 0.61; 95% CI = 0.44 to 0.85) and respiratory complications (OR = 0.51; 95% CI = 0.33 to 0.81) were both lower in the regional anesthesia group.
CONCLUSIONS Compared with general anesthesia, regional anesthesia for total hip arthroplasty was associated with a reduction in deep surgical site infection rates, hospital length of stay, and rates of postoperative cardiovascular and pulmonary complications. These findings could have an important medical and economic impact on health-care practice.
Abstract:
BACKGROUND: In Switzerland, assisted suicide is legal if no self-interest is involved. AIMS: To compare the strength and direction of associations with sociodemographic factors between assisted and unassisted suicides. METHOD: We calculated rates and used Cox and logistic regression models in a longitudinal study of the Swiss population. RESULTS: Analyses were based on 5 004 403 people, 1301 assisted and 5708 unassisted suicides from 2003 to 2008. The rate of unassisted suicide was higher in men than in women; rates of assisted suicide were similar in men and women. Higher education was positively associated with assisted suicide but negatively with unassisted suicide. Living alone, having no children and having no religious affiliation were associated with higher rates of both. CONCLUSIONS: Some situations that indicate greater vulnerability, such as living alone, were associated with both assisted and unassisted suicide. Among the terminally ill, women were more likely to choose assisted suicide, whereas men died more often by unassisted suicide.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                             Class I           Class II         Class III              Class IV
Blood loss (ml)              Up to 750         750–1500         1500–2000              >2000
Blood loss (% blood volume)  Up to 15%         15–30%           30–40%                 >40%
Pulse rate (bpm)             <100              100–120          120–140                >140
Systolic blood pressure      Normal            Normal           Decreased              Decreased
Pulse pressure               Normal or ↑       Decreased        Decreased              Decreased
Respiratory rate             14–20             20–30            30–40                  >35
Urine output (ml/h)          >30               20–30            5–15                   Negligible
CNS/mental status            Slightly anxious  Mildly anxious   Anxious, confused      Confused, lethargic
Initial fluid replacement    Crystalloid       Crystalloid      Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS.
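The Table 1 cut-offs can be restated as a toy lookup. This is only an illustrative sketch of the published thresholds (the function name is my own), not a clinical decision tool:

```python
def atls_class_from_blood_loss(percent_volume_lost: float) -> str:
    """Map estimated blood loss (% of blood volume) to the ATLS shock
    class of Table 1 (ATLS Student Course Manual, 9th Edition)."""
    if percent_volume_lost <= 15:
        return "Class I"
    if percent_volume_lost <= 30:
        return "Class II"
    if percent_volume_lost <= 40:
        return "Class III"
    return "Class IV"

# Pulse-rate band (beats/min) tabulated by ATLS for each class; the TARN
# analysis found that real class IV patients often fall below 140.
ATLS_PULSE_BAND = {
    "Class I": "<100",
    "Class II": "100-120",
    "Class III": "120-140",
    "Class IV": ">140",
}

print(atls_class_from_blood_loss(35))  # Class III
```

The failed validations discussed above concern exactly this coupling: the blood-loss bands do not reliably co-occur with the tabulated vital-sign bands in real trauma patients.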
Moreover, deterioration of the different parameters does not necessarily occur in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation from the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

                                   Normal (no haemorrhage)  Class I (mild)                  Class II (moderate)             Class III (severe)               Class IV (moribund)
Vitals                             Normal                   Normal                          HR >100 with SBP >90 mmHg       SBP <90 mmHg                     SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)  NA                       Yes, no further fluid required  Yes, no further fluid required  Requires repeated fluid boluses  Declining SBP despite fluid boluses
Estimated blood loss (ml)          None                     Up to 750                       750–1500                        1500–2000                        >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: decrease of cardiac output, increase of heart rate and decrease of pulse pressure occurring first, with hypotension and bradycardia occurring only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit, and the anesthetized patient in the OR. We will therefore continue to teach this typical pattern but will continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to the neurogenic shock of acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
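The vital-sign column of Table 2 can be written as a short rule, which makes the overlap explicit: vitals alone cannot separate Normal from Class I, nor Class III from Class IV. A hedged sketch (the function name is my own):

```python
def lawton_vitals_category(hr: float, sbp: float) -> str:
    """Vital-sign component of the Lawton shock grades (Table 2).
    HR in beats/min, SBP in mmHg. The splits Normal vs Class I and
    Class III vs Class IV depend additionally on estimated blood loss
    and on the response to 1000 ml fluid boluses, not on vitals."""
    if sbp < 90:
        return "Class III or IV"
    if hr > 100:
        return "Class II"
    return "Normal or Class I"

print(lawton_vitals_category(hr=110, sbp=120))  # Class II
```

That the rule returns merged categories at both ends mirrors the study's finding that grading in the primary survey rested on estimated blood loss and fluid response rather than on vital signs.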
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage: advanced age, athletes, pregnancy, medications and pacemakers, and it explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs, but includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: None declared. Member of the Swiss national ATLS core faculty.
Abstract:
BACKGROUND: The aetiology of visual hallucinations is poorly understood in dementia with Lewy bodies. Pathological alterations in visual cortical excitability may be one contributory mechanism. AIMS: To determine visual cortical excitability in people with dementia with Lewy bodies compared with age-matched controls and also the relationship between visual cortical excitability and visual hallucinations in dementia with Lewy bodies. METHOD: Visual cortical excitability was determined by using transcranial magnetic stimulation (TMS) applied to the occiput to elicit phosphenes (transient subjective visual responses) in 21 patients with dementia with Lewy bodies and 19 age-matched controls. RESULTS: Phosphene parameters were similar between both groups. However, in the patients with dementia with Lewy bodies, TMS measures of visual cortical excitability correlated strongly with the severity of visual hallucinations (P = 0.005). Six patients with dementia with Lewy bodies experienced visual hallucination-like phosphenes (for example, seeing people or figures on stimulation) compared with none of the controls (P = 0.02). CONCLUSIONS: Increased visual cortical excitability in dementia with Lewy bodies does not appear to explain visual hallucinations but it may be a marker for their severity.
Abstract:
We examined survival associated with locally advanced esophageal squamous cell cancer (SCC) to evaluate if treatment without surgery could be considered adequate.
Abstract:
The genesis of Tourette syndrome is still unknown, but a core role for the pathways of cortico-striatal-thalamic-cortical circuitry (CSTC) is supposed. Volume-rendering magnetic resonance imaging data-sets were analysed in 14 boys with Tourette syndrome and 15 age-matched controls using optimised voxel-based morphometry. Locally increased grey-matter volumes (corrected P < 0.001) were found bilaterally in the ventral putamen. Regional decreases in grey matter were observed in the left hippocampal gyrus. This unbiased analysis confirmed an association between striatal abnormalities and Tourette syndrome, and the hippocampal volume alterations indicate an involvement of temporolimbic pathways of the CSTC in the syndrome.
Abstract:
BACKGROUND: Hallucinations are perceptions in the absence of a corresponding external sensory stimulus. However, during auditory verbal hallucinations, activation of the primary auditory cortex has been described. AIMS: The objective of this study was to investigate whether this activation of the auditory cortex contributes essentially to the character of hallucinations and attributes them to alien sources, or whether the auditory activation is a sign of increased general auditory attention to external sounds. METHOD: The responsiveness of the auditory cortex was investigated by auditory evoked potentials (N100) during the simultaneous occurrence of hallucinations and external stimuli. Evoked potentials were computed separately for periods with and without hallucinations; N100 power, topography and brain electrical sources were analysed. RESULTS: Hallucinations lowered the N100 amplitudes and changed the topography, presumably due to a reduced left temporal responsivity. CONCLUSIONS: This finding indicates competition between auditory stimuli and hallucinations for physiological resources in the primary auditory cortex. The abnormal activation of the primary auditory cortex may thus be a constituent of auditory hallucinations.
Abstract:
BACKGROUND: In the UK, population screening for unmet need has failed to improve the health of older people. Attention is turning to interventions targeted at 'at-risk' groups. Living alone in later life is seen as a potential health risk, and older people living alone are thought to be an at-risk group worthy of further intervention. AIM: To explore the clinical significance of living alone and the epidemiology of lone status as an at-risk category, by investigating associations between lone status and health behaviours, health status, and service use, in non-disabled older people. DESIGN OF STUDY: Secondary analysis of baseline data from a randomised controlled trial of health risk appraisal in older people. SETTING: Four group practices in suburban London. METHOD: Sixty per cent of 2641 community-dwelling non-disabled people aged 65 years and over registered at a practice agreed to participate in the study; 84% of these returned completed questionnaires. A third of this group (n = 860, 33.1%) lived alone and two-thirds (n = 1741, 66.9%) lived with someone else. RESULTS: Those living alone were more likely to report fair or poor health, poor vision, difficulties in instrumental and basic activities of daily living, worse memory and mood, lower physical activity, poorer diet, worsening function, risk of social isolation, hazardous alcohol use, having no emergency carer, and multiple falls in the previous 12 months. After adjustment for age, sex, income, and educational attainment, living alone remained associated with multiple falls, functional impairment, poor diet, smoking status, risk of social isolation, and three self-reported chronic conditions: arthritis and/or rheumatism, glaucoma, and cataracts.
CONCLUSION: Clinicians working with independently-living older people living alone should anticipate higher levels of disease and disability in these patients, and higher health and social risks, much of which will be due to older age, lower educational status, and female sex. Living alone itself appears to be associated with higher risks of falling, and constellations of pathologies, including visual loss and joint disorders. Targeted population screening using lone status may be useful in identifying older individuals at high risk of falling.
Abstract:
Therapeutic alliance between clinicians and their patients is important in community mental healthcare. It is unclear whether providing effective interventions influences therapeutic alliance.