916 results for Dactylitis severity score
Abstract:
OBJECTIVES In 2010, the American College of Rheumatology (ACR) proposed new criteria for the diagnosis of fibromyalgia (FM) in the context of objections to components of the criteria of 1990. The new criteria consider the Widespread Pain Index (WPI) and the Symptom Severity Score (SSS). This study evaluated the implications of the new diagnostic criteria for FM across other functional pain syndromes. METHOD A cohort of 300 consecutive in-patients with functional pain syndromes underwent a diagnostic screen according to the ACR 2010 criteria. Additionally, systematic pain assessment including algometric and psychometric data was carried out. RESULTS Twenty-five patients (8.3%) had been diagnosed with FM according to the ACR 1990 criteria. Twenty-one of them (84%) also met the new ACR 2010 criteria. In total, 130 patients (43%) fulfilled the new ACR 2010 criteria. A comparison of new vs. old cases showed a high degree of conformity in most of the pain characteristics. The new FM cases, however, revealed a pronounced heterogeneity in the anatomical pain locations, including several types of localized pain syndromes. Furthermore, patients fulfilling the ACR 2010 FM criteria differed from those with other functional pain syndromes; they had increased pain sensitivity scores and increased psychometric values for depression, anxiety, and psychological distress (p<0.01). CONCLUSIONS FM according to the ACR 2010 criteria describes the 'severe half' of the spectrum of functional pain syndromes. By dropping the requirement of 'generalized pain', these criteria result in a blurring of the distinction between FM and more localized functional pain syndromes.
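For context, the ACR 2010 screen described above reduces to two numbers per patient. The sketch below is a minimal illustration of that threshold check; the cut-offs it uses (WPI >= 7 with SSS >= 5, or WPI 3-6 with SSS >= 9) come from the commonly cited published criteria rather than from this abstract, and the symptom-duration and exclusion requirements are deliberately omitted.

```python
# Minimal sketch of the ACR 2010 WPI/SSS threshold check. Thresholds follow the
# commonly cited published criteria, not this abstract; duration and exclusion
# criteria are intentionally left out.

def meets_acr_2010_thresholds(wpi: int, sss: int) -> bool:
    """True if the WPI/SSS combination satisfies the 2010 pain/symptom thresholds."""
    return (wpi >= 7 and sss >= 5) or (3 <= wpi <= 6 and sss >= 9)

print(meets_acr_2010_thresholds(wpi=8, sss=6))  # True
print(meets_acr_2010_thresholds(wpi=4, sss=7))  # False
```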
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).
                               Class I            Class II          Class III              Class IV
Blood loss (ml)                Up to 750          750–1500          1500–2000              >2000
Blood loss (% blood volume)    Up to 15%          15–30%            30–40%                 >40%
Pulse rate (bpm)               <100               100–120           120–140                >140
Systolic blood pressure        Normal             Normal            Decreased              Decreased
Pulse pressure                 Normal or ↑        Decreased         Decreased              Decreased
Respiratory rate               14–20              20–30             30–40                  >35
Urine output (ml/h)            >30                20–30             5–15                   Negligible
CNS/mental status              Slightly anxious   Mildly anxious    Anxious, confused      Confused, lethargic
Initial fluid replacement      Crystalloid        Crystalloid       Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated from the injuries in nearly 165,000 adult trauma patients and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily proceed in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, requirement of any blood transfusion, or >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement for fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation of the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].
                                    Normal (no haemorrhage)   Class I (mild)                   Class II (moderate)              Class III (severe)                Class IV (moribund)
Vitals                              Normal                    Normal                           HR >100 with SBP >90 mmHg        SBP <90 mmHg                      SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)   NA                        Yes, no further fluid required   Yes, no further fluid required   Requires repeated fluid boluses   Declining SBP despite fluid boluses
Estimated blood loss (ml)           None                      Up to 750                        750–1500                         1500–2000                         >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiological concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first; hypotension and bradycardia occur only later. An increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls in a second step. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage: advanced age, athletes, pregnancy, medications and pacemakers, and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which has been used for decades and is the basis for a number of questions in the written test of the ATLS student course, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training programme, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiological response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially over the course of treatment, but they must be interpreted in the clinical context. Conflict of interest: None declared. The author is a member of the Swiss national ATLS core faculty.
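As an illustration only, the sketch below encodes the blood-loss bands of Table 1 as a simple lookup in Python. The class names and thresholds are taken from the table above, while the function name and the idea of classifying on the blood-loss percentage alone are assumptions made for the example; as the editorial stresses, the real classification combines several parameters and does not apply literally to individual patients.

```python
# Illustrative reading of the ATLS Table 1 blood-loss bands (assumption: we key
# only on the estimated blood loss as a percentage of blood volume, which is just
# one row of the table, not a clinical decision rule).

def atls_class_from_blood_loss(percent_blood_volume: float) -> str:
    if percent_blood_volume <= 15:
        return "Class I"      # up to 15% of blood volume
    if percent_blood_volume <= 30:
        return "Class II"     # 15-30%
    if percent_blood_volume <= 40:
        return "Class III"    # 30-40%
    return "Class IV"         # >40%

for loss in (10, 20, 35, 45):
    print(f"{loss}% blood volume lost -> {atls_class_from_blood_loss(loss)}")
```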
Abstract:
QUESTIONS UNDER STUDY Many people travel all over the world; the elderly with pre-existing diseases also travel to places with less developed health systems. Reportedly, fewer than 0.5% of all travellers need repatriation. We aimed to examine people who were injured or ill while abroad, where they had travelled to, and by what means they were repatriated. METHODS Retrospective cross-sectional study of adult patients repatriated to a single level 1 trauma centre in Switzerland (2000-2011). RESULTS A total of 372 patients were repatriated, with an increasing trend per year. Of these, 67% were male; the median age was 56 years. Forty-nine percent sustained an injury, 13% had surgical and 38% medical pathologies. Patients with medical conditions were older than those with injuries or surgical emergencies (p <0.001). Seventy-three percent were repatriated from Europe. For repatriation from Africa, trauma was slightly more frequent (53%, n = 17) than illness, whereas for most other countries illnesses and trauma were equally distributed. Injured patients had a median Injury Severity Score of 8. The majority of illnesses involved the nervous system (38%), mainly stroke. Forty-five percent were repatriated by Swiss Air Ambulance, 26% by ground ambulance, 18% by scheduled flights with or without medical assistance, and two patients injured near the Swiss border were transported by helicopter. The 28-day mortality was 4%. CONCLUSIONS The number of travellers repatriated increased from 2000 to 2011. About half of the repatriations were due to illness and half due to injury. The largest group were elderly Swiss nationals repatriated from European countries. As mortality is relatively high, special consideration of this group of patients is warranted.
Abstract:
The purpose of this study was to determine, for penetrating injuries (gunshot, stab) of the chest/abdomen, the impact on fatality of treatment in trauma centers and shock trauma units compared with general hospitals. Medical records of all cases of penetrating injury limited to chest/abdomen and admitted to and discharged from 7 study facilities in Baltimore City in 1979-1980 (n = 581) were studied: 4 general hospitals (n = 241), 2 area-wide trauma centers (n = 298), and a shock trauma unit (n = 42). Emergency center and transferred cases were not studied. Anatomical injury severity, measured by the modified Injury Severity Score (mISS), was a significant prognostic factor for death, as were cardiovascular shock (SBP ≤ 70), injury type (gunshot vs stab), and ambulance/helicopter (vs other) transport. All deaths occurred in cases with two or more prognostic factors. Unadjusted relative risks of death compared with general hospitals were 4.3 (95% confidence interval = 2.2, 8.4) for shock trauma and 0.8 (0.4, 1.7) for trauma centers. Controlling for prognostic factors by logistic regression resulted in these relative risks: shock trauma 4.0 (0.7, 22.2), and trauma centers 0.8 (0.2, 3.2). Factors significantly associated with increased risk had the following relative risks by multiple logistic regression: SBP ≤ 70 (RR = 40.7 (11.0, 148.7)), highest mISS (42 (7.7, 227)), gunshot (8.4 (2.1, 32.6)), and ambulance/helicopter transport (17.2 (1.3, 228.1)). Controlling for age, race, and gender did not alter results significantly. Actual deaths compared with deaths predicted from a multivariable model of general-hospital cases showed 3.7 more deaths than predicted in shock trauma (SMR = 1.6 (0.8, 2.9)) and 0.7 more deaths than predicted in area-wide trauma centers (SMR = 1.05 (0.6, 1.7)). Selection bias due to exclusion of transfers and emergency center cases, and residual confounding due to insufficient injury information, may account for the persistence of adjusted high case fatality in shock trauma. Studying all cases prospectively, including emergency center and transferred cases, is needed.
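For orientation, the standardized mortality ratio (SMR) quoted above is simply observed deaths divided by the deaths expected for the same cases under the reference (general-hospital) model. A minimal sketch of that arithmetic follows; the predicted probabilities and observed count are invented for illustration, not study data.

```python
# SMR = observed deaths / expected deaths, where expected deaths are the sum of
# the per-case death probabilities predicted by a model fitted to the reference
# (general-hospital) cases. The numbers below are invented for illustration.
import numpy as np

predicted_death_prob = np.array([0.05, 0.40, 0.70, 0.10, 0.90])  # hypothetical per-case predictions
observed_deaths = 3                                              # hypothetical observed count

expected_deaths = predicted_death_prob.sum()
smr = observed_deaths / expected_deaths
print(f"expected deaths = {expected_deaths:.2f}, SMR = {smr:.2f}")
```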
Abstract:
This research examined the relationship between concomitant non-CDI antibiotic use and complications arising from Clostridium difficile infection (CDI). To observe the hypothesized association, 160 CDI patients between the ages of 50 and 90 were selected, 80 exposed to concomitant antibiotics and 80 unexposed. Samples were matched on age and Horn's index, a severity score for underlying illness. Patients were de-identified by a third party and analyzed retrospectively for differences between the two groups. In addition, patients exposed to broad-spectrum antibiotics at the time of CDI treatment were further studied to determine whether antibiotics had any effect on CDI complications. Between the two groups, the outcomes of interest (recurrent CDI, refractory CDI, mortality, ICU stay, and length of hospitalization) were not associated with concomitant antibiotic use at the time of CDI therapy. However, within the exposed population, certain classes of antibiotics, such as cephalosporins, antifungals, and tetracyclines, were more common in patients compared with other types of therapy. In addition, days of therapy provided evidence that sustained use of antibiotics affected CDI (p = 0.08), although a more robust sample size and additional study would be needed. Finally, refractory CDI was found to be potentially overestimated within the exposed population due to the possibility of antibiotic-associated diarrhea.
Abstract:
Objectives: To evaluate the functional capacity of trauma patients one year after hospital discharge and to assess the association of functional capacity with factors related to the trauma and the hospital stay. Methods: Prospective cohort study of patients with severe trauma (Injury Severity Score - ISS >= 16) admitted between June and September 2010 to a surgical intensive care unit (ICU) specializing in polytrauma patients at a large public hospital in the city of São Paulo, Brazil. Variables of interest, such as age, sex, Glasgow Coma Scale score, Acute Physiology and Chronic Health Disease Classification System II (APACHE II), mechanism of trauma, number of injuries, body region affected, number of surgeries, duration of mechanical ventilation (MV) and length of hospital stay, were collected from the medical records. Functional capacity was assessed one year after hospital discharge using the Glasgow Outcome Scale (GOS) and the Lawton Instrumental Activities of Daily Living (IADL) scale. Patients were also asked whether they had returned to work or study. Results: One-year follow-up after trauma was completed for 49 individuals, most of them young (36±11 years), male (81.6%) and victims of traffic accidents (71.5%). Each individual sustained approximately 4 bodily injuries, resulting in a mean ISS of 31 ± 14.4. Traumatic brain injury was the most common type of injury (65.3%). According to the GOS, most patients had moderate disability (43%) or mild or no disability (37%) one year after the trauma. The Lawton IADL scale had a mean score of 12±4, with approximately 60-70% of individuals able to perform most of the assessed activities independently. Glasgow Coma Scale score, APACHE II, duration of MV and length of hospital stay were associated with functional capacity one year after injury. Multiple linear regression considering all significant variables revealed an association between the Lawton IADL score and length of hospital stay. Only 32.6% of individuals returned to work or study. Conclusions: Most patients with severe trauma were able to perform the assessed activities independently; only a third of them had returned to work and/or study one year after hospital discharge. Length of hospital stay was shown to be a significant predictor of the recovery of functional capacity one year after severe injury.
Abstract:
Reducing mortality is a fundamental goal of pediatric intensive care units (PICUs). The severity of illness reflects the magnitude of comorbidities and physiological derangements at the time of admission and can be assessed with prognostic mortality scores. The two main scores used in the PICU are the Pediatric Risk of Mortality (PRISM) and the Pediatric Index of Mortality (PIM). PRISM uses the worst values of physiological and laboratory variables during the first 24 hours of admission, whereas PIM2 uses data from the first hour of PICU admission and only one arterial blood gas. There is no consensus in the literature, between PRISM and PIM2, regarding their usefulness and standardization at intensive care admission for children and adolescents, especially in a tertiary-level ICU. The aim of this study was to establish which score performs better in assessing the prognosis of mortality while being easily applicable in PICU routine, so that it can be used in a standardized and continuous way. A retrospective study was carried out in which the PRISM and PIM2 scores of 359 patients admitted to the pediatric intensive care unit of the Instituto da Criança do Hospital das Clínicas da Faculdade de Medicina da USP, considered a tertiary-level unit, were reviewed. Mortality was 15%, the main type of admission was medical (78%), and the main cause of admission was respiratory dysfunction (37.3%). The scores of patients who died were higher than those of survivors: 15 versus 7 for PRISM (p = 0.0001) and 11 versus 5 for PIM2 (p = 0.0002), respectively. For the overall sample, the Standardized Mortality Ratio (SMR) showed that both PIM2 and PRISM underestimated mortality [1.15 (0.84 - 1.46) and 1.67 (1.23 - 2.11), respectively]. The Hosmer-Lemeshow test showed adequate calibration for both scores [χ2 = 12.96 (p = 0.11) for PRISM and χ2 = 13.7 (p = 0.09) for PIM2]. Discrimination, assessed by the area under the ROC curve, was better for PRISM than for PIM2 [0.76 (95% CI 0.69 - 0.83) and 0.65 (95% CI 0.57 - 0.72), respectively; p = 0.002]. In the present study, the best sensitivity and specificity for the risk of death with PRISM was a score between 13 and 14, showing that, with technological advances, a patient needs a higher score, that is, greater clinical severity than in the original population, to be at higher risk of mortality. The results of severity scores can be modified by the health system (public or private), the PICU infrastructure (number of beds, human resources, technological resources) and the indication for admission. The choice of a severity score depends on the individual characteristics of the PICU, such as waiting time in the emergency department, the presence of complex chronic disease (for example, oncology patients) and how transport to the PICU is carried out. Ideally, multicentre studies have greater statistical significance. However, studies with larger and more homogeneous populations, especially in developing countries, are difficult to carry out.
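As a rough illustration of the two properties compared above, discrimination (area under the ROC curve) and overall calibration (SMR, observed/expected deaths) for a severity score can be computed as in the sketch below. The outcome and risk vectors are tiny invented examples, not PRISM/PIM2 study data, and the Hosmer-Lemeshow grouping step is omitted.

```python
# Discrimination (ROC AUC) and overall calibration (SMR) for a mortality score.
# The vectors are made-up examples, not PRISM/PIM2 study data.
import numpy as np
from sklearn.metrics import roc_auc_score

died           = np.array([0, 0, 1, 0, 1, 1, 0, 0])                          # observed outcome (1 = died)
predicted_risk = np.array([0.05, 0.10, 0.60, 0.20, 0.45, 0.80, 0.15, 0.30])  # score-derived death risk

auc = roc_auc_score(died, predicted_risk)   # discrimination: how well deaths are ranked above survivors
smr = died.sum() / predicted_risk.sum()     # observed deaths / expected deaths
print(f"AUC = {auc:.2f}, SMR = {smr:.2f}")
```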
Abstract:
THE AIM OF THE STUDY There are limited data on blood pressure targets and vasopressor use following cardiac arrest. We hypothesized that hypotension and a high vasopressor load are associated with poor neurological outcome following out-of-hospital cardiac arrest (OHCA). METHODS We included 412 patients with OHCA from the FINNRESUSCI study, conducted between 2010 and 2011. Hemodynamic data and vasopressor doses were collected electronically at one-, two- or five-minute intervals. We evaluated thresholds for time-weighted (TW) mean arterial pressure (MAP) and outcome by receiver operating characteristic (ROC) curve analysis, and used multivariable analysis adjusting for co-morbidities, factors at resuscitation, an illness severity score, TW MAP and total vasopressor load (VL) to test associations with one-year neurologic outcome, dichotomized into either good (1-2) or poor (3-5) according to the cerebral performance category scale. RESULTS Of the 412 patients, 169 had good and 243 had poor one-year outcomes. The lowest MAP during the first six hours was 58 (inter-quartile range [IQR] 56-61) mmHg in those with a poor outcome and 61 (59-63) mmHg in those with a good outcome (p<0.01), and lowest MAP was independently associated with poor outcome (OR 1.02 per mmHg, 95% CI 1.00-1.04, p=0.03). During the first 48 h, the median (IQR) TW MAP was 80 (78-82) mmHg in patients with poor and 82 (81-83) mmHg in those with good outcomes (p=0.03), but in multivariable analysis TW MAP was not associated with outcome. Vasopressor load did not predict one-year neurologic outcome. CONCLUSIONS Hypotension occurring during the first six hours after cardiac arrest is an independent predictor of poor one-year neurologic outcome. High vasopressor load was not associated with poor outcome, and further randomized trials are needed to define optimal MAP targets in OHCA patients.
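The abstract does not specify how the time-weighted MAP was computed. A common approach, sketched below under that assumption, integrates the irregularly sampled MAP over time (trapezoidal rule) and divides by the total observation time; the sample values are invented.

```python
# Time-weighted mean of irregularly sampled MAP values: area under the MAP-time
# curve (trapezoidal rule) divided by the observation period. Sample values are
# invented; the study's exact weighting scheme is not described in the abstract.
import numpy as np

time_min = np.array([0.0, 1.0, 3.0, 8.0, 13.0])      # sampling times (minutes)
map_mmhg = np.array([78.0, 75.0, 72.0, 80.0, 83.0])  # MAP at each sample (mmHg)

dt = np.diff(time_min)                                # interval lengths
segment_means = (map_mmhg[:-1] + map_mmhg[1:]) / 2.0  # mean MAP over each interval
tw_map = np.sum(segment_means * dt) / (time_min[-1] - time_min[0])
print(f"time-weighted MAP = {tw_map:.1f} mmHg")
```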
Abstract:
Background: Treatment of bulky retroperitoneal malignancy may require en bloc resection of the infrarenal inferior vena cava. A number of reconstructive options are available to the surgeon but objective haemodynamic assessment of the peripheral venous system following resection without replacement is lacking. The aim of the present paper was thus to determine the symptomatic and haemodynamic effects of not reconstructing the resected infrarenal inferior vena cava. Methods: A retrospective descriptive study was carried out at Princess Alexandra Hospital in Queensland. Five patients underwent resection of the thrombosed infrarenal inferior vena cava as part of retroperitoneal lymph node dissection for testicular cancer (n = 3), radical nephrectomy for renal cell carcinoma (n = 1) and thrombosed inferior vena cava aneurysm (n = 1). Clinical effects were determined via the modified venous clinical severity score and venous disability score. Haemodynamic data were obtained postoperatively using venous duplex ultrasound and air plethysmography. Results: None of the present patients scored >2 (out of 30) on the modified venous clinical severity score or >1 (out of 3) on the venous disability score. Haemodynamic studies showed only minor abnormalities. Conclusions: Not reconstructing the resected thrombosed infrarenal inferior vena cava results in minor signs and symptoms of peripheral venous hypertension and only minor abnormalities on haemodynamic assessment.
Abstract:
Circulating antiangiogenic factors and proinflammatory cytokines are implicated in the pathogenesis of preeclampsia. This study was performed to test the hypothesis that steroids modify the balance of inflammatory and proangiogenic and antiangiogenic factors that potentially contribute to the patient’s evolving clinical state. Seventy singleton women, admitted for antenatal corticosteroid treatment, were enrolled prospectively. The study group consisted of 45 hypertensive women: chronic hypertension (n=6), severe preeclampsia (n=32), and superimposed preeclampsia (n=7). Normotensive women with shortened cervix (<2.5 cm) served as controls (n=25). Maternal blood samples of preeclampsia cases were obtained before steroids and then serially up until delivery. A clinical severity score was designed to clinically monitor disease progression. Serum levels of angiogenic factors (soluble fms-like tyrosine kinase-1 [sFlt-1], placental growth factor [PlGF], soluble endoglin [sEng]), endothelin-1 (ET-1), and proinflammatory markers (IL-6, C-reactive protein [CRP]) were assessed before and after steroids. Soluble IL-2 receptor (sIL-2R) and total immunoglobulins (IgG) were measured as markers of T- and B-cell activation, respectively. Steroid treatment coincided with a transient improvement in clinical manifestations of preeclampsia. A significant decrease in IL-6 and CRP was observed although levels of sIL-2R and IgG remained unchanged. Antenatal corticosteroids did not influence the levels of angiogenic factors but ET-1 levels registered a short-lived increase poststeroids. Although a reduction in specific inflammatory mediators in response to antenatal steroids may account for the transient improvement in clinical signs of preeclampsia, inflammation is unlikely to be the major contributor to severe preeclampsia or useful for therapeutic targeting.
Abstract:
The effect of unethical behaviors in health care settings is an important issue in the safe care of clients and has been a concern of the nursing profession for some time. The purpose of this study was to examine the relationship between the use of unethical behaviors during the nursing student experience and the use of unethical behaviors in the workplace as a registered nurse. In addition, the relationship between the severity of unethical behaviors used in the classroom and clinical setting and those used in the workplace was examined. To ensure greater honesty in self-report, only a limited number of demographic variables were requested from participants. During the summer of 1997, a 56-item questionnaire was distributed to registered nurses enrolled in either undergraduate or graduate courses in a public or private institution. The participants were asked to self-report their own use of unethical behaviors as well as their peers' use of unethical behaviors. In order to assign a severity score for each item, nursing school faculty were asked to rate the severity of unethical behaviors that could be used during the nursing student experience, and nursing administrators were asked to rate unethical behaviors that could be used in the workplace. A significant positive relationship was found between individuals' use of unethical behaviors during nursing school and those used in the workplace (r = .630). A significant positive relationship was found between the severity of unethical behaviors used in the nursing student experience and the severity of unethical behaviors used in the workplace (r = .637). No relationship was found between years of practice, type of initial nursing education, or whether the participant was raised inside or outside the United States and the use of unethical behaviors.
Abstract:
Preterm birth is a public health problem worldwide, with a growing global incidence, high mortality rates and a risk of long-term sequelae in the newborn. It also poses a burden on the family and society. Mothers of very low birth weight (VLBW) preterm infants may develop psychological disorders and impaired quality of life (QoL). Factors related to mothers and children in the postpartum period may be negatively associated with the QoL of these mothers. The aim of this study was to assess factors possibly associated with the QoL of mothers of VLBW preterm newborns during the first three years after birth. Mothers of VLBW preterm infants answered the World Health Organization Quality of Life (WHOQOL)-bref and the Beck Depression Inventory (BDI) at five time points up to 36 months postpartum, totalling 260 observations. The WHOQOL-bref scores were compared and correlated with sociodemographic and clinical variables of mothers and children at discharge (T0) and at six (T1), twelve (T2), 24 (T3) and 36 (T4) months after delivery. We used the Kruskal-Wallis test to compare scores across the different time points and correlated WHOQOL-bref scores with the sociodemographic and clinical variables of mothers and preterm infants. Multiple linear regression models were used to evaluate the contribution of these variables to the QoL of mothers. The WHOQOL-bref scores at T1 and T2 were higher than those at T0 in the physical health dimension (p = 0.013). BDI scores were also higher at T1 and T2 than at T0 (p = 0.027). Among the maternal variables that contributed most to the QoL of mothers were: at T0, stable marital union (b = 13.60; p = 0.000) in the social relationships dimension, gestational age (b = 2.38; p = 0.010) in the physical health dimension, and post-hemorrhagic hydrocephalus (b = -10.05; p = 0.010; b = -12.18; p = 0.013, respectively) in the psychological dimension; at T1 and T2, bronchopulmonary dysplasia (b = -7.41; p = 0.005) and female sex (b = 8.094; p = 0.011) in the physical health and environment dimensions, respectively; at T3, family income (b = -12.75; p = 0.001) in the environment dimension and the SNAPPE neonatal severity score (b = -0.23; p = 0.027) in the social relationships dimension; at T4, evangelical religion (b = 8.11; p = 0.019) and post-hemorrhagic hydrocephalus (b = -18.84; p = 0.001) in the social relationships dimension. The BDI scores were negatively associated with WHOQOL scores in all dimensions and at all time points (-1.42 ≤ b ≤ -0.36; T0, T1, T2, T3 and T4). We conclude that mothers of VLBW preterm infants tend to have a transient improvement in physical well-being during the first postpartum year. Their quality of life seems to return to discharge levels between two and three years after delivery. The presence of maternal depressive symptoms and a diagnosis of post-hemorrhagic hydrocephalus or BPD are factors negatively associated with the QoL of mothers. Social, religious and economic variables are positively associated with the QoL of mothers of VLBW preterm infants.
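For readers unfamiliar with the modelling step, the sketch below shows the general shape of such a multiple linear regression (a WHOQOL-bref domain score regressed on maternal variables) using statsmodels; the variable names and simulated values are hypothetical stand-ins, not the study data.

```python
# Shape of a multiple linear regression of a WHOQOL-bref domain score on maternal
# variables. Data are simulated; the b coefficients are read from fit.params.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
bdi = rng.normal(10, 5, n)      # hypothetical depressive-symptom scores
income = rng.normal(3, 1, n)    # hypothetical family income (arbitrary units)
whoqol = 70 - 1.0 * bdi + 3.0 * income + rng.normal(0, 5, n)  # simulated domain score

X = sm.add_constant(np.column_stack([bdi, income]))
fit = sm.OLS(whoqol, X).fit()
print(fit.params)    # intercept and the b coefficients for BDI and income
print(fit.pvalues)   # corresponding p-values
```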
Abstract:
The VRAG-R is designed to assess the likelihood of violent or sexual reoffending among male offenders. The data set comprises demographic, criminal history, psychological assessment, and psychiatric information about the offenders gathered from institutional files together with post-release recidivism information. The VRAG-R is a twelve-item actuarial instrument and the scores on these items form part of the data set. Because one of the goals of the VRAG-R development project was to compare the VRAG-R to the VRAG, subjects' VRAG scores are included in this data set. Access to the VRAG-R dataset is restricted. Contact Data Services, Queen's University Library (academic.services@queensu.ca) for access.