30 results for 1177
at Duke University
Abstract:
Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
Abstract:
This study explored the factors associated with state-level allocations to tobacco-control programs. The primary research question was whether public sentiment regarding tobacco control was a significant factor in the states' 2001 budget decisions. In addition to public opinion, several additional political and economic measures were considered. Significant associations were found between our outcome, state-level tobacco-control funding per capita, and key variables of interest including public opinion, amount of tobacco settlement received, the party affiliation of the governor, the state's smoking rate, excise tax revenue received, and whether the state was a major producer of tobacco. The findings from this study supported our hypothesis that states with citizens who favor more restrictive indoor air policies allocate more to tobacco control. Effective public education to change public opinion and the cultural norms surrounding smoking may affect political decisions and, in turn, increase funding for crucial public health programs.
Abstract:
Gender-based violence increases a woman's risk for HIV, but little is known about her decision to get tested. We interviewed 97 women seeking abuse-related services from a nongovernmental organization (NGO) in Johannesburg, South Africa. Forty-six women (47%) had been tested for HIV. Caring for children (odds ratio [OR] = 0.27, 95% confidence interval [CI] = [0.07, 1.00]) and conversing with a partner about HIV (OR = 0.13, 95% CI = [0.02, 0.85]) decreased the odds of testing. Stronger risk-reduction intentions (OR = 1.27, 95% CI = [1.01, 1.60]) and seeking help from police (OR = 5.51, 95% CI = [1.18, 25.76]) increased the odds of testing. Providing safe access to integrated services and testing may increase testing in this population. Infection with HIV is highly prevalent in South Africa, where an estimated 16.2% of adults between the ages of 15 and 49 have the virus. The necessary first step to stemming the spread of HIV and receiving life-saving treatment is learning one's HIV serostatus through testing. Many factors may contribute to someone's risk of HIV infection, and many barriers may prevent testing. One factor that does both is gender-based violence.
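The testing correlates above are reported as odds ratios with 95% confidence intervals. For orientation, here is a minimal sketch of how an odds ratio and its Wald-type confidence interval are computed from a 2×2 table; the counts below are hypothetical and are not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & tested,   b = exposed & not tested,
    c = unexposed & tested, d = unexposed & not tested."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) under the Wald approximation
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

An OR whose interval excludes 1 is conventionally read as statistically significant at the .05 level, which is how intervals such as [1.18, 25.76] in the abstract are interpreted.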
Abstract:
In 1995, Crawford and Ostrom proposed a grammatical syntax for examining institutional statements (i.e., rules, norms, and strategies) as part of the institutional analysis and development framework. This article constitutes the first attempt at applying the grammatical syntax to code institutional statements using two pieces of U.S. legislation. The authors illustrate how the grammatical syntax can serve as a basis for collecting, presenting, and analyzing data in a way that is reliable and conveys valid and substantive meaning for the researcher. The article concludes by describing some implementation challenges and ideas for future theoretical and field research. © 2010 University of Utah.
Abstract:
Most studies that apply qualitative comparative analysis (QCA) rely on macro-level data, but an increasing number of studies focus on units of analysis at the micro or meso level (i.e., households, firms, protected areas, communities, or local governments). For such studies, qualitative interview data are often the primary source of information. Yet, so far no procedure is available describing how to calibrate qualitative data as fuzzy sets. The authors propose a technique to do so and illustrate it using examples from a study of Guatemalan local governments. By spelling out the details of this important analytic step, the authors aim at contributing to the growing literature on best practice in QCA. © The Author(s) 2012.
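The abstract does not spell out the authors' procedure for calibrating interview data, so no attempt is made to reproduce it here. For orientation only, the sketch below shows the standard direct method of calibration for a quantitative base variable, which fuzzy-set QCA work builds on: three substantive anchors are mapped to log odds of -3, 0, and +3 and passed through the logistic function. The anchor values and scores are hypothetical.

```python
import math

def calibrate(x, full_out, crossover, full_in):
    """Map a raw score x onto fuzzy-set membership in [0, 1]
    using three substantive anchors (direct calibration):
      full_out  -> full non-membership (membership ~0.05)
      crossover -> maximum ambiguity   (membership  0.50)
      full_in   -> full membership     (membership ~0.95)
    Deviations from the crossover are rescaled so the anchors
    sit at log odds of -3, 0, and +3, then mapped through the
    logistic function."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

For qualitative data, the analytic work lies in justifying the anchors from case knowledge; the logistic mapping itself is mechanical.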
Abstract:
BACKGROUND: Anterior cruciate ligament (ACL) reconstruction is associated with a high incidence of second tears (graft tears and contralateral ACL tears). These secondary tears have been attributed to asymmetrical lower extremity mechanics. Knee bracing is one rehabilitation intervention with the potential to normalize lower extremity asymmetry; however, little is known about the effect of bracing on movement asymmetry in patients following ACL reconstruction. HYPOTHESIS: Wearing a knee brace would increase knee joint flexion and make joint mechanics more symmetrical between limbs. OBJECTIVE: To examine how knee bracing affects knee joint function and symmetry over the course of rehabilitation in patients 6 months following ACL reconstruction. STUDY DESIGN: Controlled laboratory study. LEVEL OF EVIDENCE: Level 3. METHODS: Twenty-three adolescent patients rehabilitating from ACL reconstruction surgery were recruited for the study. Six months after surgery, all subjects underwent a motion analysis assessment during a stop-jump activity, performed with and without an extension-resisting functional knee brace on the surgical side. Statistical analysis utilized a 2 × 2 (limb × brace) analysis of variance with an alpha level of 0.05. RESULTS: Subjects had increased knee flexion on the surgical side when they were braced. The brace condition increased knee flexion velocity, decreased the initial knee flexion angle, and increased the ground reaction force and knee extension moment on both limbs. Side-to-side asymmetry was present across conditions for the vertical ground reaction force and knee extension moment. CONCLUSION: Wearing a knee brace appears to increase lower extremity compliance and promote normalized loading on the surgical side.
CLINICAL RELEVANCE: Knee extension constraint bracing in postoperative ACL patients may improve symmetry of lower extremity mechanics, which is potentially beneficial in progressing rehabilitation and reducing the incidence of second ACL tears.
Abstract:
BACKGROUND: Ipsilateral hindfoot arthrodesis in combination with total ankle replacement (TAR) may diminish functional outcome and prosthesis survivorship compared to isolated TAR. We compared the outcome of isolated TAR to outcomes of TAR with ipsilateral hindfoot arthrodesis. METHODS: In a consecutive series of 404 primary TARs in 396 patients, 70 patients (17.3%) had a hindfoot fusion before, after, or at the time of TAR; the majority had either an isolated subtalar arthrodesis (n = 43, 62%) or triple arthrodesis (n = 15, 21%). The remaining 334 isolated TARs served as the control group. Mean patient follow-up was 3.2 years (range, 2-6 years). RESULTS: The SF-36 total, AOFAS Hindfoot-Ankle pain subscale, Foot and Ankle Disability Index, and Short Musculoskeletal Function Assessment scores were significantly improved from preoperative measures, with no significant differences between the hindfoot arthrodesis and control groups. The AOFAS Hindfoot-Ankle total, function, and alignment scores were significantly improved for both groups, although the control group demonstrated significantly higher scores on all 3 scales. Furthermore, the control group demonstrated a significantly greater improvement in VAS pain score compared to the hindfoot arthrodesis group. Walking speed, sit-to-stand time, and 4-square step test time were significantly improved for both groups at each postoperative time point; however, the hindfoot arthrodesis group completed these tests significantly more slowly than the control group. There was no significant difference in talar component subsidence between the fusion (2.6 mm) and control (2.0 mm) groups. The failure rate in the hindfoot fusion group (10.0%) was significantly higher than that in the control group (2.4%; p < 0.05). CONCLUSION: To our knowledge, this study represents the first series evaluating the clinical outcome of TARs performed with and without hindfoot fusion using implants available in the United States.
At follow-up of 3.2 years, TAR performed with ipsilateral hindfoot arthrodesis resulted in significant improvements in pain and functional outcome; in contrast to prior studies, however, overall outcome was inferior to that of isolated TAR. LEVEL OF EVIDENCE: Level II, prospective comparative series.
Abstract:
BACKGROUND: The majority of total ankle arthroplasty (TAA) systems use extramedullary alignment guides for tibial component placement. However, at least 1 system offers intramedullary referencing. In total knee arthroplasty, studies suggest that tibial component placement is more accurate with intramedullary referencing. The purpose of this study was to compare the accuracy of extramedullary referencing with intramedullary referencing for tibial component placement in total ankle arthroplasty. METHODS: The coronal and sagittal tibial component alignment was evaluated on the postoperative weight-bearing anteroposterior (AP) and lateral radiographs of 236 consecutive fixed-bearing TAAs. Radiographs were measured blindly by 2 investigators. The postoperative alignment of the prosthesis was compared with the surgeon's intended alignment in both planes. The accuracy of tibial component alignment was compared between the extramedullary and intramedullary referencing techniques using unpaired t tests. Interrater and intrarater reliabilities were assessed with intraclass correlation coefficients (ICCs). RESULTS: Eighty-three tibial components placed with an extramedullary referencing technique were compared with 153 implants placed with an intramedullary referencing technique. The accuracy of the extramedullary referencing was within a mean of 1.5 ± 1.4 degrees and 4.1 ± 2.9 degrees in the coronal and sagittal planes, respectively. The accuracy of intramedullary referencing was within a mean of 1.4 ± 1.1 degrees and 2.5 ± 1.8 degrees in the coronal and sagittal planes, respectively. There was a significant difference (P < .001) between the 2 techniques with respect to the sagittal plane alignment. Interrater ICCs for coronal and sagittal alignment were high (0.81 and 0.94, respectively). Intrarater ICCs for coronal and sagittal alignment were high for both investigators. 
CONCLUSIONS: Initial sagittal plane tibial component alignment was notably more accurate when intramedullary referencing was used. Further studies are needed to determine the effect of this difference on clinical outcomes and long-term survivability of the implants. LEVEL OF EVIDENCE: Level III, retrospective comparative study.
Abstract:
Autobiographical memories of trauma victims are often described as disturbed in two ways. First, the trauma is frequently re-experienced in the form of involuntary, intrusive recollections. Second, the trauma is difficult to recall voluntarily (strategically); important parts may be totally or partially inaccessible, a feature known as dissociative amnesia. These characteristics are often mentioned by PTSD researchers and are included as PTSD symptoms in the DSM-IV-TR (American Psychiatric Association, 2000). In contrast, we show that both involuntary and voluntary recall are enhanced by emotional stress during encoding. We also show that the PTSD symptom in the diagnosis addressing dissociative amnesia (trouble remembering important aspects of the trauma) is less well correlated with the remaining PTSD symptoms than its conceptual reversal (having trouble forgetting important aspects of the trauma). Our findings contradict key assumptions that have shaped PTSD research over the last 40 years.
Abstract:
We devised three measures of the general severity of events, which raters applied to participants' narrative descriptions: 1) placing events on a standard normed scale of stressful events, 2) placing events into five bins based on their severity relative to all other events in the sample, and 3) an average of ratings of the events' effects on six distinct areas of the participants' lives. Protocols of negative events were obtained from two non-diagnosed undergraduate samples (n = 688 and 328), a clinically diagnosed undergraduate sample all of whom had traumas and half of whom met PTSD criteria (n = 30), and a clinically diagnosed community sample who met PTSD criteria (n = 75). The three measures of severity correlated highly in all four samples but failed to correlate with PTSD symptom severity in any sample. Theoretical implications for the role of trauma severity in PTSD are discussed.
Abstract:
We examined the frequency and impact of exposure to potentially traumatic events among a nonclinical sample of older adults (n = 3,575), a population typically underrepresented in epidemiological research concerning the prevalence of traumatic events. Current PTSD symptom severity and the centrality of events to identity were assessed for events nominated as currently most distressing. Approximately 90% of participants experienced one or more potentially traumatic events. Events that occurred with greater frequency early in the life course were associated with more severe PTSD symptoms compared to events that occurred with greater frequency during later decades. Early life traumas, however, were not more central to identity. Results underscore the differential impact of traumatic events experienced throughout the life course. We conclude with suggestions for further research concerning mechanisms that promote the persistence of post-traumatic stress related to early life traumas and empirical evaluation of psychotherapeutic treatments for older adults with PTSD.
Abstract:
In the study reported here, we examined posttraumatic stress disorder (PTSD) symptoms in 746 Danish soldiers measured on five occasions before, during, and after deployment to Afghanistan. Using latent class growth analysis, we identified six trajectories of change in PTSD symptoms. Two resilient trajectories had low levels across all five times, and a new-onset trajectory started low and showed a marked increase of PTSD symptoms. Three temporary-benefit trajectories, not previously described in the literature, showed decreases in PTSD symptoms during (or immediately after) deployment, followed by increases after return from deployment. Predeployment emotional problems and predeployment traumas, especially childhood adversities, were predictors for inclusion in the nonresilient trajectories, whereas deployment-related stress was not. These findings challenge standard views of PTSD in two ways. First, they show that factors other than immediately preceding stressors are critical for PTSD development, with childhood adversities being central. Second, they demonstrate that the development of PTSD symptoms shows heterogeneity, which indicates the need for multiple measurements to understand PTSD and identify people in need of treatment.
Abstract:
BACKGROUND: Diagnostic imaging represents the fastest growing segment of costs in the US health system. This study investigated the cost-effectiveness of alternative diagnostic approaches to meniscus tears of the knee, a highly prevalent disease that traditionally relies on MRI as part of the diagnostic strategy. PURPOSE: To identify the most efficient strategy for the diagnosis of meniscus tears. STUDY DESIGN: Economic and decision analysis; Level of evidence, 1. METHODS: A simple-decision model run as a cost-utility analysis was constructed to assess the value added by MRI in various combinations with patient history and physical examination (H&P). The model examined traumatic and degenerative tears in 2 distinct settings: primary care and orthopaedic sports medicine clinic. Strategies were compared using the incremental cost-effectiveness ratio (ICER). RESULTS: In both practice settings, H&P alone was widely preferred for degenerative meniscus tears. Performing MRI to confirm a positive H&P was preferred for traumatic tears in both practice settings, with an ICER below US$50,000 per quality-adjusted life-year. Performing an MRI for all patients was not preferred in any reasonable clinical scenario. The prevalence of a meniscus tear in a clinician's patient population was influential. For traumatic tears, MRI to confirm a positive H&P was preferred when prevalence was less than 46.7%, with H&P alone preferred above that threshold. For degenerative tears, H&P alone was preferred until prevalence reached 74.2%, above which MRI to confirm a negative H&P became the preferred strategy. In both settings, MRI to confirm a positive physical examination led to more than a 10-fold lower rate of unnecessary surgeries than any other strategy, whereas MRI to confirm a negative physical examination led to 2.08- and 2.26-fold higher rates than H&P alone in primary care and orthopaedic clinics, respectively.
CONCLUSION: For all practitioners, H&P is the preferred strategy for the suspected degenerative meniscus tear. An MRI to confirm a positive H&P is preferred for traumatic tears for all practitioners. Consideration should be given to implementing alternative diagnostic strategies as well as enhancing provider education in physical examination skills to improve the reliability of H&P as a diagnostic test. CLINICAL RELEVANCE: Alternative diagnostic strategies that do not include the use of MRI may result in decreased health care costs without harm to the patient and could possibly reduce unnecessary procedures.
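The strategy comparison above rests on the incremental cost-effectiveness ratio: extra cost divided by extra effectiveness, judged against a willingness-to-pay threshold (US$50,000 per quality-adjusted life-year in this study). A minimal sketch of that decision rule, using made-up cost and QALY figures rather than the study's model inputs:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: additional dollars
    spent per additional quality-adjusted life-year gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

def is_cost_effective(cost_new, qaly_new, cost_ref, qaly_ref, wtp=50_000):
    """A more effective but costlier strategy is preferred when
    its ICER falls below the willingness-to-pay threshold."""
    return icer(cost_new, qaly_new, cost_ref, qaly_ref) <= wtp

# Hypothetical strategies: H&P alone vs. MRI to confirm a positive H&P
hp_cost, hp_qaly = 400.0, 0.790
mri_cost, mri_qaly = 1200.0, 0.810
```

With these invented figures, the MRI strategy would cost an extra $800 for 0.02 extra QALYs, an ICER of about $40,000/QALY, and so would clear a $50,000 threshold; a full analysis would also vary inputs (such as tear prevalence) in sensitivity analyses, as the study does.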
Abstract:
OBJECTIVES: Our objectives were to: 1) describe patient-reported communication with their provider and explore differences in perceptions of racially diverse adherent versus nonadherent patients; and 2) examine whether the association between unanswered questions and patient-reported medication nonadherence varied as a function of patients' race. METHODS: We conducted a cross-sectional analysis of baseline in-person survey data from a trial designed to improve postmyocardial infarction management of cardiovascular disease risk factors. RESULTS: Overall, 298 patients (74%) reported never leaving their doctor's office with unanswered questions. Among those who were adherent and nonadherent with their medications, 183 (79%) and 115 (67%) patients, respectively, never left their doctor's office with unanswered questions. In multivariable logistic regression, although the simple effects of the interaction term were different for patients of nonminority race (odds ratio [OR]: 2.16; 95% confidence interval [CI]: 1.19-3.92) and those of minority race (OR: 1.19; 95% CI: 0.54-2.66), the overall interaction effect was not statistically significant (P=0.24). CONCLUSION: The quality of patient-provider communication is critical for cardiovascular disease medication adherence. In this study, however, having unanswered questions did not impact medication adherence differently as a function of patients' race. Nevertheless, there were racial differences in medication adherence that may need to be addressed to ensure optimal adherence and health outcomes. Effort should be made to provide training opportunities for both patients and their providers to ensure strong communication skills and to address potential differences in medication adherence in patients of diverse backgrounds.