983 results for Risk-taking (Psychology) in adolescence -- United States.
Abstract:
This report demonstrates that religion among U.S. adolescents is positively related to participation in constructive youth activities. In addition, those who participate in religious activities seem to be less likely to participate in many delinquent and risk behaviors.
Abstract:
Multiple measures have been devised by clinicians and theorists from many different backgrounds for the purpose of assessing the influence of the frontal lobes on behaviour. Some utilize self-report measures to investigate behavioural characteristics such as risk-taking, sensation seeking, impulsivity, and sensitivity to reward and punishment in an attempt to understand complex human decision making. Others rely more on neuroimaging and electrophysiological investigation involving experimental tasks thought to demonstrate executive functions in action, while other researchers prefer to study clinical populations with selective damage. Neuropsychological models of frontal lobe functioning have led to a greater appreciation of the dissociations among various aspects of prefrontal cortex function. This thesis involves (1) an examination of various psychometric and experimental indices of executive functions for coherence as one would predict on the basis of highly developed neurophysiological models of prefrontal function, particularly those aspects of executive function that involve predominantly cognitive abilities versus processes characterized by affect regulation; and (2) investigation of the relations between risk-taking, attentional abilities and their associated characteristics using the neurophysiological model of prefrontal functions addressed in (1). Late adolescence is a stage in which the prefrontal cortices undergo intensive structural and functional maturational changes; this period also involves increases in levels of risky and sensation-driven behaviours, as well as a hypersensitivity to reward and a reduction in inhibition. Consequently, late adolescence appears to represent an ideal developmental period in which to examine these decision-making behaviours due to the maximum variability of the behavioural characteristics of interest.
Participants were 45 male undergraduates, 18 to 19 years old, who completed a battery of measures that included self-report, experimental and behavioural measures designed to assess particular aspects of prefrontal and executive functioning. As predicted, factor analysis supported the grouping of executive processes by type (either primarily cognitive or affective), conforming to the orbitofrontal versus dorsolateral typology; risk-taking and associated characteristics were associated more with the orbitofrontal than the dorsolateral factor, whereas attentional and planning abilities tended to correlate more strongly with the dorsolateral factor. Results are discussed in light of future assessment, investigation and understanding of complex human decision-making and executive functions. Implications, applications and suggestions for future research are also proposed.
Abstract:
Genetically engineered (GE) crops are subject to regulatory oversight to ensure their safety for humans and the environment. Their approval in the European Union (EU) starts with an application in a given Member State followed by a scientific step (risk assessment), and ends with a political decision-making step (risk management); and in the United States (US) it starts with a scientific (field trial) step and ends with a ‘bureaucratic’ decision-making step. We investigated trends for the time taken for these steps and the overall time taken for approving GE crops in the US and the EU (traders in these commodities). Results show that from 1996-2015 the overall time trend for approval in the EU decreased and then flattened off, with an overall mean completion-time of 1,763 days. In the US in 1998 there was a break in the trend of the overall approval time: Initially, from 1988 until 1997 the trend decreased with a mean approval time of 1,321 days; from 1998-2015, the trend almost stagnated with a mean approval time of 2,467 days.
Abstract:
Background: Over the last few decades, the prevalence of young adults with disabilities (YAD) has steadily risen as a result of advances in medicine, clinical treatment, and biomedical technology that have enhanced their survival into adulthood. Despite investments in services, family supports, and insurance, they experience poor health status and barriers to successful transition into adulthood. Objectives: We investigated the collective roles of multi-faceted factors at the intrapersonal, interpersonal and community levels within the social ecological framework on health-related outcomes, including self-rated health (SRH), of YAD. The three specific aims are: 1) to examine sociodemographic differences and health insurance coverage in adolescence; 2) to investigate the role of social skills in relationships with family and peers developed in adolescence; and 3) to collectively explore the association of sociodemographic characteristics, social skills, and community participation in adolescence with SRH. Methods: Using longitudinal data (N=5,020) from the National Longitudinal Transition Study (NLTS2), we conducted multivariate logistic regression analyses to understand the association between insurance status as well as social skills in adolescence and YAD's health-related outcomes. Structural equation modeling (SEM) assessed the confluence of multi-faceted factors from the social ecological model that link to health in early adulthood. Results: Compared with YAD who had private insurance, YAD who had public health insurance in adolescence were at higher odds of experiencing poorer health-related outcomes in self-rated health [adjusted odds ratio (aOR) = 2.89, 95% confidence interval (CI): 1.16, 7.23], problems with health (aOR=2.60, 95%CI: 1.26, 5.35), and missing social activities due to health problems (aOR=2.86, 95%CI: 1.39, 5.85).
At the interpersonal level, overall social skills developed through relationships with family and peers in adolescence do not appear to have an association with health-related outcomes in early adulthood. Finally, at the community level, community participation in adolescence does not have an association with SRH in early adulthood. Conclusions: Having public health insurance coverage does not equate to good health. YAD need additional supports to achieve positive health outcomes. The findings on social skills and community participation suggest that other factors may be at play in health-related outcomes for YAD and warrant further investigation.
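The adjusted odds ratios reported above come from multivariate logistic regression; the underlying quantity can be illustrated with a crude (unadjusted) odds ratio and a Wald confidence interval computed from a 2x2 table. A minimal sketch in Python; the counts below are purely hypothetical, not NLTS2 data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: public- vs private-insurance youth reporting
# poor self-rated health (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(30, 70, 12, 88)
```

An adjusted OR like the aOR = 2.89 above would instead come from a fitted regression model that conditions on covariates, but the interpretation of the ratio is the same.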
Abstract:
Approximately 45,000 individuals are hospitalized annually for burn treatment. Rehabilitation after hospitalization can offer a significant improvement in functional outcomes. Very little is known nationally about rehabilitation for burns, and observed Medicare post-hospitalization spending amounts suggest that practices may vary substantially by region. This study was designed to measure variation in rehabilitation utilization by state of hospitalization for patients hospitalized with burn injury. This retrospective cohort study used nationally collected data over a 10-year period (2001 to 2010) from the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SIDs). Patients hospitalized for burn injury (n = 57,968) were identified by ICD-9-CM codes and were examined to determine specifically whether they were discharged immediately to inpatient rehabilitation after hospitalization (primary endpoint). Both unadjusted and adjusted likelihoods were calculated for each state, taking into account the effects of age, insurance status, hospitalization at a burn center, and extent of burn injury by TBSA. The relative risk of discharge to inpatient rehabilitation varied by as much as 6-fold among different states. Higher TBSA, having health insurance, higher age, and burn center hospitalization all increased the likelihood of discharge to inpatient rehabilitation following acute care hospitalization. There was significant variation between states in inpatient rehabilitation utilization after adjusting for variables known to affect each outcome. Future efforts should be focused on identifying the cause of this state-to-state variation and its relationship to patient outcomes, and on standardizing treatment across the United States.
Abstract:
Objectives
The Ebola epidemic has received extensive media coverage since the first diagnosed cases of the virus in the US. We investigated risk perceptions of Ebola among individuals living in the US and measured their knowledge of the virus.
Method
US residents completed an online survey (conducted 14–18 November 2014) that assessed their Ebola knowledge and risk perceptions.
Results
Respondents who were more knowledgeable of Ebola perceived less risk of contracting the virus and were less worried about the virus, but also regarded Ebola as more serious than less knowledgeable respondents. The internet served as a major source of additional information among knowledgeable respondents.
Conclusion
The findings suggest that the provision of health information about Ebola may be effective in informing the public about Ebola risks and preventive measures without downplaying the seriousness of the virus. Policymakers may seek to make further use of the internet as a means of delivering information about Ebola in the US and worldwide.
Abstract:
A cohort of 418 United States Air Force (USAF) personnel from over 15 different bases deployed to Morocco in 1994. This was the first study of its kind and was designed with two primary goals: to determine if the USAF was medically prepared to deploy with its changing mission in the new world order, and to evaluate factors that might improve or degrade USAF medical readiness. The mean length of deployment was 21 days. The cohort was 95% male, 86% enlisted, 65% married, and 78% white. This study shows major deficiencies indicating the USAF medical readiness posture has not fully responded to meet its new mission requirements. Lack of required logistical items (e.g., mosquito nets, rainboots, DEET insecticide cream, etc.) revealed a low state of preparedness. The most notable deficiency was that 82.5% (95% CI = 78.4, 85.9) did not have permethrin-pretreated mosquito nets and 81.0% (95% CI = 76.8, 84.6) lacked mosquito net poles. Additionally, 18% were deficient on vaccinations and 36% had not received a tuberculin skin test. Excluding injections, the overall compliance for preventive medicine requirements had a mean frequency of only 50.6% (95% CI = 45.36, 55.90). Several factors had a positive impact on compliance with logistical requirements. The most prominent was "receiving a medical intelligence briefing" from USAF Public Health. After adjustment for mobility and age, individuals who underwent a briefing were 17.2 (95% CI = 4.37, 67.99) times more likely to have received an immunoglobulin shot and 4.2 (95% CI = 1.84, 9.45) times more likely to start their antimalarial prophylaxis at the proper time. "Personnel on mobility" had the second strongest positive effect on medical readiness.
When mobility and briefing were included in models, "personnel on mobility" were 2.6 (95% CI = 1.19, 5.53) times as likely to have DEET insecticide and 2.2 (95% CI = 1.16, 4.16) times as likely to have had a TB skin test. Five recommendations to improve the medical readiness of the USAF were outlined: upgrade base-level logistical support, improve medical intelligence messages, include medical requirements on travel orders, place more personnel on mobility or only deploy personnel on mobility, and conduct research dedicated to capitalizing on the powerful effect of predeployment briefings. Since this is the first study of its kind, more studies should be performed in different geographic theaters to assess medical readiness and establish acceptable compliance levels for the USAF.
Abstract:
Background. In the United States, the incidence of pancreatic cancer has increased; more than 37,000 new cases of pancreatic cancer were diagnosed in the year 2007. Overall, the five-year survival rate is about 5%, and pancreatic cancer ranks as the fourth leading cause of cancer-related mortality among men and women. Despite the observed progress in cancer diagnosis and treatment, pancreatic cancer remains an unresolved, significant public health problem in the United States. Familial pancreatic cancer has been confirmed to be responsible for approximately 10% of pancreatic cancer cases. However, 90% are still without known inherited predisposition. Until now, the role of oral contraceptive pills (OCPs) and hormonal replacement therapy (HRT) among women with pancreatic cancer has remained unclear. We examined the association of exogenous hormone use with risk of pancreatic cancer in US women. Methods. This was an active hospital-based case-control study conducted at the department of gastrointestinal medical oncology at The University of Texas M.D. Anderson Cancer Center. Between January 2005 and December 2007, a total of 287 women with pathologically confirmed pancreatic cancer (cases) and 287 healthy women (controls) were included in this investigation. Both cases and controls were frequency-matched by age and race. Information about the use of hormonal contraceptives and HRT preparations, as well as information about several risk factors for pancreatic cancer, was collected by personal interview. Univariate and multivariate analyses were performed to analyze the data. Results. We found a statistically significant protective effect of exogenous hormone preparation use on pancreatic cancer development (adjusted odds ratio [AOR], 0.4; 95% confidence interval [CI], 0.2–0.8).
In addition, a 40% reduction in pancreatic cancer risk was observed among women who had ever used any contraceptive method, including oral contraceptive pills (AOR, 0.6; 95% CI, 0.4–0.9). Conclusions. Consistent with previous studies, the use of exogenous hormone preparations, including oral contraceptive pills, may confer a protective effect against pancreatic cancer development. More studies are warranted to explore the underlying mechanism of such protection.
Abstract:
Background: Surgical site infections (SSIs) after abdominal surgeries account for approximately 26% of all reported SSIs. The Centers for Disease Control and Prevention (CDC) defines 3 types of SSIs: superficial incisional, deep incisional, and organ/space. Preventing SSIs has become a national focus. This dissertation assesses several associations with the individual types of SSI in patients who have undergone colon surgery. Methods: Data for this dissertation were obtained from the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP); major colon surgeries occurring between 2007 and 2009 were identified in the database. NSQIP data include more than 50 preoperative and 30 intraoperative factors; 40 collected postoperative occurrences are based on a follow-up period of 30 days from surgery. Initially, four individual logistic regressions were modeled to compare the associations between risk factors and each of the SSI groups: superficial, deep, organ/space and a composite of any single SSI. A second analysis used polytomous regression to simultaneously assess the associations between risk factors and the different types of SSIs, as well as to formally test the different effect estimates of 13 common risk factors for SSIs. The final analysis explored the association between venous thromboembolism (VTE) and the different types of SSIs and risk factors. Results: A total of 59,365 colon surgeries were included in the study. Overall, 13% of colon cases developed a single type of SSI; 8% of these were superficial SSIs, 1.4% were deep SSIs, and 3.8% were organ/space SSIs. The first article identifies the unique set of risk factors associated with each of the 4 SSI models. Distinct risk factors for superficial SSIs included alcohol use, chronic obstructive pulmonary disease, dyspnea and diabetes.
Organ/space SSIs were uniquely associated with disseminated cancer, preoperative dialysis, preoperative radiation treatment, bleeding disorder and prior surgery. Risk factors that were significant in all models had different effect estimates. The second article assesses 13 common SSI risk factors simultaneously across the 3 different types of SSIs using polytomous regression. Each risk factor was then formally tested for effect heterogeneity. If the test was significant, the final model allowed the effect estimates for that risk factor to vary across each type of SSI; if the test was not significant, the effect estimate remained constant across the types of SSIs using the aggregate SSI value. The third article explored the relationship of venous thromboembolism (VTE) with the individual types of SSIs and risk factors. The overall incidence of VTEs after the 59,365 colon cases was 2.4%. All 3 types of SSIs and several risk factors were independently associated with the development of VTEs. Conclusions: Risk factors associated with each type of SSI were different in patients who had undergone colon surgery. Each model had a unique cluster of risk factors. Several risk factors, including increased BMI, duration of surgery, wound class, and laparoscopic approach, were significant across all 4 models, but no statistical inferences can be made about their different effect estimates. These results suggest that aggregating SSIs may misattribute and hide true associations with risk factors. Using polytomous regression to assess multiple risk factors with the multiple types of SSI, this study was able to identify several risk factors that had significant effect heterogeneity across the 3 types of SSI, challenging the use of aggregate SSI outcomes. The third article recognizes the strong association between VTEs and the 3 types of SSIs. Clinicians understand the difference between superficial, deep and organ/space SSIs.
Our results indicate that they should be considered individually in future studies.
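Polytomous (multinomial) regression, as used in the second article, gives each SSI type its own linear predictor against a common reference category ("no SSI") and converts them to probabilities with a softmax. A minimal sketch of that conversion; the coefficients below are purely hypothetical, not NSQIP estimates:

```python
import math

def polytomous_probs(x, coefs):
    """Multinomial-logit probabilities across outcome categories.
    coefs: one (intercept, slope) pair per non-reference category;
    the reference category has linear predictor fixed at 0."""
    etas = [0.0] + [b0 + b1 * x for b0, b1 in coefs]
    denom = sum(math.exp(e) for e in etas)
    return [math.exp(e) / denom for e in etas]

# Hypothetical coefficients for superficial, deep, and organ/space
# SSI versus "no SSI", with x = a standardized risk factor such as
# BMI (illustrative values only).
probs = polytomous_probs(1.0, [(-2.3, 0.4), (-4.2, 0.6), (-3.2, 0.5)])
```

Allowing the slope to differ across categories is exactly the effect heterogeneity the article tests; constraining the slopes to be equal corresponds to the aggregate-SSI model.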
Abstract:
Contains only Graham's speech and letters concerning it, from John Adams, Thomas Jefferson, John Jay and others.
“Doing” Gender in Context: Household Bargaining and Risk of Divorce in Germany and the United States
Abstract:
Gender relations remain embedded in their sociopolitical context. Using event-history analysis, this article compares how household divisions of paid and unpaid labor affect marital stability in the former West Germany, where policy reinforced male-breadwinner families, and the United States, where policy remains silent regarding the private sphere. In Germany, any moves away from separate gendered spheres, in terms of either wives' relative earnings or husbands' relative participation in housework, increase the risk of divorce. In the United States, however, the more stable couples are those that adapt by displaying greater gender equity. These results highlight that policy shapes how gender gets done in the intimate sphere, and that reinforcement of a gendered division of labor may be detrimental to marital stability.
Abstract:
Previously developed models for predicting absolute risk of invasive epithelial ovarian cancer have included a limited number of risk factors and have had low discriminatory power (area under the receiver operating characteristic curve (AUC) < 0.60). Because of this, we developed and internally validated a relative risk prediction model that incorporates 17 established epidemiologic risk factors and 17 genome-wide significant single nucleotide polymorphisms (SNPs) using data from 11 case-control studies in the United States (5,793 cases; 9,512 controls) from the Ovarian Cancer Association Consortium (data accrued from 1992 to 2010). We developed a hierarchical logistic regression model for predicting case-control status that included imputation of missing data. We randomly divided the data into an 80% training sample and used the remaining 20% for model evaluation. The AUC for the full model was 0.664. A reduced model without SNPs performed similarly (AUC = 0.649). Both models performed better than a baseline model that included age and study site only (AUC = 0.563). The best predictive power was obtained in the full model among women younger than 50 years of age (AUC = 0.714); however, the addition of SNPs increased the AUC the most for women older than 50 years of age (AUC = 0.638 vs. 0.616). Adapting this improved model to estimate absolute risk and evaluating it in prospective data sets is warranted.
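The AUC values reported above measure discrimination: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. A minimal sketch of that rank-based (Mann-Whitney) computation in Python, using made-up scores rather than consortium data:

```python
def auc(case_scores, control_scores):
    """AUC as the probability that a random case outscores a
    random control (Mann-Whitney); ties count as 0.5."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Made-up predicted risks yielding AUC = 6/9, roughly the range of
# the full model reported above (illustrative scores only).
value = auc([0.7, 0.4, 0.6], [0.5, 0.3, 0.65])
```

An AUC of 0.5 means the model discriminates no better than chance, which is why models near 0.56 to 0.60 are described above as having low discriminatory power.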
Abstract:
Recent epidemiologic studies have suggested that ultraviolet radiation (UV) may protect against non-Hodgkin lymphoma (NHL), but few, if any, have assessed multiple indicators of ambient and personal UV exposure. Using the US Radiologic Technologists study, we examined the association between NHL and self-reported time outdoors in summer, as well as average year-round and seasonal ambient exposures based on satellite estimates for different age periods, and sun susceptibility in participants who had responded to two questionnaires (1994–1998, 2003–2005) and who were cancer-free as of the earlier questionnaire. Using unconditional logistic regression, we estimated odds ratios (ORs) and 95% confidence intervals for 64,103 participants with 137 NHL cases. Self-reported time outdoors in summer was unrelated to risk. Lower risk was somewhat related to higher average year-round and winter ambient exposure for the age period closest to, and prior to, diagnosis (ages 20–39). Relative to 1.0 for the lowest quartile of average year-round ambient UV, the estimated ORs for successively higher quartiles were 0.68 (0.42–1.10), 0.82 (0.52–1.29), and 0.64 (0.40–1.03) (p-trend = 0.06) for this age period. The lower NHL risk associated with higher year-round average and winter ambient UV provides modest additional support for a protective relationship between UV and NHL.
Abstract:
The rural two-lane highway in the southeastern United States is frequently associated with a disproportionate number of serious and fatal crashes and as such remains a focus of considerable safety research. The Georgia Department of Transportation spearheaded a regional fatal crash analysis to identify various safety performances of two-lane rural highways and to offer guidance for identifying suitable countermeasures with which to mitigate fatal crashes. The fatal crash data used in this study were compiled from Alabama, Georgia, Mississippi, and South Carolina. The database, developed for an earlier study, included 557 randomly selected fatal crashes from 1997 or 1998 or both (this varied by state). Each participating state identified the candidate crashes and performed physical or video site visits to construct crash databases with enhanced site-specific information. Motivated by the hypothesis that single- and multiple-vehicle crashes arise from fundamentally different circumstances, the research team applied binary logit models to predict the probability that a fatal crash is a single-vehicle run-off-road fatal crash given roadway design characteristics, roadside environment features, and traffic conditions proximal to the crash site. A wide variety of factors appears to influence or be associated with single-vehicle fatal crashes. In a model transferability assessment, the authors determined that lane width, horizontal curvature, and ambient lighting are the only three significant variables that are consistent for single-vehicle run-off-road crashes across all study locations.
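A binary logit model of the kind described maps site characteristics to a crash-type probability through the logistic link. A minimal sketch; the intercept and the coefficients for lane width, horizontal curvature, and an unlit-darkness indicator are hypothetical illustrations, not the study's estimates:

```python
import math

def logit_prob(intercept, coefs, features):
    """Binary logit: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    eta = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients for lane width (ft), horizontal
# curvature (degrees), and a dark/unlit indicator (0 or 1);
# all values are illustrative only.
p = logit_prob(2.0, [-0.25, 0.30, 0.55], [11.0, 2.0, 1.0])
```

Under this sketch, a wider lane lowers the predicted probability of a single-vehicle run-off-road outcome while sharper curvature and unlit darkness raise it, mirroring the direction of effects one would test with the study's three transferable variables.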