983 results for Risk-taking (Psychology) in adolescence -- United States.
Abstract:
Background: The risk factors and co-factors for sporadic childhood BL are unknown. We investigated demographic and age-specific characteristics of childhood BL (0–14 years) in the U.S. Procedure: BL age-standardized incidence rates (2000 U.S. standard population) were calculated using data obtained from 12 registries in the NCI’s Surveillance, Epidemiology, and End Results program for cases diagnosed from 1992 through 2005. Incidence rate ratios and 95% confidence intervals (95% CI) were calculated by gender, age group, race, ethnicity, calendar-year period, and registry. Results: Of 296 cases identified, 56% were diagnosed in lymph nodes, 21% in abdominal organs (not including retroperitoneal lymph nodes), 14% were Burkitt cell leukemia, and 9% were in face/head structures. The male-to-female case ratio was highest for facial/head tumors (25:1) and lowest for Burkitt cell leukemia (1.6:1). The BL incidence rate was 2.5 (95% CI 2.3–2.8) cases per million person-years and was higher among boys than girls (3.9 vs. 1.1, p<0.001) and higher among Whites and Asians/Pacific Islanders than among Blacks (2.8 and 2.9 vs. 1.2, respectively, p<0.001). By ethnicity, BL incidence was higher among non-Hispanic Whites than Hispanic Whites (3.2 vs. 2.0, p=0.002). The age-specific incidence rate for BL peaked at ages 3–5 years (3.4 cases per million), then stabilized or declined with increasing age, and did not vary with calendar-year period or registry area. Conclusions: Our results indicate that early childhood exposures, male sex, and White race may be risk factors for sporadic childhood BL in the United States. Keywords: epidemiology, pediatric cancer, non-Hodgkin lymphoma, HIV/AIDS
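The rate comparisons in this abstract rest on standard incidence-rate arithmetic: a rate is cases divided by person-time, and the 95% CI of a rate ratio uses the log-normal approximation with SE(ln IRR) = sqrt(1/a + 1/b), where a and b are the case counts in the two groups. A minimal sketch with illustrative numbers (not the SEER data):

```python
import math

def incidence_rate(cases, person_years, per=1_000_000):
    """Crude incidence rate per `per` person-years."""
    return cases / person_years * per

def irr_with_ci(cases_a, py_a, cases_b, py_b, z=1.96):
    """Incidence rate ratio of group A vs. group B with a 95% CI
    (log-normal approximation; SE of ln(IRR) = sqrt(1/a + 1/b))."""
    irr = (cases_a / py_a) / (cases_b / py_b)
    se = math.sqrt(1 / cases_a + 1 / cases_b)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Illustrative: 250 cases over 100 million person-years -> 2.5 per million
rate = incidence_rate(250, 100_000_000)
```

Age standardization additionally weights age-specific rates by a fixed standard population (here, the 2000 U.S. standard) before summing, so that populations with different age structures become comparable.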
Abstract:
Extreme cold and heat waves, characterised by a number of cold or hot days in succession, place a strain on people’s cardiovascular and respiratory systems. The increase in deaths due to these waves may be greater than that predicted by extreme temperatures alone. We examined cold and heat waves in 99 US cities over 14 years (1987–2000) and investigated how the risk of death depended on the temperature threshold used to define a wave, and on a wave’s timing, duration and intensity. We defined cold and heat waves as temperatures below a cold threshold or above a heat threshold for two or more days. We tried five cold thresholds, using the first to fifth percentiles of temperature, and five heat thresholds, using the ninety-fifth to ninety-ninth percentiles. The additional wave effects were estimated using a two-stage model, so that they were estimated after removing the general effects of temperature. The increases in deaths associated with cold waves were generally small and not statistically significant, and there was even evidence of a decreased risk during the coldest waves. Heat waves generally increased the risk of death, particularly for the hottest heat threshold. Cold waves of a colder intensity or longer duration were not more dangerous. Cold waves earlier in the cool season were more dangerous, as were heat waves earlier in the warm season. In general, there was no increased risk of death during cold waves above the known increased risk associated with cold temperatures. Cold or heat waves earlier in the cool or warm season may be more dangerous because of a build-up in the susceptible pool or a lack of preparedness for cold or hot temperatures.
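The wave definition used here (two or more consecutive days beyond a percentile threshold) amounts to a run-length scan over a daily temperature series. The helper below is an illustrative sketch of that definition, not the authors' code:

```python
def waves(temps, threshold, min_days=2, above=True):
    """Return (start_index, length) runs where `temps` stays above
    (or, with above=False, below) `threshold` for >= `min_days`
    consecutive days -- the paper's heat/cold wave definition."""
    runs, start = [], None
    for i, t in enumerate(temps):
        hit = t > threshold if above else t < threshold
        if hit and start is None:
            start = i                      # a run begins
        elif not hit and start is not None:
            if i - start >= min_days:      # run long enough to count
                runs.append((start, i - start))
            start = None
    if start is not None and len(temps) - start >= min_days:
        runs.append((start, len(temps) - start))  # run reaching the end
    return runs
```

In the study the threshold itself would be a city-specific percentile (e.g. the 99th percentile of daily temperature for the hottest heat definition), computed from that city's own distribution.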
Abstract:
Background: Individual exposure to ultraviolet radiation (UVR) is challenging to measure, particularly for diseases with substantial latency periods between first exposure and diagnosis of outcome, such as cancer. To guide the choice of surrogates for long-term UVR exposure in epidemiologic studies, we assessed how well stable sun-related individual characteristics and environmental/meteorological factors predicted daily personal UVR exposure measurements. Methods: We evaluated 123 United States Radiologic Technologists subjects who wore personal UVR dosimeters for 8 hours daily for up to 7 days (N = 837 days). Potential predictors of personal UVR were derived from a self-administered questionnaire and from public databases that provided daily estimates of ambient UVR and weather conditions. Factors potentially related to personal UVR exposure were tested individually and in a model including all significant variables. Results: The strongest predictors of daily personal UVR exposure in the full model were ambient UVR, latitude, daily rainfall, and skin reaction to prolonged sunlight (R2 = 0.30). In a model containing only environmental and meteorological variables, ambient UVR, latitude, and daily rainfall were the strongest predictors of daily personal UVR exposure (R2 = 0.25). Conclusions: In the absence of feasible measures of individual longitudinal sun exposure history, stable personal characteristics, ambient UVR, and weather parameters may help estimate long-term personal UVR exposure.
Abstract:
Health policy interventions provide powerful tools for addressing health disparities. The Latino community is one of the fastest growing communities in the United States yet is largely underrepresented in government and advocacy efforts. This study includes 42 Latino adults (M age = 45 years) who participated in focus group discussions and completed a brief questionnaire assessing their experiences with political health advocacy. Qualitative analyses revealed participants considered cancer a concern for the Latino community, but there was a lack of familiarity with political advocacy and its role in cancer control. Participants identified structural, practical, cultural, and contextual barriers to engaging in political health advocacy. This article presents a summary of the findings that suggest alternative ways to engage Latinos in cancer control advocacy.
Variation in use of surveillance colonoscopy among colorectal cancer survivors in the United States.
Abstract:
BACKGROUND: Clinical practice guidelines recommend colonoscopies at regular intervals for colorectal cancer (CRC) survivors. Using data from a large, multi-regional, population-based cohort, we describe the rate of surveillance colonoscopy and its association with geographic, sociodemographic, clinical, and health services characteristics. METHODS: We studied CRC survivors enrolled in the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Eligible survivors were diagnosed between 2003 and 2005, had surgery for CRC with curative intent, and were alive without recurrence 14 months after surgery. Data came from patient interviews and medical record abstraction. We used a multivariate logit model to identify predictors of colonoscopy use. RESULTS: Despite guidelines recommending surveillance, only 49% of the 1423 eligible survivors received a colonoscopy within 14 months after surgery, with large differences across regions (38% to 57%). Survivors who received surveillance colonoscopy were more likely to have colon cancer rather than rectal cancer (OR = 1.41, 95% CI: 1.05-1.90), to have visited a primary care physician (OR = 1.44, 95% CI: 1.14-1.82), and to have received adjuvant chemotherapy (OR = 1.75, 95% CI: 1.27-2.41). Compared to survivors with no comorbidities, survivors with moderate or severe comorbidities were less likely to receive surveillance colonoscopy (OR = 0.69, 95% CI: 0.49-0.98 and OR = 0.44, 95% CI: 0.29-0.66, respectively). CONCLUSIONS: Despite guidelines, more than half of CRC survivors did not receive surveillance colonoscopy within 14 months of surgery, with substantial variation by site of care. The associations with primary care visits and adjuvant chemotherapy use suggest that access to care following surgery affects cancer surveillance.
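The odds ratios reported from the logit model are exponentiated regression coefficients; for a single binary predictor the same quantity, with a Woolf (log-normal) 95% CI, can be computed directly from a 2x2 table. A minimal sketch with illustrative counts (not the CanCORS data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf-type 95% CI.
        exposed:   a received colonoscopy, b did not
        unexposed: c received colonoscopy, d did not
    SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

The multivariate model in the study adjusts each OR for the other covariates simultaneously, so crude 2x2 ratios like this one will generally differ from the adjusted estimates quoted in the abstract.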
Abstract:
The health of clergy is important, and clergy may find health programming tailored to them more effective. Little is known about existing clergy health programs. We contacted Protestant denominational headquarters and searched academic databases and the Internet. We identified 56 clergy health programs and categorized them into prevention and personal enrichment; counseling; marriage and family enrichment; peer support; congregational health; congregational effectiveness; denominational enrichment; insurance/strategic pension plans; and referral-based programs. Only 13 of the programs engaged in outcomes evaluation. Using the Socioecological Framework, we found that many programs support individual-level and institutional-level changes, but few programs support congregational-level changes. Outcome evaluation strategies and a central repository for information on clergy health programs are needed.
Abstract:
OBJECTIVE: To ascertain the degree of variation, by state of hospitalization, in outcomes associated with traumatic brain injury (TBI) in a pediatric population. DESIGN: A retrospective cohort study of pediatric patients admitted to a hospital with a TBI. SETTING: Hospitals from states in the United States that voluntarily participate in the Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project. PARTICIPANTS: Pediatric (age ≤ 19 y) patients hospitalized for TBI (N=71,476) in the United States during 2001, 2004, 2007, and 2010. INTERVENTIONS: None. MAIN OUTCOME MEASURES: Primary outcome was proportion of patients discharged to rehabilitation after an acute care hospitalization among alive discharges. The secondary outcome was inpatient mortality. RESULTS: The relative risk of discharge to inpatient rehabilitation varied by as much as 3-fold among the states, and the relative risk of inpatient mortality varied by as much as nearly 2-fold. In the United States, approximately 1981 patients could be discharged to inpatient rehabilitation care if the observed variation in outcomes was eliminated. CONCLUSIONS: There was significant variation between states in both rehabilitation discharge and inpatient mortality after adjusting for variables known to affect each outcome. Future efforts should be focused on identifying the cause of this state-to-state variation, its relationship to patient outcome, and standardizing treatment across the United States.
Abstract:
The ability to monitor and evaluate the consequences of ongoing behaviors and coordinate behavioral adjustments seems to rely on networks including the anterior cingulate cortex (ACC) and phasic changes in dopamine activity. Activity (and presumably functional maturation) of the ACC may be indirectly measured using the error-related negativity (ERN), an event-related potential (ERP) component that is hypothesized to reflect activity of the automatic response monitoring system. To date, no studies have examined the measurement reliability of the ERN as a trait-like measure of response monitoring, its development in mid- and late adolescence, or its relation to risk-taking and empathic ability, two traits linked to dopaminergic and ACC activity. Utilizing a large sample of 15- and 18-year-old males, the present study examined the test-retest reliability of the ERN; age-related changes in the ERN and other ERP components associated with error monitoring (the Pe and CRN); and the relations of the error-related ERP components to the personality traits of risk propensity and empathy. Results indicated good test-retest reliability of the ERN, providing important validation of the ERN as a stable and possibly trait-like electrophysiological correlate of performance monitoring. Of the three components, only the ERN was of greater amplitude for the older adolescents, suggesting that its ACC network is functionally late to mature, due to either structural or neurochemical changes with age. Finally, the ERN was smaller for those with high risk propensity and low empathy, while other components associated with error monitoring were not, which suggests that poor ACC function may be associated with the desire to engage in risky behaviors and that the ERN may be influenced by the extent of individuals' concern with the outcome of events.
Abstract:
This study examined adolescents' reported sexual and dietary health-risk behaviours and perceptions. Specifically, this study analyzed the data of 600 students (300 male, 300 female) in grades 9, 11, and OAC. The mean age of the students in the sample was 16 years, with a standard deviation of 1.6. The study was a secondary analysis of the first-year data of a 3-year longitudinal study of adolescents conducted by the Youth Lifestyle Choices-Community University Research Alliance (YLC-CURA). To explore sexuality and dietary health, this study purposefully selected sections of the survey that represented the sex and dieting behaviours of adolescents. Separate gender and age data analyses revealed different patterns among the variables. Specifically, findings revealed that adolescents who engaged in recent sexual activities were more likely to have a relatively more positive body image perception and were relatively more likely to engage in disordered eating. Across both genders and 3 age levels, adolescents reported that despite their unhealthy dietary habits they felt that dieting was not a high-risk behaviour. Results were discussed in terms of educational implications for sexual health programs.
Abstract:
BACKGROUND: Peripheral artery disease (PAD) is common and imposes a high risk of major systemic and limb ischemic events. The REduction of Atherothrombosis for Continued Health (REACH) Registry is an international prospective registry of patients at risk of atherothrombosis caused by established arterial disease or the presence of 3 atherothrombotic risk factors. METHODS AND RESULTS: We compared the 2-year rates of vascular-related hospitalizations and associated costs in US patients with established PAD across patient subgroups. Symptomatic PAD at enrollment was identified on the basis of current intermittent claudication with an ankle-brachial index (ABI) <0.90 or a history of lower-limb revascularization or amputation. Asymptomatic PAD was diagnosed on the basis of an enrollment ABI <0.90 in the absence of symptoms. Overall, 25 763 of the total 68 236-patient REACH cohort were enrolled from US sites; 2396 (9.3%) had symptomatic and 213 (0.8%) had asymptomatic PAD at baseline. One- and cumulative 2-year follow-up data were available for 2137 (82%) and 1677 (64%) of US REACH patients with either symptomatic or asymptomatic PAD, respectively. At 2 years, mean cumulative hospitalization costs, per patient, were $7445, $7000, $10 430, and $11 693 for patients with asymptomatic PAD, a history of claudication, lower-limb amputation, and revascularization, respectively (P=0.007). A history of peripheral intervention (lower-limb revascularization or amputation) was associated with higher rates of subsequent procedures at both 1 and 2 years. CONCLUSIONS: The economic burden of PAD is high. Recurring hospitalizations and repeat revascularization procedures suggest that neither patients, physicians, nor healthcare systems should assume that a first admission for a lower-extremity PAD procedure serves as a permanent resolution of this costly and debilitating condition.
Abstract:
Objectives: Our objective in this study was to compare assistance received by individuals in the United States and Sweden with characteristics associated with low, moderate, or high 1-year placement risk in the United States. Methods: We used longitudinal nationally representative data from 4,579 participants aged 75 years and older in the 1992 and 1993 waves of the Medicare Current Beneficiary Survey (MCBS) and cross-sectional data from 1,379 individuals aged 75 years and older in the Swedish Aging at Home (AH) national survey for comparative purposes. We developed a logistic regression equation using U.S. data to identify individuals with 3 levels (low, moderate, or high) of predicted 1-year institutional placement risk. Groups with the same characteristics were identified in the Swedish sample and compared on formal and informal assistance received. Results: Formal service utilization was higher in the Swedish sample, whereas informal service use was lower overall. Individuals with characteristics associated with high placement risk received more formal and less informal assistance in Sweden relative to the United States. Discussion: The differences suggest that formal services supplement informal support in the United States and that formal and informal services are complementary in Sweden.
Abstract:
The dissertation titled "Driver Safety in Far-side and Far-oblique Crashes" presents a novel approach to assessing vehicle cockpit safety by integrating Human Factors and Applied Mechanics. The methodology is aimed at improving safety in compact mobile workspaces such as patrol vehicle cockpits. A statistical analysis of Michigan's state traffic crash data, performed to assess the contributing factors that affect the risk of severe driver injuries, showed that the risk was greater for unrestrained drivers (OR=3.38, p<0.0001) and for front and far-side crashes without seatbelts (OR=8.0 and 23.0 respectively, p<0.005). The statistics also showed that near-side and far-side crashes pose a similar threat to driver injury severity. A Human Factors survey was conducted to assess Human-Machine/Human-Computer Interaction aspects of patrol vehicle cockpits. Results showed that tasks requiring manual operation, especially use of the laptop, require more attention and potentially cause more distraction. A vehicle survey conducted to evaluate ergonomics-related issues revealed that some of the equipment was in airbag deployment zones. In addition, experiments were conducted to assess the effects on driver distraction of changing the position of in-car accessories. A driving simulator study was conducted to mimic HMI/HCI in a patrol vehicle cockpit (20 subjects, average driving experience = 5.35 years, s.d. = 1.8). The mounting locations of manual tasks did not result in a significant change in response times. Visual displays resulted in response times of less than 1.5 s, and the manual task was equally distracting regardless of mounting position (average response time was 15 s). Average speeds and lane deviations did not show any significant effects.
Data from 13 full-scale sled tests conducted to simulate far-side impacts at 70 PDOF and 40 PDOF were used to analyze head injuries and HIC/AIS values. Accelerations generated by the vehicle deceleration alone were high enough to cause AIS 3 - AIS 6 injuries. Pretensioners mitigated injuries only in 40 PDOF (oblique) impacts and were ineffective in 70 PDOF impacts. Seat belts were ineffective in protecting the driver's head from injuries. The head would come into contact with the laptop during a far-oblique (40 PDOF) crash and with the far-side door in an angle-type (70 PDOF) crash. Finite Element analysis of the head-laptop impact interaction showed that the contact velocity was the most crucial factor in causing a severe (and potentially fatal) head injury. Results indicate that no equipment may be mounted in the driver trajectory envelopes, leaving a very narrow band of space in patrol vehicles where manual-task equipment can be installed both safely and ergonomically. In the event of contact, the material stiffness and damping properties play a very significant role in determining the injury outcome. Future work may address improving the interior materials' properties to better absorb and dissipate the kinetic energy of the head, as well as further improving the design of seat belts and pretensioners.
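The HIC values analyzed above come from the standard Head Injury Criterion: HIC = max over time windows [t1, t2] of (t2 - t1) * [average resultant head acceleration over the window]^2.5, with the window capped at 15 ms for HIC15. A minimal brute-force sketch of that formula (illustrative, not the dissertation's code):

```python
def hic(accel_g, dt, max_window=0.015):
    """Head Injury Criterion from a sampled acceleration trace.
    accel_g:    resultant head acceleration samples, in g
    dt:         sample spacing, in seconds
    max_window: longest window searched (0.015 s for HIC15)."""
    n = len(accel_g)
    max_k = max(1, int(round(max_window / dt)))  # samples per window
    # cumulative integral of a(t) via the trapezoid rule
    cum = [0.0] * n
    for i in range(1, n):
        cum[i] = cum[i - 1] + 0.5 * (accel_g[i - 1] + accel_g[i]) * dt
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_k, n - 1) + 1):
            t = (j - i) * dt                 # window duration t2 - t1
            avg = (cum[j] - cum[i]) / t      # mean acceleration in window
            best = max(best, t * avg ** 2.5)
    return best
```

For a constant pulse the maximum is always taken at the full window length, which makes the sketch easy to sanity-check by hand; real sled-test traces are oscillatory, so the search over all windows matters.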
Abstract:
This research examines the prevalence of alcohol and illicit substance use in the United States and Mexico and associated socio-demographic characteristics. The sources of data for this study are public domain data from the U.S. National Household Survey of Drug Abuse, 1988 (n = 8814), and the Mexican National Survey of Addictions, 1988 (n = 12,579). In addition, this study discusses methodologic issues in cross-cultural and cross-national comparison of behavioral and epidemiologic data from population-based samples. The extent to which patterns of substance abuse vary among subgroups of the U.S. and Mexican populations is assessed, as well as the comparability and equivalence of measures of alcohol and drug use in these national samples. The prevalence of alcohol use was somewhat similar in the two countries for all three measures of use (lifetime, past year, and past-year heavy use: 85.0%, 68.1%, and 39.6% for the U.S. vs. 72.6%, 47.7%, and 45.8% for Mexico). The use of illegal substances varied widely between countries, with U.S. respondents reporting significantly higher levels of use than their Mexican counterparts. For example, reported use of any illicit substance in lifetime and past year was 34.2% and 11.6% for the U.S. versus 3.3% and 0.6% for Mexico. Despite these differences in prevalence, two demographic characteristics, gender and age, were important correlates of use in both countries. Men in both countries were more likely than women to report use of alcohol and illicit substances.
Generally speaking, a greater proportion of respondents in both countries 18 years of age or older reported use of alcohol for all three measures than younger respondents; and a greater proportion of respondents between the ages of 18 and 34 years reported use of illicit substances during lifetime and past year than any other age group. Additional substantive research investigating population-based samples and at-risk subgroups is needed to understand the underlying mechanisms of these associations. Further development of cross-culturally meaningful survey methods is warranted to validate comparisons of substance use across countries and societies.
Abstract:
Cutaneous malignant melanoma (CMM) is the cancer of the melanocytes, the cells that produce the pigment melanin, and is an aggressive skin cancer that is most prevalent in the white population. Although most cases of malignant melanoma occur in whites, black and other non-white populations also develop this disease. However, the etiologic factors involved in the development of melanoma in these lower-risk populations are not well known. Generally, survival rates for malignant melanoma have been found to be lower in blacks than in whites with a similar stage of disease at diagnosis. This study presents an analysis of the differences in survival between black and white cases with malignant melanoma of the skin as the only or first primary cancer, found in the National Cancer Institute Surveillance, Epidemiology and End Results (SEER) cancer registry from 1973 to 1997. A total of 54,193 cases of CMM were diagnosed in black and white patients between 1973 and 1997. Black patients tended to be older, with a mean age of 64.46 years, compared to 53.14 years for white patients. Eighty-nine percent of patients were diagnosed with CMM as the only cancer. (Abstract shortened by UMI.)