22 results for United States. Air Force Officers Training Corps.
at Duke University
Abstract:
Chimpanzees are native only to the jungles of equatorial Africa, but for the last hundred years they have also lived in captivity in the United States: most commonly in biomedical research laboratories, but also at Air Force bases for space program experiments, at accredited and unaccredited zoos, at circuses, as performers in Hollywood, and even in private homes and backyards as pets. That situation has been changing gradually over the last few decades, however, as more and more chimpanzees move to newly established chimpanzee sanctuaries. The transition was already underway even before the announcement by the National Institutes of Health (NIH) last year that it will retire all of its remaining chimpanzees from labs to sanctuaries. By thoroughly examining the evolution of these sanctuaries leading up to that seminal decision, along with the many challenges they face, including money, medical care, conflicting philosophies on the treatment of animals, and the pitfalls that have led other sanctuaries to the brink of ruin, we can gain a better understanding of why chimpanzee sanctuaries are needed and why caretakers of other animal species now look to the chimpanzee sanctuary movement as a model for how animals can be cared for in retirement.
Abstract:
Previously developed models for predicting absolute risk of invasive epithelial ovarian cancer have included a limited number of risk factors and have had low discriminatory power (area under the receiver operating characteristic curve (AUC) < 0.60). Because of this, we developed and internally validated a relative risk prediction model that incorporates 17 established epidemiologic risk factors and 17 genome-wide significant single nucleotide polymorphisms (SNPs) using data from 11 case-control studies in the United States (5,793 cases; 9,512 controls) from the Ovarian Cancer Association Consortium (data accrued from 1992 to 2010). We developed a hierarchical logistic regression model for predicting case-control status that included imputation of missing data. We randomly divided the data into an 80% training sample and used the remaining 20% for model evaluation. The AUC for the full model was 0.664. A reduced model without SNPs performed similarly (AUC = 0.649). Both models performed better than a baseline model that included age and study site only (AUC = 0.563). The best predictive power was obtained in the full model among women younger than 50 years of age (AUC = 0.714); however, the addition of SNPs increased the AUC the most for women older than 50 years of age (AUC = 0.638 vs. 0.616). Adapting this improved model to estimate absolute risk and evaluating it in prospective data sets is warranted.
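As a rough illustration of the validation scheme this abstract describes (an 80/20 split with AUC as the measure of discrimination), here is a minimal sketch in Python. It uses a plain logistic regression on synthetic data; the published model is hierarchical and includes imputation of missing data, and all variable counts below are placeholders.

```python
# Minimal sketch of the evaluation scheme: 80/20 train/test split,
# logistic regression on case-control status, AUC as discrimination.
# Synthetic data only; not the authors' hierarchical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 34))                      # 17 risk factors + 17 SNPs (simulated)
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # synthetic case-control labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC on held-out 20%: {auc:.3f}")
```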
Abstract:
Knowing one's HIV status is particularly important in the setting of recent tuberculosis (TB) exposure. Blood tests for assessment of tuberculosis infection, such as the QuantiFERON Gold in-tube test (QFT; Cellestis Limited, Carnegie, Victoria, Australia), offer the possibility of simultaneous screening for TB and HIV with a single blood draw. We performed a cross-sectional analysis of all contacts of a highly infectious TB case in a large meatpacking factory. Twenty-two percent were foreign-born and 73% were black. Contacts were tested with both tuberculin skin testing (TST) and QFT. HIV testing was offered on an opt-out basis. Persons with TST ≥10 mm, a positive QFT, and/or a positive HIV test were offered latent TB treatment. Three hundred twenty-six contacts were screened: TST results were available for 266 people, and an additional 24 reported a prior positive TST, for a total of 290 persons with any TST result (89.0%). Adequate QFT specimens were obtained for 312 persons (95.7%). Thirty-two persons had QFT results but did not return for TST reading. Twenty-two percent met the criteria for latent TB infection. Eighty-eight percent accepted HIV testing. Two (0.7%) were HIV seropositive; both individuals were already aware of their HIV status, but one had stopped care a year previously. None of the HIV-seropositive persons had latent TB, but all were offered latent TB treatment per standard guidelines. This demonstrates that opt-out HIV testing combined with QFT in a large TB contact investigation was feasible and useful. HIV testing was also widely accepted. Pairing QFT with opt-out HIV testing should be strongly considered when possible.
Abstract:
The best wind sites in the United States are often located far from electricity demand centers and lack transmission access. Local sites that have lower-quality wind resources but do not require as much power transmission capacity are an alternative to distant wind resources. In this paper, we explore the trade-offs between developing new wind generation at local sites and installing wind farms at remote sites. We first examine the general relationship between the higher capital costs required for local wind development and the relatively lower capital costs required to install a wind farm capable of generating the same electrical output at a remote site, with the difference representing the maximum amount an investor should be willing to pay for transmission access. We suggest that this analysis can be used as a first step in comparing potential wind resources to meet a state renewable portfolio standard (RPS). To illustrate, we compare the cost of local wind (∼50 km from the load) to the cost of distant wind requiring new transmission (∼550-750 km from the load) to meet the Illinois RPS. We find that local, lower-capacity-factor wind sites are the lowest-cost option for meeting the Illinois RPS if new long-distance transmission is required to access distant, higher-capacity-factor wind resources. If the higher-capacity-factor sites can instead be connected to the existing grid at minimal cost, in many cases they will be the lower-cost option.
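The core trade-off described here can be sketched with a back-of-the-envelope calculation: a lower-capacity-factor local site needs more installed capacity to deliver the same annual energy, and the resulting capital cost difference bounds what an investor should pay for transmission to a remote site. All numbers below are illustrative assumptions, not figures from the paper.

```python
# Break-even transmission budget for equal annual energy output.
# All inputs are illustrative assumptions.
annual_energy_mwh = 1_000_000          # target annual output, MWh
hours_per_year = 8760
capex_per_mw = 1.8e6                   # assumed installed cost, $/MW

cf_local, cf_remote = 0.28, 0.40       # assumed capacity factors

mw_local = annual_energy_mwh / (hours_per_year * cf_local)
mw_remote = annual_energy_mwh / (hours_per_year * cf_remote)

cost_local = mw_local * capex_per_mw
cost_remote_gen = mw_remote * capex_per_mw

# The most an investor should pay for transmission to the remote site:
max_transmission_spend = cost_local - cost_remote_gen
print(f"Local site:  {mw_local:.0f} MW, ${cost_local/1e6:.0f}M")
print(f"Remote site: {mw_remote:.0f} MW, ${cost_remote_gen/1e6:.0f}M generation")
print(f"Break-even transmission budget: ${max_transmission_spend/1e6:.0f}M")
```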
Abstract:
Atomic force microscopy (AFM), which is normally used for DNA imaging to gain qualitative results, can also be used for quantitative DNA research at the single-molecule level. Here, we evaluate the performance of AFM imaging specifically for quantifying supercoiled and relaxed plasmid DNA fractions within a mixture, and compare the results with the bulk-material analysis method, gel electrophoresis. The advantages and shortcomings of both methods are discussed in detail. Gel electrophoresis is a quick and well-established quantification method. However, it requires a large amount of DNA, and it must be carefully calibrated for even slightly different experimental conditions to ensure accurate quantification. AFM imaging is accurate in that single DNA molecules in different conformations can be seen and counted. When used carefully and with the necessary corrections, both methods provide consistent results. Thus, AFM imaging can be used for DNA quantification as an alternative to gel electrophoresis.
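The single-molecule counting approach lends itself to a simple worked example: classify each imaged plasmid as supercoiled or relaxed, then estimate the supercoiled fraction with a binomial confidence interval. The counts below are hypothetical, and the paper's correction procedure is not reproduced here.

```python
# Estimate the supercoiled fraction from per-molecule AFM counts,
# with a normal-approximation binomial confidence interval.
# Counts are hypothetical placeholders.
import math

n_supercoiled, n_relaxed = 412, 138     # hypothetical molecule counts
n_total = n_supercoiled + n_relaxed

p = n_supercoiled / n_total
se = math.sqrt(p * (1 - p) / n_total)   # standard error of the fraction
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"Supercoiled fraction: {p:.3f} (95% CI {lo:.3f}-{hi:.3f}, n={n_total})")
```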
Abstract:
The variability of summer precipitation in the southeastern United States is examined in this study using 60-yr (1948-2007) rainfall data. The Southeast summer rainfalls exhibited higher interannual variability with more intense summer droughts and anomalous wetness in the recent 30 years (1978-2007) than in the prior 30 years (1948-1977). Such intensification of summer rainfall variability was consistent with a decrease of light (0.1-1 mm day-1) and medium (1-10 mm day-1) rainfall events during extremely dry summers and an increase of heavy (>10 mm day-1) rainfall events in extremely wet summers. Changes in rainfall variability were also accompanied by a southward shift of the region of maximum zonal wind variability at the jet stream level in the latter period. The covariability between the Southeast summer precipitation and sea surface temperatures (SSTs) is also analyzed using the singular value decomposition (SVD) method. It is shown that the increase of Southeast summer precipitation variability is primarily associated with a higher SST variability across the equatorial Atlantic and also SST warming in the Atlantic. © 2010 American Meteorological Society.
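The SVD-based covariability analysis mentioned above (often called maximum covariance analysis) can be sketched in a few lines: take the SVD of the precipitation-SST cross-covariance matrix to obtain paired spatial patterns ordered by squared covariance fraction. The arrays below are random placeholders, not the study's fields.

```python
# Maximum covariance analysis via SVD of the cross-covariance matrix.
# Placeholder random data stand in for gridded anomaly fields.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_precip_pts, n_sst_pts = 60, 50, 80
P = rng.normal(size=(n_years, n_precip_pts))   # precipitation anomalies (time x space)
S = rng.normal(size=(n_years, n_sst_pts))      # SST anomalies (time x space)

P -= P.mean(axis=0)                            # remove time means
S -= S.mean(axis=0)

C = P.T @ S / (n_years - 1)                    # cross-covariance matrix
U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

scf = sigma**2 / np.sum(sigma**2)              # squared covariance fraction
print(f"Leading mode explains {scf[0]:.1%} of squared covariance")
```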
Variation in use of surveillance colonoscopy among colorectal cancer survivors in the United States.
Abstract:
BACKGROUND: Clinical practice guidelines recommend colonoscopies at regular intervals for colorectal cancer (CRC) survivors. Using data from a large, multi-regional, population-based cohort, we describe the rate of surveillance colonoscopy and its association with geographic, sociodemographic, clinical, and health services characteristics. METHODS: We studied CRC survivors enrolled in the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Eligible survivors were diagnosed between 2003 and 2005, had surgery for CRC with curative intent, and were alive without recurrence 14 months after surgery. Data came from patient interviews and medical record abstraction. We used a multivariate logit model to identify predictors of colonoscopy use. RESULTS: Despite guidelines recommending surveillance, only 49% of the 1423 eligible survivors received a colonoscopy within 14 months after surgery. We observed large differences across regions (38% to 57%). Survivors who received surveillance colonoscopy were more likely to have colon cancer than rectal cancer (OR = 1.41, 95% CI: 1.05-1.90), to have visited a primary care physician (OR = 1.44, 95% CI: 1.14-1.82), and to have received adjuvant chemotherapy (OR = 1.75, 95% CI: 1.27-2.41). Compared to survivors with no comorbidities, survivors with moderate or severe comorbidities were less likely to receive surveillance colonoscopy (OR = 0.69, 95% CI: 0.49-0.98 and OR = 0.44, 95% CI: 0.29-0.66, respectively). CONCLUSIONS: Despite guidelines, more than half of CRC survivors did not receive surveillance colonoscopy within 14 months of surgery, with substantial variation by site of care. The associations with primary care visits and adjuvant chemotherapy use suggest that access to care following surgery affects cancer surveillance.
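For readers unfamiliar with how such odds ratios are produced, here is a minimal sketch of a multivariate logit model reported as ORs with 95% confidence intervals. The predictors and data are hypothetical stand-ins, not the CanCORS variables.

```python
# Multivariate logit model reported as odds ratios with 95% CIs.
# Synthetic binary predictors loosely echo the abstract's examples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1400
X = np.column_stack([
    rng.binomial(1, 0.6, n),   # e.g., colon (vs rectal) cancer
    rng.binomial(1, 0.7, n),   # e.g., visited a primary care physician
    rng.binomial(1, 0.4, n),   # e.g., received adjuvant chemotherapy
])
logit_p = -0.5 + X @ np.array([0.35, 0.36, 0.56])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())
names = ["intercept", "colon", "pcp_visit", "adjuvant_chemo"]
for name, or_, (lo, hi) in zip(names, odds_ratios, ci):
    print(f"{name}: OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```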
Abstract:
Cryptococcus gattii causes life-threatening disease in otherwise healthy hosts and, to a lesser extent, in immunocompromised hosts. The highest incidence of this disease is on Vancouver Island, Canada, where an outbreak is expanding into neighboring regions, including mainland British Columbia and the United States. This outbreak is caused predominantly by C. gattii molecular type VGII, specifically VGIIa/major. In addition, a novel genotype, VGIIc, has emerged in Oregon and is now a major source of illness in the region. Through molecular epidemiology and population analysis of MLST and VNTR markers, we show that the VGIIc group is clonal and hypothesize that it arose recently. The VGIIa/IIc outbreak lineages are sexually fertile, and our studies support ongoing recombination in the global VGII population. This illustrates two hallmarks of emerging outbreaks: high clonality and the emergence of novel genotypes via recombination. In macrophage and murine infections, the novel VGIIc genotype and VGIIa/major isolates from the United States are highly virulent compared with similar non-outbreak VGIIa/major-related isolates. Combined MLST-VNTR analysis distinguishes clonal expansion of the VGIIa/major outbreak genotype from related but distinguishable, less-virulent genotypes isolated from other geographic regions. Our evidence documents emerging hypervirulent genotypes in the United States that may expand further, and it provides insight into the possible molecular and geographic origins of the outbreak.
Abstract:
The health of clergy is important, and clergy may find health programming tailored to them more effective than generic programming. Little is known about existing clergy health programs. We contacted Protestant denominational headquarters and searched academic databases and the Internet. We identified 56 clergy health programs and categorized them into prevention and personal enrichment; counseling; marriage and family enrichment; peer support; congregational health; congregational effectiveness; denominational enrichment; insurance/strategic pension plans; and referral-based programs. Only 13 of the programs engaged in outcome evaluation. Using the Socioecological Framework, we found that many programs support individual-level and institutional-level changes, but few programs support congregational-level changes. Outcome evaluation strategies and a central repository for information on clergy health programs are needed. © 2011 Springer Science+Business Media, LLC.
Abstract:
PURPOSE/BACKGROUND: Dynamic balance is an important component of motor skill development. Poor dynamic balance has previously been associated with sport-related injury. However, the vast majority of dynamic balance studies as they relate to sport injury have been conducted in developed North American or European countries. Thus, the purpose of this study was to compare dynamic balance in adolescent male soccer players from Rwanda to a matched group from the United States. METHODS: Twenty-six adolescent male soccer players from Rwanda and 26 age- and gender-matched control subjects from the United States were screened using the Lower Quarter Y Balance Test during their pre-participation physical. Reach asymmetry (cm) between limbs was examined for all reach directions. In addition, reach distance in each direction (normalized to limb length, %LL) and the composite reach score (also normalized to %LL) were examined. Dependent samples t-tests were performed, with significant differences identified at p<0.05. RESULTS: Twenty-six male soccer players from Rwanda (R) were matched to twenty-six male soccer players from the United States (US). The Rwandan soccer players performed better in the anterior (R: 83.9 ± 3.2 %LL; US: 76.5 ± 6.6 %LL, p<0.01), posterolateral (R: 114.4 ± 8.3 %LL; US: 106.5 ± 8.2 %LL, p<0.01) and composite (R: 105.6 ± 1.3 %LL; US: 97.8 ± 6.2 %LL, p<0.01) reach scores. No significant differences between groups were observed for reach asymmetry. CONCLUSIONS: Adolescent soccer players from Rwanda exhibited superior performance on a standardized dynamic balance test in comparison to similar athletes from the United States. The examination of movement abilities of athletes from countries of various origins may allow for a greater understanding of the range of true normative values for dynamic balance. LEVEL OF EVIDENCE: 3b.
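The statistical comparison here is a dependent (paired) samples t-test on matched players' normalized reach scores. A minimal sketch, using simulated values drawn from the reported means and standard deviations rather than the actual study data:

```python
# Paired t-test on composite reach scores (%LL) for matched pairs.
# Simulated data using the abstract's reported means/SDs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs = 26
rwanda = rng.normal(105.6, 1.3, n_pairs)   # composite reach, %LL (simulated)
usa = rng.normal(97.8, 6.2, n_pairs)

t_stat, p_value = stats.ttest_rel(rwanda, usa)
print(f"t({n_pairs - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```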
Abstract:
Given the increases in spatial resolution and other improvements in climate modeling capabilities over the decade since the CMIP3 simulations were completed, CMIP5 provides a unique opportunity to assess scientific understanding of climate variability and change over a range of historical and future conditions. With participation from over 20 modeling groups and more than 40 global models, CMIP5 represents the latest and most ambitious coordinated international climate model intercomparison exercise to date. Observations dating back to 1900 show that temperatures in the twenty-first century have the largest spatial extent of record-breaking and much-above-normal mean monthly maximum and minimum temperatures. The 20-yr return value of the annual maximum or minimum daily temperature is one measure of changes in rare temperature extremes.
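The 20-yr return value mentioned in the last sentence is commonly obtained by fitting a generalized extreme value (GEV) distribution to a series of annual maxima and evaluating its quantile at probability 1 - 1/20. A minimal sketch on a synthetic temperature series; the GEV fit is a standard approach, not necessarily the exact procedure used in this work.

```python
# 20-yr return value from a GEV fit to annual maximum daily temperatures.
# The input series is synthetic.
import numpy as np
from scipy import stats

annual_max_temp = 35 + stats.genextreme.rvs(c=0.1, scale=1.5,
                                            size=60, random_state=0)

shape, loc, scale = stats.genextreme.fit(annual_max_temp)
# The 20-yr return level is the 0.95 quantile of the annual-max distribution.
return_value_20yr = stats.genextreme.ppf(1 - 1 / 20, shape, loc=loc, scale=scale)
print(f"20-yr return value: {return_value_20yr:.1f} degC")
```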
Abstract:
This study investigates the changes of the North Atlantic subtropical high (NASH) and its impact on summer precipitation over the southeastern (SE) United States using the 850-hPa geopotential height field in the National Centers for Environmental Prediction (NCEP) reanalysis, the 40-yr European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40), long-term rainfall data, and Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) model simulations during the past six decades (1948-2007). The results show that the NASH in the last 30 yr has become more intense, and its western ridge has displaced westward with an enhanced meridional movement compared to the previous 30 yr. As the NASH moved closer to the continental United States in the three most recent decades, its effect on the interannual variation of SE U.S. precipitation was enhanced through the ridge's north-south movement. The study's attribution analysis suggests that the changes of the NASH are mainly due to anthropogenic warming. In the twenty-first century, with increasing atmospheric CO2 concentration, the center of the NASH would intensify and the western ridge of the NASH would shift farther westward. These changes would increase the likelihood of both anomalously wet and anomalously dry summers over the SE United States in the future, as suggested by the IPCC AR4 models. © 2011 American Meteorological Society.
Abstract:
This study assesses the skill of advanced regional climate models (RCMs) in simulating southeastern United States (SE US) summer precipitation and explores the physical mechanisms responsible for the simulation skill at a process level. Analysis of the RCM output for the North American Regional Climate Change Assessment Program indicates that the RCM simulations of summer precipitation show the largest biases and a remarkable spread over the SE US compared to other regions in the contiguous US. The causes of such a spread are investigated by performing simulations using the Weather Research and Forecasting (WRF) model, a next-generation RCM developed by the US National Center for Atmospheric Research. The results show that the simulated biases in SE US summer precipitation are due mainly to the misrepresentation of the modeled North Atlantic subtropical high (NASH) western ridge. In the WRF simulations, the NASH western ridge shifts 7° northwestward compared to that in the reanalysis ensemble, leading to a dry bias in the simulated summer precipitation, consistent with the relationship between the NASH western ridge and summer precipitation over the Southeast. Experiments utilizing the four-dimensional data assimilation technique further suggest that improved representation of the circulation patterns (i.e., wind fields) associated with the NASH western ridge substantially reduces the bias in the simulated SE US summer precipitation. Our analysis of circulation dynamics indicates that the NASH western ridge in the WRF simulations is significantly influenced by the simulated planetary boundary layer (PBL) processes over the Gulf of Mexico. Specifically, a decrease (increase) in the simulated PBL height tends to stabilize (destabilize) the lower troposphere over the Gulf of Mexico, and thus inhibits (favors) the onset and/or development of convection. Such changes in tropical convection induce a tropical–extratropical teleconnection pattern, which modulates the circulation along the NASH western ridge in the WRF simulations and contributes to the modeled precipitation biases over the SE US. In conclusion, our study demonstrates that the NASH western ridge is an important factor in RCM skill at simulating SE US summer precipitation. Furthermore, improvements in the PBL parameterizations for the Gulf of Mexico might help advance RCM skill in representing the NASH western ridge circulation and summer precipitation over the SE US. © 2014, Springer-Verlag Berlin Heidelberg.
Abstract:
BACKGROUND: Primary care, an essential determinant of health system equity, efficiency, and effectiveness, is threatened by inadequate supply and distribution of the provider workforce. The Veterans Health Administration (VHA) has been a frontrunner in the use of nurse practitioners (NPs) and physician assistants (PAs). Evaluation of the roles and impact of NPs and PAs in the VHA is critical to ensuring optimal care for veterans and may inform best practices for the use of PAs and NPs in other settings around the world. The purpose of this study was to characterize the use of NPs and PAs in VHA primary care and to examine whether their patients and patient care activities were, on average, less medically complex than those of physicians. METHODS: This is a retrospective cross-sectional analysis of administrative data from VHA primary care encounters between 2005 and 2010. Patient and patient encounter characteristics were compared across provider types (PA, NP, and physician). RESULTS: NPs and PAs attend about 30% of all VHA primary care encounters. NPs, PAs, and physicians fill similar roles in VHA primary care, but the patients of PAs and NPs are slightly less complex than those of physicians, and PAs attend a higher proportion of visits for the purpose of determining eligibility for benefits. CONCLUSIONS: This study demonstrates that a highly successful nationwide primary care system relies on NPs and PAs to provide over one quarter of primary care visits, and that these visits are similar to those of physicians with regard to patient and encounter characteristics. These findings can inform health workforce solutions to physician shortages in the USA and around the world. Future research should compare the quality and costs associated with various combinations of providers and allocations of patient care work, and should elucidate the approaches that maximize quality and efficiency.
Abstract:
Approximately 45,000 individuals are hospitalized annually for burn treatment. Rehabilitation after hospitalization can offer a significant improvement in functional outcomes. Very little is known nationally about rehabilitation for burns, and practices may vary substantially by region, based on observed Medicare post-hospitalization spending amounts. This study was designed to measure variation in rehabilitation utilization by state of hospitalization for patients hospitalized with burn injury. This retrospective cohort study used nationally collected data over a 10-year period (2001 to 2010) from the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SID). Patients hospitalized for burn injury (n = 57,968) were identified by ICD-9-CM codes and were examined to determine whether they were discharged immediately to inpatient rehabilitation after hospitalization (the primary endpoint). Both unadjusted and adjusted likelihoods were calculated for each state, taking into account the effects of age, insurance status, hospitalization at a burn center, and extent of burn injury by total body surface area (TBSA). The relative risk of discharge to inpatient rehabilitation varied by as much as sixfold among states. Higher TBSA, having health insurance, older age, and burn center hospitalization all increased the likelihood of discharge to inpatient rehabilitation following acute care hospitalization. Significant variation between states in inpatient rehabilitation utilization remained after adjusting for variables known to affect the outcome. Future efforts should focus on identifying the causes of this state-to-state variation and its relationship to patient outcomes, and on standardizing treatment across the United States.
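One common way to produce the unadjusted and adjusted state-level likelihoods this abstract describes is a logistic model with state indicators plus the named adjusters (age, insurance, burn-center care, TBSA). A minimal sketch on simulated data; all variable names and values are hypothetical:

```python
# Unadjusted state rates vs covariate-adjusted state effects (logit model).
# All data and variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "state": rng.choice(["A", "B", "C"], n),
    "age": rng.normal(45, 18, n),
    "insured": rng.binomial(1, 0.8, n),
    "burn_center": rng.binomial(1, 0.5, n),
    "tbsa": rng.gamma(2.0, 5.0, n),
})
logit_p = (-3 + 0.02 * df.age + 0.5 * df.insured
           + 0.6 * df.burn_center + 0.05 * df.tbsa)
df["rehab"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = pd.get_dummies(df[["state", "age", "insured", "burn_center", "tbsa"]],
                   columns=["state"], drop_first=True).astype(float)
model = sm.Logit(df["rehab"], sm.add_constant(X)).fit(disp=0)

print(df.groupby("state")["rehab"].mean().round(3))          # unadjusted rates
print(np.exp(model.params.filter(like="state_")).round(2))   # adjusted ORs vs state A
```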