891 results for Resource use
Abstract:
Maize demand for food, livestock feed, and biofuel is expected to increase substantially. The Western U.S. Corn Belt accounts for 23% of U.S. maize production, and irrigated maize accounts for 43% of maize land area and 58% of total maize production in this region. Estimates of the most sensitive parameters governing performance of maize systems in the region (yield potential [YP], water-limited yield potential [YP-W], the yield gap between actual yield and YP, and resource-use efficiency) are lacking. A simulation model was used to quantify YP under irrigated and rainfed conditions based on weather data, soil properties, and crop management at 18 locations. In a separate study, five years of soil water data measured in central Nebraska were used to analyze soil water recharge during the non-growing season, because soil water content at sowing is a critical component of the water supply available to summer crops. On-farm data, including yield, irrigation, and nitrogen (N) rate for 777 field-years, were used to quantify the size of yield gaps and to evaluate resource-use efficiency. Simulated average YP and YP-W were 14.4 and 8.3 Mg ha-1, respectively. Geospatial variation in YP was associated with solar radiation and temperature during the post-anthesis phase, while variation in water-limited yield was linked to longitudinal variation in seasonal rainfall and evaporative demand. Analysis of soil water recharge indicated that 80% of the variation in soil water content at sowing can be explained by precipitation during the non-growing season and residual soil water at the end of the previous growing season. A linear relationship between YP-W and water supply (slope: 19.3 kg ha-1 mm-1; x-intercept: 100 mm) can be used as a benchmark to diagnose and improve farmers' water productivity (WP; kg grain per unit of water supply). Evaluation of data from farmers' fields provides proof of concept and helps identify management constraints to high levels of productivity and resource-use efficiency. On average, actual yields of irrigated maize systems were 11% below YP. WP and N-fertilizer use efficiency (NUE) were high (14 kg grain mm-1 of water supply and 71 kg grain kg-1 of N fertilizer) despite application of large amounts of irrigation water and N fertilizer. While there is limited scope for substantial increases in actual average yields, WP and NUE can be further increased by (1) switching from surface to pivot irrigation systems, (2) using conservation instead of conventional tillage in soybean-maize rotations, (3) scheduling irrigation based on crop water requirements, and (4) improving N fertilizer management.
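As an illustration added here (not part of the original abstract), the benchmark above can be written as a linear function of seasonal water supply; WS is our symbol for water supply in mm, and linearity through the stated x-intercept is assumed:

    \[
    Y_{P\text{-}W} = 19.3\,(\mathrm{WS} - 100)\ \mathrm{kg\ ha^{-1}}, \qquad \mathrm{WS} \ge 100\ \mathrm{mm}.
    \]

On this line, the simulated average YP-W of 8.3 Mg ha-1 (8,300 kg ha-1) corresponds to a water supply of about 8,300/19.3 + 100 ≈ 530 mm; a field whose water productivity falls well below the line points to a management constraint rather than a water limitation.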
Abstract:
Studies of consumer-resource interactions suggest that individual diet specialisation is empirically widespread and theoretically important to the organisation and dynamics of populations and communities. We used weighted networks to analyse resource use by sea otters, testing three alternative models for how individual diet specialisation may arise. As expected, individual specialisation was absent when otter density was low, but increased at high otter density. A high-density emergence of nested resource-use networks was consistent with the model assuming individuals share preference ranks. However, a density-dependent emergence of a non-nested, modular network for core resources was more consistent with the competitive refuge model. Individuals from different diet modules showed predictable variation in rank-order prey preferences and in handling times of core resources, further supporting the competitive refuge model. Our findings support a hierarchical organisation of diet specialisation and suggest that individual use of core and marginal resources may be driven by different selective pressures.
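A minimal sketch (ours, not the study's method) of how the two contrasted network structures can be probed on a weighted individual-by-resource network: the `diets` data are invented, the subset-based nestedness proxy is a crude stand-in for formal nestedness metrics, and module detection uses networkx's greedy modularity algorithm rather than the authors' approach:

    # Sketch: probe nestedness vs. modularity in a weighted
    # individual-by-resource network. Data and the nestedness proxy are
    # illustrative assumptions.
    from itertools import combinations
    from networkx import Graph
    from networkx.algorithms.community import greedy_modularity_communities

    diets = {  # hypothetical otters -> {resource: interaction weight}
        "otter1": {"urchin": 5, "crab": 3, "clam": 1},
        "otter2": {"urchin": 4, "crab": 2},
        "otter3": {"snail": 6, "abalone": 2},
        "otter4": {"snail": 5, "abalone": 3, "clam": 1},
    }

    def nestedness_proxy(diets):
        """Fraction of individual pairs where the smaller diet is a subset
        of the larger one (1.0 for a perfectly nested pattern)."""
        sets = [set(d) for d in diets.values()]
        pairs = list(combinations(sets, 2))
        return sum(a <= b or b <= a for a, b in pairs) / len(pairs)

    # Bipartite weighted graph: individuals on one side, resources on the other.
    G = Graph()
    for ind, prey in diets.items():
        for res, w in prey.items():
            G.add_edge(ind, res, weight=w)

    modules = greedy_modularity_communities(G, weight="weight")
    print(f"nestedness proxy: {nestedness_proxy(diets):.2f}")  # 0.33 here
    print(f"diet modules found: {len(modules)}")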
Abstract:
1. A long-standing question in ecology is how natural populations respond to a changing environment. Emergent, optimal-foraging-theory-based models of individual variation go beyond the population level and predict how individuals respond to disturbances that change resource availability. 2. Evaluating variation in resource use patterns at the intrapopulation level in wild populations under changing environmental conditions would allow further advances in foraging ecology and evolution by giving a better picture of the mechanisms underlying trophic diversity. 3. In this study, we use a large spatio-temporal data set (western continental Europe, 1968-2006) on the diet of Bonelli's Eagle Aquila fasciata breeding pairs to analyse the predator's trophic responses at the intrapopulation level to a prey population crash. In particular, we borrow metrics from studies of network structure and intrapopulation variation to understand how an emerging infectious disease [rabbit haemorrhagic disease (RHD)], which caused the density of the eagle's primary prey (the rabbit Oryctolagus cuniculus) to drop dramatically across Europe, affected resource use patterns of this endangered raptor. 4. Following the major RHD outbreak, substantial changes took place in the diversity and organisation of Bonelli's Eagle diets at the intrapopulation level. Dietary variation among breeding pairs was larger after the outbreak than before. Before RHD, there were no clusters of pairs with similar diets, but significant clustering emerged after RHD. Moreover, diets at the pair level showed a nested pattern before RHD, but not after. 5. Here, we reveal how intrapopulation patterns of resource use can vary quantitatively and qualitatively under drastic changes in resource availability. 6. For the first time, we show that a pathogen of a prey species can indirectly alter the intrapopulation patterns of resource use of an endangered predator.
Abstract:
Changes in resource use over time can provide insight into technological choice and the extent of long-term stability in cultural practices. In this paper we re-evaluate the evidence for a marked demographic shift at the inception of the Early Iron Age (EIA) at Troy by applying a robust macroscale analysis of changing ceramic resource use over the Late Bronze Age and Iron Age. We use a combination of new and legacy analytical datasets (NAA and XRF) from excavated ceramics to evaluate the potential compositional range of local resources, based on comparisons with sediments from within a 10 km radius of the site. Results show a clear distinction between sediment-defined local and non-local ceramic compositional groups. Two discrete local ceramic resources had been identified previously, and we confirm a third local resource for a major class of EIA handmade wares and cooking pots. This third source appears to derive from a residual resource on the Troy peninsula rather than from the adjacent alluvial valleys. The presence of a group of large and heavy pithoi among the non-local groups raises questions about their regional or maritime origin.
Abstract:
Sustainable natural resource use requires that multiple actors reassess their situation from a systemic perspective. This can be conceptualised as a social learning process between actors from rural communities and experts from outside organisations. A specifically designed workshop, oriented towards a systemic view of natural resource use and the enhancement of mutual learning between local and external actors, provided the background for evaluating the potentials and constraints of intensified social learning processes. Case studies in rural communities in India, Bolivia, Peru and Mali showed that changes in the narratives of workshop participants followed a similar temporal sequence relatively independently of their specific contexts. Social learning processes were found to be more likely to succeed if they (1) opened new space for communicative action, allowing for an intersubjective redefinition of the present situation, and (2) contributed to rebalancing the relationships between social capital and social, emotional and cognitive competencies within and between local and external actors.
Abstract:
OBJECTIVE: To determine the impact of a community based Helicobacter pylori screening and eradication programme on the incidence of dyspepsia, resource use, and quality of life, including a cost consequences analysis. DESIGN: H pylori screening programme followed by randomised placebo controlled trial of eradication. SETTING: Seven general practices in southwest England. PARTICIPANTS: 10,537 unselected people aged 20-59 years were screened for H pylori infection (13C-urea breath test); 1,558 of the 1,636 participants who tested positive were randomised to H pylori eradication treatment or placebo, and 1,539 (99%) were followed up for two years. INTERVENTION: Ranitidine bismuth citrate 400 mg and clarithromycin 500 mg twice daily for two weeks, or placebo. MAIN OUTCOME MEASURES: The primary outcome was the primary care consultation rate for dyspepsia (defined as epigastric pain) two years after randomisation; secondary outcomes were dyspepsia symptoms, resource use, NHS costs, and quality of life. RESULTS: In the eradication group, 35% fewer participants consulted for dyspepsia over two years than in the placebo group (55/787 v 78/771; odds ratio 0.65, 95% confidence interval 0.46 to 0.94; P = 0.021; number needed to treat 30) and 29% fewer participants had regular symptoms (odds ratio 0.71, 0.56 to 0.90; P = 0.05). NHS costs were £84.70 (£74.90 to £93.91) greater per participant in the eradication group over two years, of which £83.40 ($146; €121) was the cost of eradication treatment. There was no difference in quality of life between the two groups. CONCLUSIONS: Community screening and eradication of H pylori is feasible in the general population and led to significant reductions in the number of people who consulted for dyspepsia and had symptoms two years after treatment. These benefits have to be balanced against the costs of eradication treatment, so a targeted eradication strategy in dyspeptic patients may be preferable.
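As a quick arithmetic check added here (ours, not the trial's), the number needed to treat can be approximately recovered from the raw consultation counts; the small gap to the reported figure of 30 presumably reflects the trial's adjusted analysis:

    \[
    \mathrm{ARR} = \frac{78}{771} - \frac{55}{787} \approx 0.101 - 0.070 = 0.031,
    \qquad \mathrm{NNT} = \frac{1}{\mathrm{ARR}} \approx 32.
    \]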
Abstract:
OBJECTIVE: To evaluate the effectiveness of a practice nurse led strategy to improve the notification and treatment of partners of people with chlamydia infection. DESIGN: Randomised controlled trial. SETTING: 27 general practices in the Bristol and Birmingham areas. PARTICIPANTS: 140 men and women with chlamydia (index cases) diagnosed by screening of a home-collected urine sample or vulval swab specimen. INTERVENTIONS: Partner notification at the general practice immediately after diagnosis by trained practice nurses, with telephone follow up by a health adviser; or referral to a specialist health adviser at a genitourinary medicine clinic. MAIN OUTCOME MEASURES: The primary outcome was the proportion of index cases with at least one treated sexual partner. Specified secondary outcomes included the number of sexual contacts elicited during a sexual history, a positive test result for chlamydia six weeks after treatment, and the cost of each strategy in 2003 sterling prices. RESULTS: 65.3% (47/72) of participants receiving practice nurse led partner notification had at least one partner treated, compared with 52.9% (39/68) of those referred to a genitourinary medicine clinic (risk difference 12.4%, 95% confidence interval -1.8% to 26.5%). Of the 68 participants referred to the clinic, 21 (31%) did not attend. The costs per index case were £32.55 for the practice nurse led strategy and £32.62 for the specialist referral strategy. CONCLUSION: Practice based partner notification by trained nurses with telephone follow up by health advisers is at least as effective as referral to a specialist health adviser at a genitourinary medicine clinic, and costs the same. TRIAL REGISTRATION: ClinicalTrials.gov NCT00112255.
Abstract:
OBJECTIVE: To examine variability in outcome and resource use between ICUs. Secondary aims: to assess whether outcome and resource use are related to ICU structure and process, and to explore factors associated with efficient resource use. DESIGN AND SETTING: Cohort study based on the SAPS 3 database in 275 ICUs worldwide. PATIENTS: 16,560 adults. MEASUREMENTS AND RESULTS: Outcome was defined by the standardized mortality rate (SMR). Standardized resource use (SRU) was calculated from length of stay in the ICU, adjusted for severity of acute illness. Each unit was assigned to one of four groups: "most efficient" (SMR and SRU below the median); "least efficient" (SMR and SRU above the median); "overachieving" (low SMR, high SRU); and "underachieving" (high SMR, low SRU). Univariate analysis and stepwise logistic regression were used to test for factors separating the "most" from the "least efficient" units. Overall median SMR was 1.00 (IQR 0.77-1.28) and median SRU was 1.07 (0.76-1.58). There were 91 "most efficient", 91 "least efficient", 47 "overachieving", and 46 "underachieving" ICUs. The numbers of physicians, full-time specialists, and nurses per bed, clinical rounds, availability of physicians, presence of an emergency department, and geographical region were significant in univariate analysis. In multivariate analysis, only interprofessional rounds, an emergency department, and geographical region entered the model as significant. CONCLUSIONS: Despite considerable variability in outcome and resource use, only a few factors of ICU structure and process were associated with efficient use of ICU resources, suggesting that other, confounding factors play an important role.
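A minimal sketch of the quadrant assignment described above, assuming per-ICU SMR and SRU values have already been computed; the `icus` record layout and function name are our assumptions, and only the median-split logic follows the abstract:

    # Sketch: assign ICUs to the four efficiency groups by median splits of
    # standardized mortality rate (SMR) and standardized resource use (SRU).
    # The `icus` record layout is an illustrative assumption.
    from statistics import median

    def classify_icus(icus):
        """icus: list of dicts, each with 'smr' and 'sru' for one ICU."""
        smr_med = median(u["smr"] for u in icus)
        sru_med = median(u["sru"] for u in icus)
        for u in icus:
            low_smr = u["smr"] < smr_med  # better-than-median outcome
            low_sru = u["sru"] < sru_med  # lower-than-median resource use
            if low_smr and low_sru:
                u["group"] = "most efficient"
            elif not low_smr and not low_sru:
                u["group"] = "least efficient"
            elif low_smr:
                u["group"] = "overachieving"   # low mortality, high resource use
            else:
                u["group"] = "underachieving"  # high mortality, low resource use
        return icus

For example, classify_icus([{"smr": 0.8, "sru": 1.4}, {"smr": 1.2, "sru": 0.9}]) labels the first unit "overachieving" and the second "underachieving".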
Abstract:
INTRODUCTION: The paucity of data on resource use in critically ill patients with hematological malignancy, and the perception that these patients have poor outcomes, can lead to uncertainty over the extent to which intensive care treatment is appropriate. The aim of the present study was to assess the amount of intensive care resources needed for, and the effect of treatment of, hemato-oncological patients in the intensive care unit (ICU) in comparison with a nononcological patient population with a similar degree of organ dysfunction. METHODS: A retrospective cohort study of 101 ICU admissions of 84 consecutive hemato-oncological patients and 3,808 ICU admissions of 3,478 nononcological patients over a period of 4 years was performed. RESULTS: As assessed by Therapeutic Intervention Scoring System points, resource use was higher in hemato-oncological patients than in nononcological patients (median (interquartile range), 214 (102 to 642) versus 95 (54 to 224); P < 0.0001). Severity of disease at ICU admission was a less important predictor of ICU resource use than the need for specific treatment modalities. Hemato-oncological and nononcological patients with similar Simplified Acute Physiology Score values at admission had the same ICU mortality. In hemato-oncological patients, improvement of organ function within the first 48 hours of the ICU stay was the best predictor of 28-day survival. CONCLUSION: The presence of a hemato-oncological disease per se is associated with higher ICU resource use but not with increased mortality. If withdrawal of treatment is considered, the decision should be based not on admission parameters but on the evolution of organ dysfunction during the ICU stay.
Abstract:
In the Lower Mekong Basin (LMB), the extraordinary pace of economic development and growth conflicts with environmental protection. On the basis of the Watershed Classification Project (WSCP) and the inclusion of a digital terrain model (DTM) for the entire LMB, a potential degradation risk was derived for each land unit. The risks were grouped into five classes, of which classes one and two are considered critical with regard to soil erosion once the land is cleared of natural vegetation. In practical use, the database has enormous potential for further spatial analysis in combination with other datasets; for example, the NCCR North-South uses the WSCP within two research projects.
Abstract:
Objective. This research study had two goals: (1) to describe resource consumption patterns for Medi-Cal children with cystic fibrosis (CF), and (2) to explore the feasibility, from a rate design perspective, of developing specialized managed care plans for such a special needs population. Background. Children with special health care needs (CSHN) comprise about 2% of the California Medicaid pediatric population. CSHN have rare but serious health problems, such as cystic fibrosis. Medicaid programs, including Medi-Cal, are enrolling more and more beneficiaries in managed care to control costs. CSHN, however, do not fit the wellness model underlying most managed care plans. Child health advocates believe that both efficiency and quality will suffer if CSHN are removed from regionalized special care centers and scattered among general purpose plans. They believe that CSHN should be "carved out" from enrollment in general plans. One alternative is the Specialized Managed Care Plan, tailored for CSHN. Methods. The study population consisted of children under age 21 with CF who were eligible for Medi-Cal and the California Children's Services (CCS) program during 1991. Health Care Financing Administration (HCFA) Medicaid Tape-to-Tape data were analyzed as part of a California Children's Hospital Association (CCHA) project. Results. Mean Medi-Cal expenditures per month enrolled were $2,302 for the 457 CF children, compared with about $1,270 for all 47,000 CCS special needs children and roughly $60 for almost 2.6 million "regular needs" children. For CF children, inpatient care (80%) and outpatient drugs (9%) were the major cost drivers, with all outpatient visits comprising only 2% of expenditures. About one-third of CF children were eligible due to AFDC (Aid to Families with Dependent Children). Age group explained about 17% of all expenditure variation. Regression analysis was used to select the best capitation rate structure (rate cells by age and eligibility group). Sensitivity analysis estimated moderate financial risk for a statewide plan (360 enrollees), but severe risk for single-county implementation due to small numbers of children. Conclusions. Study results support the carve-out of CSHN due to their unique expenditure patterns. The Specialized Managed Care Plan concept appears feasible from a rate design perspective, given sufficient enrollees.
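As an illustrative sketch (not the dissertation's actual model), a rate structure with cells by age and eligibility group amounts to computing mean expenditure per enrolled month within each cell; the record layout and age breakpoints below are assumptions:

    # Sketch: per-member-per-month capitation rates from claims data, with
    # rate cells defined by age group x eligibility category.
    # Record layout and age breakpoints are illustrative assumptions.
    from collections import defaultdict

    def age_group(age):
        return "0-4" if age < 5 else "5-12" if age < 13 else "13-20"

    def capitation_rates(records):
        """records: iterable of dicts with 'age', 'eligibility',
        'expenditure' (total dollars), and 'months_enrolled'."""
        spend = defaultdict(float)
        months = defaultdict(float)
        for r in records:
            cell = (age_group(r["age"]), r["eligibility"])
            spend[cell] += r["expenditure"]
            months[cell] += r["months_enrolled"]
        # Mean expenditure per enrolled month is the candidate rate per cell.
        return {cell: spend[cell] / months[cell] for cell in spend}

    # One child with $27,624 in claims over 12 enrolled months yields a
    # $2,302 per-month rate for their cell, matching the mean quoted above.
    print(capitation_rates([{"age": 8, "eligibility": "AFDC",
                             "expenditure": 27624.0, "months_enrolled": 12}]))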