905 results for sampling effort
Abstract:
Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using small-scale SAR data from Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm² to 100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values at these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied; however, the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
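For orientation, a minimal sketch of how such SAR model comparison might be set up with non-linear regression and an information criterion. The area/richness values, starting parameters, and the AIC comparison are illustrative only (not the study's data); the power, logarithmic, and Michaelis-Menten functions are shown, and the quadratic power and Lomolino functions would be added analogously.

# Illustrative comparison of species-area relationship (SAR) models fitted by
# non-linear least squares and ranked by AIC. All data values are invented.
import numpy as np
from scipy.optimize import curve_fit

area = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])   # plot area, m^2
species = np.array([3, 7, 14, 28, 52, 90, 140])                # hypothetical richness

def power(A, c, z):
    return c * A**z

def logarithmic(A, b0, b1):
    return b0 + b1 * np.log10(A)

def michaelis_menten(A, smax, b):
    return smax * A / (b + A)

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

models = {
    "power": (power, [10.0, 0.3]),
    "logarithmic": (logarithmic, [50.0, 20.0]),
    "Michaelis-Menten": (michaelis_menten, [150.0, 1.0]),
}

for name, (f, p0) in models.items():
    params, _ = curve_fit(f, area, species, p0=p0, maxfev=20000)
    print(name, "AIC =", round(aic(species, f(area, *params), len(params)), 1))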
Abstract:
BACKGROUND: Stress-related hypercoagulability might link job stress with atherosclerosis. PURPOSE: This paper aims to study whether overcommitment, effort-reward imbalance, and the overcommitment by effort-reward imbalance interaction relate to an exaggerated procoagulant stress response. METHODS: We assessed job stress in 52 healthy teachers (49 ± 8 years, 63% women) at study entry and, after a mean follow-up of 21 ± 4 months, when they underwent an acute psychosocial stressor and had coagulation measures determined in plasma. In order to increase the reliability of the job stress measures, entry and follow-up scores of overcommitment and of effort-reward imbalance were summed into total scores. RESULTS: During recovery from stress, elevated overcommitment correlated with a larger D-dimer increase and a smaller fibrinogen decrease. In contrast, overcommitment was not associated with coagulation changes from pre-stress to immediately post-stress. Effort-reward imbalance and the interaction between overcommitment and effort-reward imbalance did not correlate with stress-induced changes in coagulation measures. CONCLUSIONS: Overcommitment predicted acute stress-induced hypercoagulability, particularly during the recovery period.
Impact of Orthorectification and Spatial Sampling on Maximum NDVI Composite Data in Mountain Regions
Abstract:
This review of late-Holocene palaeoclimatology represents the results from a PAGES/CLIVAR Intersection Panel meeting that took place in June 2006. The review is in three parts: the principal high-resolution proxy disciplines (trees, corals, ice cores and documentary evidence), emphasizing current issues in their use for climate reconstruction; the various approaches that have been adopted to combine multiple climate proxy records to provide estimates of past annual-to-decadal timescale Northern Hemisphere surface temperatures and other climate variables, such as large-scale circulation indices; and the forcing histories used in climate model simulations of the past millennium. We discuss the need to develop a framework through which current and new approaches to interpreting these proxy data may be rigorously assessed using pseudo-proxies derived from climate model runs, where the 'answer' is known. The article concludes with a list of recommendations. First, more raw proxy data are required from the diverse disciplines and from more locations, as well as replication, for all proxy sources, of the basic raw measurements to improve absolute dating, and to better distinguish the proxy climate signal from noise. Second, more effort is required to improve the understanding of what individual proxies respond to, supported by more site measurements and process studies. These activities should also be mindful of the correlation structure of instrumental data, indicating which adjacent proxy records ought to be in agreement and which not. Third, large-scale climate reconstructions should be attempted using a wide variety of techniques, emphasizing those for which quantified errors can be estimated at specified timescales. Fourth, a greater use of climate model simulations is needed to guide the choice of reconstruction techniques (the pseudo-proxy concept) and possibly help determine where, given limited resources, future sampling should be concentrated.
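To make the pseudo-proxy concept concrete, a toy sketch under stated assumptions: a known "model truth" temperature series is degraded into noisy pseudo-proxies, a simple composite-plus-scale reconstruction is applied, and its skill is scored against the known truth. The series, the signal-to-noise ratio, the calibration window, and the reconstruction method are all invented for illustration and are not taken from the review.

# Illustrative pseudo-proxy experiment (all data synthetic): degrade a known
# "model truth" temperature series into noisy pseudo-proxies, reconstruct it
# with a simple composite-plus-scale approach, and score against the truth.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 2000)
truth = 0.2 * np.sin(2 * np.pi * years / 200.0) + rng.normal(0, 0.1, years.size)

snr = 0.5                               # assumed proxy signal-to-noise ratio
n_proxies = 15
noise_sd = truth.std() / snr
proxies = truth[None, :] + rng.normal(0, noise_sd, (n_proxies, years.size))

composite = proxies.mean(axis=0)
calib = years >= 1850                   # "instrumental" calibration window
scale = truth[calib].std() / composite[calib].std()
reconstruction = (composite - composite[calib].mean()) * scale + truth[calib].mean()

print("correlation with truth (pre-1850):",
      round(np.corrcoef(reconstruction[~calib], truth[~calib])[0, 1], 2))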
Abstract:
In this paper we first show that the gains achievable by integrating pricing and inventory control are usually small for classical demand functions. We then introduce reference price models and demonstrate that for this class of demand functions the benefits of integration with inventory control are substantially increased due to the price dynamics. We also provide some analytical results for this more complex model. We thus conclude that integrated pricing/inventory models could repeat the success of revenue management in practice if reference price effects are included in the demand model and the properties of this new model are better understood.
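For intuition about the price dynamics involved, a minimal sketch of an exponential-smoothing reference-price demand model of the general kind discussed here. The functional form, parameter names, and numbers are illustrative assumptions, not taken from the paper.

# Illustrative reference-price demand dynamics: customers form a reference
# price r by exponential smoothing of past prices, and demand responds to the
# gap between the posted price and the reference price. Numbers are invented.
a, b, c = 100.0, 2.0, 1.5      # base demand, price sensitivity, reference-price sensitivity
alpha = 0.8                    # memory of the reference-price process
r = 20.0                       # initial reference price

prices = [20.0] * 5 + [15.0] * 10          # a one-time markdown
for t, p in enumerate(prices):
    demand = a - b * p + c * (r - p)       # demand gain when priced below the reference
    print(f"t={t:2d}  price={p:5.1f}  reference={r:5.2f}  demand={demand:6.1f}")
    r = alpha * r + (1 - alpha) * p        # reference price adapts toward the posted price

The simulated markdown first boosts demand strongly (price below reference) and then loses its effect as the reference price erodes, which is the mechanism that makes integrating pricing with inventory control more valuable here than under classical demand functions.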
Abstract:
In this paper we analyze a dynamic agency problem where contracting parties do not know the agent's future productivity at the beginning of the relationship. We consider a two-period model where both the agent and the principal observe the agent's second-period productivity at the end of the first period. This observation is assumed to be non-verifiable information. We compare long-term contracts with short-term contracts with respect to their suitability to motivate effort in both periods. On the one hand, short-term contracts allow for a better fine-tuning of second-period incentives as they can be aligned with the agent's second-period productivity. On the other hand, in short-term contracts first-period effort incentives might be distorted as contracts have to be sequentially optimal. Hence, the difference between long-term and short-term contracts is characterized by a trade-off between inducing effort in the first and in the second period. We analyze the determinants of this trade-off and demonstrate its implications for performance measurement and information system design.
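For intuition, a stylized illustration (our notation, not the authors' model) of the fine-tuning advantage of short-term contracts: suppose period-t output is linear in effort and productivity, wages are linear in output, and effort costs are quadratic,
\[
x_t = \theta_t e_t + \varepsilon_t, \qquad
w_t = \alpha_t + \beta_t x_t, \qquad
e_t^{*} = \arg\max_{e}\ \beta_t \theta_t e - \tfrac{1}{2}e^{2} = \beta_t \theta_t .
\]
Under sequential (short-term) contracting, the second-period bonus rate \(\beta_2\) can be chosen after \(\theta_2\) is observed, so induced effort \(\beta_2(\theta_2)\,\theta_2\) is matched to productivity; under a long-term contract \(\beta_2\) is fixed ex ante, but first-period incentives are not constrained to be sequentially optimal. This sketch is only meant to make the stated trade-off concrete.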
Abstract:
Determination of somatic cell count (SCC) is used worldwide in dairy practice to describe the hygienic status of the milk and the udder health of cows. When SCC is tested at the quarter level to detect single quarters with high SCC, for practical reasons mostly foremilk samples taken after prestimulation (i.e. cleaning of the udder) are used. However, SCC usually differs between milk fractions. Therefore, the goal of this study was to investigate the use of foremilk samples for the estimation of total quarter SCC. A total of 378 milkings in 19 dairy cows were performed with a special milking device that drains quarter milk separately. Foremilk samples were taken after udder stimulation and before cluster attachment. SCC was measured in foremilk samples and in total quarter milk. Total quarter milk SCC could not be predicted precisely from foremilk SCC measurements. At relatively high foremilk SCC levels (>300 × 10³ cells/ml), foremilk SCC was higher than total quarter milk SCC. At around (50-300) × 10³ cells/ml, foremilk and total quarter SCC did not differ considerably. Most interestingly, if foremilk SCC was lower than 50 × 10³ cells/ml, the total quarter SCC was higher than the foremilk SCC. In addition, individual cows showed dramatic variations in foremilk SCC that were not well related to total quarter milk SCC. In conclusion, foremilk samples are useful to detect high quarter milk SCC and thus to recognize possibly infected quarters, but only if precise cell counts are not required. However, foremilk samples can be deceptive if very low cell numbers are to be detected.
Abstract:
Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design, an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment, detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and, based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes.
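As background for one of the sampling principles commonly used in design-based stereology, a minimal sketch of systematic uniform random sampling (SURS) of serial sections: choose a sampling interval, draw a single random start, then keep every k-th item, giving every section the same inclusion probability 1/k. The section count and interval below are invented for illustration; this is not the review's worked example.

# Illustrative systematic uniform random sampling (SURS) of tissue sections.
import random

def surs(n_items, k, rng=random):
    start = rng.randrange(k)                   # uniform random start in [0, k)
    return list(range(start, n_items, k))      # every k-th section thereafter

sections = surs(n_items=120, k=10)             # e.g. 120 serial sections, sample ~12
print("sampled section indices:", sections)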
Abstract:
In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection of drug residues through random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100% depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions for conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
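A toy Monte Carlo sketch of the comparison described above, under assumed numbers: a small fraction of herds is contaminated, a risk indicator flags herds imperfectly, and risk-based sampling draws only from flagged herds. Prevalence, sensitivity/specificity of the prior information, and sample sizes are invented and do not reproduce the Swiss model.

# Toy Monte Carlo comparison of random vs. risk-based residue sampling.
import numpy as np

rng = np.random.default_rng(1)
n_herds, prevalence, n_samples, n_runs = 10_000, 0.02, 200, 2_000
flag_sens, flag_spec = 0.7, 0.9            # assumed quality of the prior risk information

detections = {"random": [], "risk-based": []}
for _ in range(n_runs):
    contaminated = rng.random(n_herds) < prevalence
    flagged = np.where(contaminated,
                       rng.random(n_herds) < flag_sens,
                       rng.random(n_herds) < (1 - flag_spec))
    random_pick = rng.choice(n_herds, n_samples, replace=False)
    risk_pool = np.flatnonzero(flagged)
    risk_pick = rng.choice(risk_pool, min(n_samples, risk_pool.size), replace=False)
    detections["random"].append(contaminated[random_pick].sum())
    detections["risk-based"].append(contaminated[risk_pick].sum())

for name, values in detections.items():
    print(name, "mean detections per survey:", round(float(np.mean(values)), 1))

With these assumed parameters the targeted scheme detects several times more contaminated herds per survey, and its advantage shrinks as the quality of the prior information (flag_sens, flag_spec) deteriorates, mirroring the sensitivity-analysis point made in the abstract.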
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.
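For readers unfamiliar with the design, a short sketch of how a balanced ranked set sample is drawn (this illustrates the sampling scheme only, not the Li-Balakrishnan or Frey et al. test statistics): in each cycle, k independent sets of k units are drawn, each set is ranked, and the r-th order statistic is measured from the r-th set. The ranking below uses the values themselves, i.e. perfect ranking; distribution and set sizes are arbitrary.

# Illustrative construction of a balanced ranked set sample (RSS) with m cycles
# of set size k, yielding m*k measured units.
import numpy as np

def balanced_rss(population_draw, k=3, m=5, rng=None):
    rng = rng or np.random.default_rng()
    sample = []
    for _ in range(m):                      # cycles
        for r in range(k):                  # one measured unit per rank
            candidates = population_draw(k, rng)
            sample.append(np.sort(candidates)[r])   # keep the r-th order statistic
    return np.array(sample)

rss = balanced_rss(lambda n, rng: rng.normal(0, 1, n), k=3, m=5)
print(rss.round(2))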
Abstract:
The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset - the 1989-2009 period of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data. This experiment is part of the community project Intercomparison of Mid-Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape for many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
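To illustrate one common family of detection criteria of the kind such algorithms start from (a local minimum of mean sea-level pressure within a neighbourhood, below a depth threshold), a minimal sketch follows. The pressure field, neighbourhood size, and threshold are synthetic assumptions and do not correspond to any specific IMILAST team's method.

# Illustrative cyclone *detection* step: flag grid points that are local minima
# of mean sea-level pressure within a neighbourhood and lie below a threshold.
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(2)
mslp = 1012.0 + rng.normal(0, 2.0, (90, 180))      # hPa, synthetic global field
mslp[40:45, 60:65] -= 25.0                          # plant one deep low

neighbourhood_min = minimum_filter(mslp, size=9, mode="wrap")
candidates = (mslp == neighbourhood_min) & (mslp < 1000.0)
print("detected low-pressure centres (grid indices):", np.argwhere(candidates))

Differences between real algorithms arise, for example, in the variable used (pressure vs. vorticity), the neighbourhood and depth thresholds, and the subsequent tracking step, which is one source of the method-to-method spread reported above.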