865 results for Hierarchical sampling


Relevance:

20.00%

Publisher:

Abstract:

Determination of somatic cell count (SCC) is used worldwide in dairy practice to describe the hygienic status of the milk and the udder health of cows. When SCC is tested at the quarter level to detect single quarters with high SCC, foremilk samples taken after prestimulation (i.e. cleaning of the udder) are mostly used for practical reasons. However, SCC usually differs between milk fractions. Therefore, the goal of this study was to investigate the use of foremilk samples for estimating total quarter SCC. A total of 378 milkings in 19 dairy cows were performed with a special milking device to drain quarter milk separately. Foremilk samples were taken after udder stimulation and before cluster attachment. SCC was measured in foremilk samples and in total quarter milk. Total quarter milk SCC could not be predicted precisely from foremilk SCC measurements. At relatively high foremilk SCC levels (>300 x 10³ cells/ml), foremilk SCC was higher than total quarter milk SCC. At around (50-300) x 10³ cells/ml, foremilk and total quarter SCC did not differ considerably. Most interestingly, if foremilk SCC was lower than 50 x 10³ cells/ml, the total quarter SCC was higher than the foremilk SCC. In addition, individual cows showed dramatic variations in foremilk SCC that were not well related to total quarter milk SCC. In conclusion, foremilk samples are useful for detecting high quarter milk SCC and thus recognizing possibly infected quarters, but only if precise cell counts are not required. However, foremilk samples can be deceptive if very low cell numbers are to be detected.
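
The reported relationship between foremilk SCC and total quarter SCC amounts to a simple threshold-based interpretation. The following Python sketch is purely illustrative; the thresholds come from the ranges quoted above, and the function name is hypothetical.

```python
def interpret_foremilk_scc(foremilk_scc):
    """Rough interpretation of a foremilk SCC reading (cells/ml),
    following the ranges reported above. Illustrative only."""
    if foremilk_scc > 300_000:
        # Foremilk tends to overestimate total quarter SCC in this range.
        return "high - possible infection; foremilk likely overestimates total quarter SCC"
    elif foremilk_scc >= 50_000:
        # Foremilk and total quarter SCC are reported to be similar here.
        return "moderate - foremilk approximates total quarter SCC"
    else:
        # Below 50,000 cells/ml, total quarter SCC tends to exceed foremilk SCC.
        return "low - foremilk may underestimate total quarter SCC"

print(interpret_foremilk_scc(450_000))
```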

Relevance:

20.00%

Publisher:

Abstract:

Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated with functional/physiological and biochemical data. Design-based stereology rests on a sound statistical foundation and can be used to generate accurate data in line with the principles of good laboratory practice. In addition, by adjusting the study design, an appropriate precision can be achieved to find relevant differences between groups. For a stereological assessment to succeed, detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included and, based on authentic examples, we illustrate a number of sampling principles that can be implemented to obtain properly sampled tissue blocks for various purposes.
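
One sampling principle commonly used in design-based stereology is systematic uniform random sampling of sections or tissue blocks. The sketch below is a minimal illustration under assumed parameters (section count and sampling period); it is not taken from the review.

```python
import random

def systematic_uniform_random_sample(n_items, period):
    """Systematic uniform random sampling: pick a random start within the
    first sampling period, then take every 'period'-th item thereafter."""
    start = random.randrange(period)          # random offset, uniform in [0, period)
    return list(range(start, n_items, period))

# Example: sample roughly 1 in 10 of 57 serial sections.
sections = systematic_uniform_random_sample(n_items=57, period=10)
print(sections)   # e.g. [3, 13, 23, 33, 43, 53]
```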

Relevance:

20.00%

Publisher:

Abstract:

In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection of drug residues through random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100% depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions for conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
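
The comparison of random versus risk-based (targeted) sampling can be sketched as a small Monte Carlo simulation. The prevalence, sample size, and the enrichment factor achieved by targeting suspect farms below are made-up illustration values, not the parameters of the Swiss model described above.

```python
import random

def detections(n_samples, prevalence, trials=1_000):
    """Average number of contaminated herds caught when each sampled herd
    is contaminated with probability 'prevalence'."""
    hits = sum(sum(random.random() < prevalence for _ in range(n_samples))
               for _ in range(trials))
    return hits / trials

prevalence = 0.02          # assumed prevalence of contaminated herds
enrichment = 2.0           # assumed enrichment achieved by targeting suspect farms
n = 500                    # assumed number of sampled herds

random_detect = detections(n, prevalence)
risk_based_detect = detections(n, min(1.0, prevalence * enrichment))
print(f"random: {random_detect:.1f}, risk-based: {risk_based_detect:.1f} detections")
```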

Relevance:

20.00%

Publisher:

Abstract:

In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.
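
For readers unfamiliar with the design, the sketch below draws one cycle of a balanced ranked set sample under perfect ranking; it illustrates the sampling scheme only, not the test statistics of Li and Balakrishnan (2008) or Frey et al. (2007).

```python
import random

def ranked_set_sample_cycle(population, set_size):
    """One cycle of a balanced ranked set sample: draw 'set_size' sets of
    'set_size' units, rank each set, and keep the i-th order statistic
    from the i-th set. Ranking here uses the true values (perfect ranking)."""
    sample = []
    for i in range(set_size):
        candidates = random.sample(population, set_size)
        sample.append(sorted(candidates)[i])   # i-th judgment order statistic
    return sample

population = [random.gauss(0, 1) for _ in range(1000)]
print(ranked_set_sample_cycle(population, set_size=3))
```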

Relevance:

20.00%

Publisher:

Abstract:

AIMS: Duchenne muscular dystrophy (DMD) is a muscle disease with serious cardiac complications. Changes in Ca²⁺ homeostasis and oxidative stress were recently associated with cardiac deterioration, but the cellular pathophysiological mechanisms remain elusive. We investigated whether the activity of ryanodine receptor (RyR) Ca²⁺ release channels is affected, whether changes in function are cause or consequence, and which post-translational modifications drive disease progression. METHODS AND RESULTS: Electrophysiological, imaging, and biochemical techniques were used to study RyRs in cardiomyocytes from mdx mice, an animal model of DMD. Young mdx mice show no changes in cardiac performance, but do so after ∼8 months. Nevertheless, myocytes from mdx pups exhibited exaggerated Ca²⁺ responses to mechanical stress and 'hypersensitive' excitation-contraction coupling (ECC), hallmarks of increased RyR Ca²⁺ sensitivity. Both were normalized by antioxidants and by inhibitors of NAD(P)H oxidase and CaMKII, but not by NO synthase or PKA antagonists. Sarcoplasmic reticulum Ca²⁺ load and leak were unchanged in young mdx mice. However, by the age of 4-5 months and in senescence, leak was increased and load was reduced, indicating disease progression. By this age, all pharmacological interventions listed above normalized Ca²⁺ signals and corrected changes in ECC, Ca²⁺ load, and leak. CONCLUSION: Our findings suggest that increased RyR Ca²⁺ sensitivity precedes and presumably drives the progression of dystrophic cardiomyopathy, with oxidative stress initiating its development. RyR oxidation followed by phosphorylation, first by CaMKII and later by PKA, synergistically contributes to cardiac deterioration.

Relevance:

20.00%

Publisher:

Abstract:

Information-theoretic metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, MI may lead to matching ambiguity in non-rigid registration, and maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances registration accuracy by considering tissue classification probabilities as prior information, generated by an expectation maximization (EM) algorithm. Diffeomorphic demons is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) that uses different levels of anatomical structure as prior knowledge. The proposed method is evaluated using BrainWeb synthetic data and clinical fMRI images. Both qualitative and quantitative assessments were performed, together with a sensitivity analysis with respect to segmentation error. Compared to pure intensity-based approaches that only maximize mutual information, the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
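
As a rough illustration of the point-wise mutual information underlying the proposed SPMI metric, the sketch below computes PMI from a joint intensity histogram of two images; the segmentation-derived priors and the diffeomorphic demons optimization described above are not included, and the bin count is an arbitrary choice.

```python
import numpy as np

def pointwise_mutual_information(fixed, moving, bins=32):
    """PMI for each pair of intensity bins, log p(a,b) / (p(a) p(b)),
    estimated from the joint histogram of two aligned images."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_fixed = p_joint.sum(axis=1, keepdims=True)
    p_moving = p_joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_joint / (p_fixed * p_moving))
    return np.nan_to_num(pmi, nan=0.0, neginf=0.0)

fixed = np.random.rand(64, 64)
moving = fixed + 0.05 * np.random.rand(64, 64)   # crudely "aligned" second image
print(pointwise_mutual_information(fixed, moving).shape)   # (32, 32)
```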

Relevance:

20.00%

Publisher:

Abstract:

Knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning, and image-guided interventions. This paper presents a fully automatic and robust approach for landmarking and segmentation of both the pelvis and the femur in a conventional AP X-ray. Our approach is based on random forest regression and hierarchical sparse shape composition. Experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.3 mm for the femur and 2.2 mm for the pelvis, both with success rates of around 98%. Compared to existing methods, our approach exhibits better performance in both robustness and accuracy.
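
To convey the general flavour of random forest regression for landmark prediction, the sketch below trains a scikit-learn regressor on synthetic stand-in features; it is not the authors' implementation, which additionally relies on hierarchical sparse shape composition, and the feature dimensionality and split sizes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for image features and 2-D landmark coordinates.
rng = np.random.default_rng(0)
features = rng.random((436, 64))            # one feature vector per X-ray (assumed)
landmarks = rng.random((436, 2)) * 100.0    # (x, y) position of one landmark in mm

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[:400], landmarks[:400])

predicted = model.predict(features[400:])
errors = np.linalg.norm(predicted - landmarks[400:], axis=1)
print(f"mean point error: {errors.mean():.2f} mm")
```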

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman's action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user's internal cognitive world, allowing for predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found in infusion pumps used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.
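
A hierarchical task analysis can be represented as a simple task tree. The example below is a hypothetical decomposition of a generic infusion-pump programming task, invented for illustration; it is not one of the six pump analyses from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node in a hierarchical task analysis: a goal and its subtasks."""
    name: str
    subtasks: list = field(default_factory=list)

# Hypothetical HTA for programming an infusion (illustrative only).
hta = Task("Programme infusion", [
    Task("Power on pump"),
    Task("Enter infusion parameters", [
        Task("Select drug"),
        Task("Set rate"),
        Task("Set volume to be infused"),
    ]),
    Task("Confirm and start infusion"),
])

def print_hta(task, depth=0):
    print("  " * depth + task.name)
    for sub in task.subtasks:
        print_hta(sub, depth + 1)

print_hta(hta)
```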

Relevance:

20.00%

Publisher:

Abstract:

We present results from an intercomparison program of CO₂, δ(O₂/N₂) and δ¹³CO₂ measurements on atmospheric flask samples. Flask samples are collected on a bi-weekly basis at the High Altitude Research Station Jungfraujoch in Switzerland for three European laboratories: the University of Bern, Switzerland; the University of Groningen, the Netherlands; and the Max Planck Institute for Biogeochemistry in Jena, Germany. Almost 4 years of measurements of CO₂, δ(O₂/N₂) and δ¹³CO₂ are compared in this paper to assess the measurement compatibility of the three laboratories. While the average difference between the CO₂ measurements of the Bern and Jena laboratories meets the compatibility goal defined by the World Meteorological Organization, the standard deviation of the average differences between all laboratories is not within the required goal. However, the derived annual trends and seasonalities are the same within their estimated uncertainties. For δ(O₂/N₂), significant differences are observed between the three laboratories. The comparison of δ¹³CO₂ yields the least compatible results, and the required goals are not met between the three laboratories. Our study shows the importance of regular intercomparison exercises for identifying potential biases between laboratories and the need to improve the quality of atmospheric measurements.
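
The compatibility assessment described above comes down to comparing the mean and scatter of paired differences against an agreed goal. The sketch below illustrates that arithmetic with made-up CO₂ flask values and an assumed goal; the actual WMO goals and laboratory data are given in the paper.

```python
import numpy as np

# Hypothetical co-located CO2 flask measurements (ppm) from two laboratories.
lab_a = np.array([410.12, 411.30, 409.85, 412.01, 410.66])
lab_b = np.array([410.20, 411.18, 409.97, 412.15, 410.58])

diff = lab_a - lab_b
goal = 0.1                         # assumed compatibility goal in ppm
print(f"mean difference: {diff.mean():+.3f} ppm, sd: {diff.std(ddof=1):.3f} ppm")
print("within goal" if abs(diff.mean()) <= goal else "outside goal")
```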

Relevance:

20.00%

Publisher:

Abstract:

Previous research has shown that motion imagery draws on the same neural circuits that are involved in the perception of motion, thus leading to a motion aftereffect (Winawer et al., 2010). Imagined stimuli can induce a shift in participants' psychometric functions similar to that produced by neural adaptation to a perceived stimulus. However, these studies have been criticized on the grounds that they fail to exclude the possibility that the subjects might have guessed the experimental hypothesis and behaved accordingly (Morgan et al., 2012). In particular, the authors claim that participants can adopt arbitrary response criteria, which results in changes of the central tendency μ of the psychometric curves similar to those shown by Winawer et al. (2010).
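
The argument turns on what shifts the central tendency μ of the psychometric function. As a point of reference, a standard cumulative-Gaussian parameterization is given below; this is a generic textbook form, not a formula taken from either study.

```latex
\Psi(x) = \gamma + (1 - \gamma - \lambda)\,\Phi\!\left(\frac{x - \mu}{\sigma}\right)
```

Here Φ is the standard normal CDF, μ is the point of subjective equality, σ controls the slope, and γ and λ are the guess and lapse rates; both genuine adaptation and a shifted response criterion would appear as a change in the fitted μ.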

Relevance:

20.00%

Publisher:

Abstract:

Ecological networks are typically complex constructions of species and their interactions. During the last decade, the study of networks has moved from static to dynamic analyses, and has attained a deeper insight into their internal structure, heterogeneity, and temporal and spatial resolution. Here, we review, discuss and suggest research lines in the study of the spatio-temporal heterogeneity of networks and their hierarchical nature. We use case study data from two well-characterized model systems (the food web in Broadstone Stream in England and the pollination network at Zackenberg in Greenland), which are complemented with additional information from other studies. We focus upon eight topics: temporal dynamics; space-for-time substitutions; linkage constraints; habitat borders; network modularity; individual-based networks; invasions of networks; and super networks that integrate different network types. Few studies have explicitly examined temporal change in networks, and we present examples that span from daily to decadal change: a common pattern that we see is a stable core surrounded by a group of dynamic, peripheral species, which, in pollinator networks, enter the web via preferential linkage to the most generalist species. To some extent, temporal and spatial scales are interchangeable (i.e. networks exhibit ‘ergodicity’) and we explore how space-for-time substitutions can be used in the study of networks. Network structure is commonly constrained by phenological uncoupling (a temporal phenomenon), abundance, body size and population structure. Some potential links are never observed, that is they are ‘forbidden’ (fully constrained) or ‘missing’ (a sampling effect), and their absence can be just as ecologically significant as their presence. Spatial habitat borders can add heterogeneity to network structure, but their importance has rarely been studied: we explore how habitat generalization can be related to other resource dimensions. Many networks are hierarchically structured, with modules forming the basic building blocks, which can result in self-similarity. Scaling down from networks of species reveals another, finer-grained level of individual-based organization, the ecological consequences of which have yet to be fully explored. The few studies of individual-based ecological networks that are available suggest the potential for large intraspecific variance and, in the case of food webs, strong size-structuring. However, such data are still scarce and more studies are required to link individual-level and species-level networks. Invasions by alien species can be tracked by following the topological ‘career’ of the invader as it establishes itself within a network, with potentially important implications for conservation biology. Finally, by scaling up to a higher level of organization, it is possible to combine different network types (e.g. food webs and mutualistic networks) to form super networks, and this new approach has yet to be integrated into mainstream ecological research. We conclude by listing a set of research topics that we see as emerging candidates for ecological network studies in the near future.
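
One of the topics above, network modularity, can be computed directly from interaction data. The sketch below uses networkx on a toy graph and is purely illustrative; it is not an analysis of the Broadstone Stream or Zackenberg networks.

```python
import networkx as nx
from networkx.algorithms import community

# Toy interaction network (nodes are species, edges are interactions).
G = nx.Graph([
    ("plant1", "bee1"), ("plant1", "bee2"), ("plant2", "bee1"),
    ("plant3", "fly1"), ("plant4", "fly1"), ("plant3", "fly2"),
])

# Detect modules and compute the modularity Q of that partition.
modules = community.greedy_modularity_communities(G)
Q = community.modularity(G, modules)
print([sorted(m) for m in modules], f"Q = {Q:.2f}")
```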

Relevance:

20.00%

Publisher:

Abstract:

Most statistical analysis, in theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using small-sample, repeated-measures, normally distributed growth-curve data is presented.
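
As a toy illustration of the kind of Gibbs sampling computation referred to above, the sketch below fits a simple normal hierarchical (random-intercept) model to simulated repeated measures. It is a generic example, not the growth-curve application from the study: the variances are treated as known, the prior on the population mean is flat, and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated repeated measures: m subjects, n observations each.
m, n = 10, 5
true_mu, tau2, sigma2 = 2.0, 1.0, 0.5          # assumed (known) variances for simplicity
theta_true = rng.normal(true_mu, np.sqrt(tau2), size=m)
y = rng.normal(theta_true[:, None], np.sqrt(sigma2), size=(m, n))

# Gibbs sampler for y_ij ~ N(theta_i, sigma2), theta_i ~ N(mu, tau2),
# flat prior on mu, variances known.
n_iter = 2000
mu = 0.0
mu_draws = np.empty(n_iter)
for it in range(n_iter):
    # Update each subject effect from its normal full conditional.
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * y.mean(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # Update the population mean given the subject effects.
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / m))
    mu_draws[it] = mu

print(f"posterior mean of mu: {mu_draws[500:].mean():.2f} (true {true_mu})")
```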

Relevance:

20.00%

Publisher:

Abstract:

Time-based localization techniques such as multilateration are favoured for positioning with wide-band signals. Applying the same techniques with narrow-band signals such as GSM is far less straightforward. The process is challenged by the need for synchronization accuracy and timestamp resolution, both in the nanosecond range. We propose approaches to deal with both challenges. On the one hand, we introduce a method to eliminate the negative effect of synchronization offset on time measurements. On the other hand, we obtain timestamps with nanosecond accuracy by using timing information from the signal processing chain. For a set of experiments, ranging from sub-urban to indoor environments, we show that our proposed approaches improve the localization accuracy of TDOA (time difference of arrival) approaches by several factors. We even demonstrate errors as small as 10 meters for outdoor settings with narrow-band signals.
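
The core of TDOA multilateration is solving for the position that best matches the measured range differences. The sketch below, using SciPy, is a minimal nonlinear least-squares illustration with simulated anchors and an arbitrary timing-noise level; it does not reproduce the synchronization-offset correction or the signal-chain timestamping proposed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                                  # speed of light (m/s)
anchors = np.array([[0.0, 0.0], [800.0, 0.0], [400.0, 700.0], [0.0, 600.0]])
true_pos = np.array([320.0, 250.0])

# Simulated TDOAs relative to the first anchor, with ~3 ns timing noise.
ranges = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = (ranges[1:] - ranges[0]) / C + np.random.normal(0, 3e-9, size=3)

def residuals(p):
    r = np.linalg.norm(anchors - p, axis=1)
    return (r[1:] - r[0]) / C - tdoa

estimate = least_squares(residuals, x0=np.array([400.0, 300.0])).x
print(f"position error: {np.linalg.norm(estimate - true_pos):.1f} m")
```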