133 results for image noise modeling


Relevance: 20.00%

Abstract:

Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, limited temporal and financial resources, and high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling: a user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed, and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
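
As an illustration of this ranking loop, the sketch below implements one posterior probability-based heuristic ("breaking ties", which ranks pixels by the margin between their two most probable classes) on top of a generic classifier. It is a minimal sketch under stated assumptions, not code from the paper: the random forest classifier, the pool handling, and the batch size are arbitrary choices.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def breaking_ties_query(model, X_pool, batch_size=10):
    """Rank unlabeled samples by the margin between their two most
    probable classes; a smaller margin means higher uncertainty."""
    proba = model.predict_proba(X_pool)       # (n_samples, n_classes)
    top2 = np.sort(proba, axis=1)[:, -2:]     # two largest posteriors
    margin = top2[:, 1] - top2[:, 0]
    return np.argsort(margin)[:batch_size]    # most uncertain first

def active_learning_step(X_train, y_train, X_pool, y_pool, batch_size=10):
    """One iteration: train, rank the pool, and move the most uncertain
    samples (labeled here by the oracle array y_pool, standing in for
    the human user) into the training set."""
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    idx = breaking_ties_query(model, X_pool, batch_size)
    X_train = np.vstack([X_train, X_pool[idx]])
    y_train = np.concatenate([y_train, y_pool[idx]])
    keep = np.ones(len(X_pool), dtype=bool)
    keep[idx] = False
    return X_train, y_train, X_pool[keep], y_pool[keep]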

Relevance: 20.00%

Abstract:

BACKGROUND: To test the hypothesis that intervals with superior beat-to-beat coronary artery repositioning precision exist in the cardiac cycle, to design a coronary MR angiography (MRA) methodology in response, and to ascertain its performance.

METHODS: Coronary repositioning precision in consecutive heartbeats was measured on x-ray coronary angiograms of 17 patients, and the periods with the highest repositioning precision were identified. In response, the temporal order of the coronary MRA pulse sequence elements was modified so that the T2-prep now follows (T2-post) rather than precedes the imaging part of the sequence. The performance of T2-post was quantitatively compared (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], vessel sharpness) with that of T2-prep in vivo.

RESULTS: Coronary repositioning precision is <1 mm at peak systole and in mid diastole. When comparing systolic T2-post to diastolic T2-prep, CNR and vessel sharpness remained unchanged (both P = NS), but SNR for muscle and blood increased by 104% and 36%, respectively (both P < 0.05).

CONCLUSION: Windows with improved coronary repositioning precision exist in the cardiac cycle: one at peak systole and one in mid diastole. Peak-systolic imaging necessitates a redesign of conventional coronary MRA pulse sequences and leads to image quality very similar to that of conventional mid-diastolic data acquisition but with improved SNR. J. Magn. Reson. Imaging 2015;41:1251-1258. © 2014 Wiley Periodicals, Inc.
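
For orientation, the comparison metrics are conventionally defined along the following lines (a generic textbook formulation; the abstract does not state the exact regions of interest or noise estimate used in the study):

\[
\mathrm{SNR} = \frac{\bar S_{\mathrm{ROI}}}{\sigma_{\mathrm{noise}}},
\qquad
\mathrm{CNR} = \mathrm{SNR}_{\mathrm{blood}} - \mathrm{SNR}_{\mathrm{muscle}}
             = \frac{\bar S_{\mathrm{blood}} - \bar S_{\mathrm{muscle}}}{\sigma_{\mathrm{noise}}},
\]

where \(\bar S\) denotes the mean signal in a region of interest and \(\sigma_{\mathrm{noise}}\) the standard deviation of the background noise.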

Relevance: 20.00%

Abstract:

Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring, and knowledge extraction. The importance of imaging in the differential analysis of proteomic experiments has already been established with two-dimensional gels and can now be foreseen with MS images. We present MSight, new software designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.
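
The underlying idea of turning an LC-MS run into an image can be pictured as binning peaks on a retention-time x m/z grid. The sketch below is a hypothetical illustration of that idea only; it says nothing about how MSight itself is implemented, and the grid size and log scaling are arbitrary choices.

import numpy as np

def lcms_to_image(peaks, rt_range, mz_range, shape=(512, 512)):
    """Accumulate (retention_time, m/z, intensity) peaks into a 2-D
    intensity image: rows = retention time, columns = m/z."""
    (rt_lo, rt_hi), (mz_lo, mz_hi) = rt_range, mz_range
    image = np.zeros(shape)
    for rt, mz, intensity in peaks:
        if rt_lo <= rt < rt_hi and mz_lo <= mz < mz_hi:
            row = int((rt - rt_lo) / (rt_hi - rt_lo) * shape[0])
            col = int((mz - mz_lo) / (mz_hi - mz_lo) * shape[1])
            image[row, col] += intensity
    # Log-scale for display, since peak intensities span many orders
    # of magnitude.
    return np.log1p(image)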

Relevance: 20.00%

Abstract:

1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data-sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately.

2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment, where models were calibrated with the original, accurate data, and (2) an error treatment, where the data were first degraded spatially to simulate locational error. To introduce this error, we perturbed each coordinate by a random offset drawn from a normal distribution with a mean of zero and a standard deviation of 5 km (see the sketch below). We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions.

3. Locational error in occurrences reduced model performance in three of these regions; nevertheless, relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, performed best in the face of locational errors: the results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors.

4. Synthesis and applications. To use the vast array of occurrence data that currently exists for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
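
The error treatment in point 2 can be reproduced along the following lines (a minimal sketch assuming projected coordinates in kilometres and independent perturbation of the x and y coordinates, neither of which is spelled out in the abstract):

import numpy as np

rng = np.random.default_rng(42)

def degrade_occurrences(coords_km, sd_km=5.0):
    """Simulate locational error: shift each coordinate by a random
    offset drawn from a normal distribution with mean zero and the
    given standard deviation (in km)."""
    return coords_km + rng.normal(loc=0.0, scale=sd_km, size=coords_km.shape)

# Example: degrade 40 hypothetical occurrence points.
occurrences = rng.uniform(0.0, 100.0, size=(40, 2))
degraded = degrade_occurrences(occurrences, sd_km=5.0)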

Relevance: 20.00%

Abstract:

Image quality in magnetic resonance imaging (MRI) is considerably affected by motion, which is therefore one of the most common sources of artifacts in contemporary cardiovascular MRI. Such artifacts may easily lead to misinterpretation of the images and a consequent loss of diagnostic quality. Hence, there is considerable research interest in strategies that help to overcome these limitations at minimal cost in time, spatial resolution, temporal resolution, and signal-to-noise ratio. This review summarizes and discusses the three principal sources of motion: the beating heart, the breathing lungs, and bulk patient movement. This is followed by a comprehensive overview of commonly used compensation strategies for these different types of motion. Finally, a summary and an outlook are provided.

Relevance: 20.00%

Abstract:

The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model of the network controlling the CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, analysis, and simulation of logical regulatory graphs.
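
To make the notions of asynchronous state transition graph and terminal strongly connected component concrete, the toy below builds the asynchronous dynamics of a two-gene Boolean switch and extracts its attractors. It illustrates the general formalism only, not GINsim's algorithms; the network and its rules are invented for the example.

import networkx as nx
from itertools import product

# Toy Boolean network: two mutually inhibiting genes (a bistable switch).
rules = {
    0: lambda s: int(not s[1]),   # gene 0 is ON unless gene 1 is ON
    1: lambda s: int(not s[0]),   # gene 1 is ON unless gene 0 is ON
}

def async_stg(rules):
    """Build the asynchronous state transition graph: from each state,
    update one component at a time."""
    n = len(rules)
    g = nx.DiGraph()
    for state in product([0, 1], repeat=n):
        g.add_node(state)
        for i, rule in rules.items():
            target = rule(state)
            if target != state[i]:
                g.add_edge(state, state[:i] + (target,) + state[i + 1:])
    return g

g = async_stg(rules)
# Attractors = terminal strongly connected components, i.e. components
# with no outgoing edge in the condensation of the graph.
cond = nx.condensation(g)
attractors = [cond.nodes[c]["members"] for c in cond if cond.out_degree(c) == 0]
print(attractors)   # the two stable states: {(1, 0)} and {(0, 1)}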

Relevance: 20.00%

Abstract:

The image a country enjoys in the world matters in several respects. It can support the marketing of exported goods and services, it plays a particular role in tourism and economic promotion, and it can also contribute to the relations a country maintains with other countries at the political, economic, or cultural level. The image of Switzerland has been the subject of studies in numerous countries, including the United States, Germany, and China, carried out among representative samples of the population as well as among groups of opinion leaders, and this volume presents a synthesis of the main results of these studies. After a description of the overall image of Switzerland among the people surveyed and an analysis of the associations evoked by the mention of Switzerland, a substantial part is devoted to the dimensions that characterize the country's image, distinguishing in particular between dimensions related to Switzerland as a sociocultural space and dimensions related to economic aspects. Finally, a closing chapter analyzes the impact of events that marked Swiss current affairs, such as the grounding of Swissair, on the image of Switzerland in the countries studied.

Relevance: 20.00%

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. The traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several newer approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion (AIC) is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, to be interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The approach is illustrated with the analysis of a dataset of 1500 volatile organic compound exposure levels collected by the Institute for work and health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data, and it permits evaluating, to a certain extent, the model selection uncertainty that is seldom acknowledged in current practice.
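
The weighting step is compact enough to sketch (a generic implementation of the Burnham and Anderson recipe; the AIC values below are invented placeholders, not results from the Institute's dataset):

import numpy as np

def akaike_weights(aic_values):
    """Convert AIC values into Akaike weights: the probability of each
    model being the best approximating model within the set."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()            # AIC differences
    rel_lik = np.exp(-0.5 * delta)     # relative likelihoods
    return rel_lik / rel_lik.sum()

def model_averaged_prediction(predictions, weights):
    """Multimodel prediction: weight each model's prediction by its
    Akaike weight and sum."""
    return np.average(np.asarray(predictions), axis=0, weights=weights)

# Hypothetical AICs for three candidate exposure models.
w = akaike_weights([1412.3, 1414.1, 1420.8])
print(w.round(3))   # [0.704 0.286 0.01 ]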

Relevance: 20.00%

Abstract:

Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on the study objective, the attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting with overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinion that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM-building approaches best advances our knowledge of current and future species ranges.
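
The underfit/overfit trade-off can be caricatured on synthetic data by varying the flexibility of a single response curve (a toy example, not any of the SDM methods discussed; the unimodal "niche" function and the polynomial degrees are arbitrary choices):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic data: occurrence probability is a unimodal function of a
# single environmental gradient, peaking at env = 0.
env = rng.uniform(-3.0, 3.0, size=(400, 1))
occ = rng.random(400) < np.exp(-env[:, 0] ** 2)

for degree in (1, 2, 8):
    model = make_pipeline(PolynomialFeatures(degree),
                          LogisticRegression(max_iter=1000))
    score = cross_val_score(model, env, occ, cv=5).mean()
    # degree 1 underfits (a monotone curve cannot capture the niche),
    # degree 8 risks chasing noise; degree 2 matches the true shape.
    print(f"degree {degree}: mean CV accuracy {score:.3f}")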