830 results for Gradient-based approaches


Relevance: 30.00%

Abstract:

Groundwater management depends on knowledge of recharge rates and water fluxes within aquifers. Recharge is one of the most difficult components of the water cycle to estimate. As a result, whatever method is chosen, the estimates are subject to uncertainties that can be identified by comparison with other approaches. In this study, groundwater recharge estimates based on the water balance in the unsaturated zone are assessed. First, the approach is evaluated by comparing its results with those of another method. Then, the estimates are used as inputs to a transient groundwater flow model in order to assess how the simulated water table responds to the obtained recharge rates compared with measured levels. The results suggest good performance of the adopted approach; despite some inherent limitations, it has advantages over other methods because the required data are easier to obtain.
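
A minimal sketch of the water-balance idea (hypothetical variable names and a simple single-bucket model, not the authors' formulation): recharge is taken as the excess water that percolates once the unsaturated-zone storage is full.

```python
import numpy as np

def recharge_water_balance(precip, evapotrans, runoff, storage_capacity,
                           initial_storage=0.0):
    """Estimate daily recharge (mm) with a single-bucket unsaturated-zone balance.

    Recharge is assumed to occur only when soil storage exceeds its capacity;
    all terms are daily totals in mm.
    """
    storage = initial_storage
    recharge = np.zeros_like(precip, dtype=float)
    for t in range(len(precip)):
        storage += precip[t] - evapotrans[t] - runoff[t]
        storage = max(storage, 0.0)              # storage cannot be negative
        if storage > storage_capacity:           # excess percolates below the root zone
            recharge[t] = storage - storage_capacity
            storage = storage_capacity
    return recharge

# Example with synthetic daily data
rng = np.random.default_rng(0)
p = rng.gamma(0.5, 8.0, 365)      # precipitation
et = np.full(365, 2.5)            # evapotranspiration
q = 0.1 * p                       # surface runoff
r = recharge_water_balance(p, et, q, storage_capacity=120.0)
print(f"Annual recharge: {r.sum():.1f} mm")
```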

Relevance: 30.00%

Abstract:

Aim Conservation strategies are in need of predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach to community models that accounts for this variation and determine how well it reproduces observed properties of alpine butterfly communities. Location The western Swiss Alps. Methods We propose a new approach to process probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict community properties and assess the uncertainty in those predictions. We test the utility of our novel approach against a traditional threshold-based approach. We used mountain butterfly communities spanning a large elevation gradient as a case study and evaluated the ability of our approach to model species richness and phylogenetic diversity of communities. Results S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, a pattern consistent with environmental filtering. The prediction accuracy of community properties varied along the environmental gradient: variability in predictions of species richness was higher at low elevations, while for phylogenetic diversity it was lower. Our approach allowed mapping the variability in species richness and phylogenetic diversity projections. Main conclusions Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers of the uncertainty in the modelling results.
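
A minimal sketch of the contrast between threshold-based stacking and probabilistic stacking of SDM outputs, using synthetic occurrence probabilities (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_species = 200, 50
# Hypothetical per-site occurrence probabilities from individual SDMs
probs = rng.beta(1.0, 4.0, size=(n_sites, n_species))

# Traditional threshold-based stacking: binarise at 0.5 and sum presences
richness_threshold = (probs >= 0.5).sum(axis=1)

# Probabilistic stacking: draw many binary community realisations and
# summarise the resulting distribution of species richness per site
n_draws = 1000
draws = rng.random((n_draws, n_sites, n_species)) < probs  # Bernoulli draws
richness_draws = draws.sum(axis=2)                         # (n_draws, n_sites)
richness_mean = richness_draws.mean(axis=0)
richness_sd = richness_draws.std(axis=0)                    # input for an uncertainty map

print(richness_threshold[:5], richness_mean[:5].round(1), richness_sd[:5].round(1))
```

The per-site standard deviation across draws is what a variability map of richness (or, analogously, of phylogenetic diversity) would be built from.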

Relevance: 30.00%

Abstract:

Understanding and anticipating biological invasions can focus either on traits that favour species invasiveness or on features of the receiving communities, habitats or landscapes that promote their invasibility. Here, we address invasibility at the regional scale, testing whether some habitats and landscapes are more invasible than others by fitting models that relate alien plant species richness to various environmental predictors. We use a multi-model information-theoretic approach to assess invasibility by modelling spatial and ecological patterns of alien invasion in landscape mosaics and testing competing hypotheses of environmental factors that may control invasibility. Because invasibility may be mediated by particular characteristics of invasiveness, we classified alien species according to their C-S-R plant strategies. We illustrate this approach with a set of 86 alien species in Northern Portugal. We first focus on predictors influencing species richness and expressing invasibility and then evaluate whether distinct plant strategies respond to the same or different groups of environmental predictors. We confirmed climate as a primary determinant of alien invasions and as a primary environmental gradient determining landscape invasibility. The effects of secondary gradients were detected only when the area was sub-sampled according to predictions based on the primary gradient. Then, multiple predictor types influenced patterns of alien species richness, with some types (landscape composition, topography and fire regime) prevailing over others. Alien species richness responded most strongly to extreme land management regimes, suggesting that intermediate disturbance induces biotic resistance by favouring native species richness. Land-use intensification facilitated alien invasion, whereas conservation areas hosted few invaders, highlighting the importance of ecosystem stability in preventing invasions. Plants with different strategies exhibited different responses to environmental gradients, particularly when the variations of the primary gradient were narrowed by sub-sampling. Such differential responses of plant strategies suggest using distinct control and eradication approaches for different areas and alien plant groups.
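
A minimal sketch of a multi-model information-theoretic comparison (AIC and Akaike weights) on synthetic data; the predictor names and candidate models are hypothetical, not the study's actual variable set:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
# Hypothetical environmental predictors (standardised)
climate = rng.normal(size=n)
landuse = rng.normal(size=n)
fire = rng.normal(size=n)
richness = 5 + 2.0 * climate + 0.5 * landuse + rng.normal(scale=1.5, size=n)

# Candidate models representing competing hypotheses of invasibility drivers
candidates = {
    "climate": np.column_stack([climate]),
    "climate+landuse": np.column_stack([climate, landuse]),
    "climate+landuse+fire": np.column_stack([climate, landuse, fire]),
}

aic = {}
for name, X in candidates.items():
    res = sm.OLS(richness, sm.add_constant(X)).fit()
    aic[name] = res.aic

# Akaike weights quantify the relative support for each hypothesis
delta = {k: v - min(aic.values()) for k, v in aic.items()}
weights = {k: np.exp(-0.5 * d) for k, d in delta.items()}
total = sum(weights.values())
for k in candidates:
    print(f"{k:25s} AIC={aic[k]:7.1f}  weight={weights[k] / total:.2f}")
```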

Relevance: 30.00%

Abstract:

Aim: Climatic niche modelling of species and community distributions implicitly assumes strong and constant climatic determinism across geographic space. This assumption has, however, never been tested. We tested it by assessing how stacked species distribution models (S-SDMs) perform in predicting plant species assemblages along elevation. Location: Western Swiss Alps. Methods: Using robust presence-absence data, we first assessed the ability of topo-climatic S-SDMs to predict plant assemblages in a study area encompassing a 2800 m elevation gradient. We then assessed the relationships among several evaluation metrics and trait-based tests of community assembly rules. Results: The standard errors of individual SDMs decreased significantly towards higher elevations. Overall, the S-SDMs overpredicted richness far more than they underpredicted it and could not reproduce the humpback richness curve along elevation. Overprediction was greater at low and mid-range elevations in absolute values but greater at high elevations when standardised by the actual richness. Looking at species composition, the evaluation metrics accounting for both the presence and absence of species (overall prediction success and kappa) or focusing on correctly predicted absences (specificity) increased with increasing elevation, while the metrics focusing on correctly predicted presences (Jaccard index and sensitivity) decreased. The best overall evaluation - as driven by specificity - occurred at high elevation, where species assemblages were shown to be under significant environmental filtering of small plants. In contrast, the decreased overall accuracy in the lowlands was associated with functional patterns representing any type of assembly rule (environmental filtering, limiting similarity or null assembly). Main Conclusions: Our study reveals interesting patterns of change in S-SDM errors with changes in assembly rules along elevation. Yet, significant levels of assemblage prediction error occurred throughout the gradient, calling for further improvement of SDMs, e.g., by adding key environmental filters that act at fine scales and developing approaches to account for variations in the influence of predictors along environmental gradients.
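
A minimal sketch of the site-level evaluation metrics mentioned above (overall prediction success, kappa, sensitivity, specificity, Jaccard), computed from one observed and one predicted assemblage; data are synthetic:

```python
import numpy as np

def community_metrics(obs, pred):
    """Site-level agreement between observed and predicted assemblages.

    obs, pred: boolean vectors over the species pool for one plot.
    """
    a = np.sum(obs & pred)        # correctly predicted presences
    b = np.sum(~obs & pred)       # overpredictions (commission)
    c = np.sum(obs & ~pred)       # underpredictions (omission)
    d = np.sum(~obs & ~pred)      # correctly predicted absences
    n = a + b + c + d
    po = (a + d) / n              # overall prediction success
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return {
        "success": po,
        "kappa": (po - pe) / (1 - pe) if pe < 1 else np.nan,
        "sensitivity": a / (a + c) if a + c else np.nan,
        "specificity": d / (b + d) if b + d else np.nan,
        "jaccard": a / (a + b + c) if a + b + c else np.nan,
    }

rng = np.random.default_rng(3)
observed = rng.random(150) < 0.3
predicted = rng.random(150) < 0.4
print(community_metrics(observed, predicted))
```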

Relevance: 30.00%

Abstract:

Predictive groundwater modeling requires accurate information about aquifer characteristics. Geophysical imaging is a powerful tool for delineating aquifer properties at an appropriate scale and resolution, but it suffers from problems of ambiguity. One way to overcome such limitations is to adopt a simultaneous multitechnique inversion strategy. We have developed a methodology for aquifer characterization based on structural joint inversion of multiple geophysical data sets followed by clustering to form zones and subsequent inversion for zonal parameters. Joint inversions based on cross-gradient structural constraints require less restrictive assumptions than, say, applying predefined petrophysical relationships and generally yield superior results. This approach has, for the first time, been applied to three geophysical data types in three dimensions. A classification scheme using maximum likelihood estimation is used to determine the parameters of a Gaussian mixture model that defines zonal geometries from joint-inversion tomograms. The resulting zones are used to estimate representative geophysical parameters of each zone, which are then used for field-scale petrophysical analysis. A synthetic study demonstrated how joint inversion of seismic and radar traveltimes and electrical resistance tomography (ERT) data greatly reduces misclassification of zones (down from 21.3% to 3.7%) and improves the accuracy of retrieved zonal parameters (from 1.8% to 0.3%) compared to individual inversions. We applied our scheme to a data set collected in northeastern Switzerland to delineate lithologic subunits within a gravel aquifer. The inversion models resolve three principal subhorizontal units along with some important 3D heterogeneity. Petrophysical analysis of the zonal parameters indicated approximately 30% variation in porosity within the gravel aquifer and an increasing fraction of finer sediments with depth.
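
A minimal sketch of the zonation step only: a Gaussian mixture model, fitted by maximum likelihood, clusters co-located parameters from three (here synthetic) tomograms into zones and returns representative zonal values. It does not reproduce the joint inversion itself:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Hypothetical co-located model parameters from three jointly inverted methods:
# seismic velocity, radar slowness and electrical resistivity (one row per cell)
n_cells = 5000
true_zone = rng.integers(0, 3, n_cells)
means = np.array([[1800., 0.09, 120.], [2200., 0.07, 60.], [2600., 0.06, 200.]])
samples = means[true_zone] + rng.normal(scale=[60., 0.004, 10.], size=(n_cells, 3))

# A maximum-likelihood Gaussian mixture defines the zonal geometry
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(samples)

# Representative (zonal) geophysical parameters for petrophysical analysis
for k in range(3):
    print(f"zone {k}: mean parameters = {gmm.means_[k].round(3)}")
```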

Relevance: 30.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulation through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
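
A simplified, non-sequential sketch of the core idea: a non-parametric (kernel) density captures the in situ relation between electrical and hydraulic conductivity from collocated borehole data, and hydraulic conductivity is then drawn conditionally on the regional electrical model. The spatial conditioning of the full Bayesian sequential simulation is deliberately omitted, and all data are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
# Hypothetical collocated borehole logs: log10 electrical conductivity (sigma)
# and log10 hydraulic conductivity (K) defining the in situ relationship
log_sigma_bh = rng.normal(-1.5, 0.3, 200)
log_k_bh = 0.8 * log_sigma_bh - 3.0 + rng.normal(0, 0.2, 200)
kde = gaussian_kde(np.vstack([log_sigma_bh, log_k_bh]))  # joint density p(sigma, K)

def sample_log_k(log_sigma, n_draws=1, k_grid=np.linspace(-7, -1, 200)):
    """Draw log10 K conditioned on a regional-scale log10 sigma value."""
    pts = np.vstack([np.full_like(k_grid, log_sigma), k_grid])
    cond = kde(pts)
    cond = cond / cond.sum()                       # discretised conditional pdf
    return rng.choice(k_grid, size=n_draws, p=cond)

# Downscale: assign K stochastically at every cell of a regional ERT model
ert_log_sigma = rng.normal(-1.5, 0.25, 1000)       # one value per coarse cell
log_k_field = np.array([sample_log_k(s)[0] for s in ert_log_sigma])
print(log_k_field[:5].round(2))
```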

Relevance: 30.00%

Abstract:

This paper presents automated segmentation of structures in the Head and Neck (H&N) region, using an active contour-based joint registration and segmentation model. A new atlas selection strategy is also used. Segmentation is performed based on the dense deformation field computed from the registration of selected structures in the atlas image that have distinct boundaries, onto the patient's image. This approach results in robust segmentation of the structures of interest, even in the presence of tumors, or anatomical differences between the atlas and the patient image. For each patient, an atlas image is selected from the available atlas database, based on the similarity metric value, computed after performing an affine registration between each image in the atlas database and the patient's image. Unlike many of the previous approaches in the literature, the similarity metric is not computed over the entire image region; rather, it is computed only in the regions of the soft tissue structures to be segmented. Qualitative and quantitative evaluation of the results is presented.
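
A minimal sketch of the atlas-selection idea only: a similarity metric (here normalised cross-correlation, as an assumed stand-in) computed inside a soft-tissue mask rather than over the whole image, with the atlases assumed to be already affinely registered to the patient:

```python
import numpy as np

def masked_ncc(atlas_img, patient_img, mask):
    """Normalised cross-correlation computed only inside a region mask.

    The mask restricts the similarity metric to the soft-tissue structures
    of interest rather than the whole image, mirroring the atlas-selection idea.
    """
    a = atlas_img[mask].astype(float)
    b = patient_img[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_atlas(atlas_database, patient_img, mask):
    """Pick the atlas whose (affinely pre-registered) image is most similar."""
    scores = {name: masked_ncc(img, patient_img, mask)
              for name, img in atlas_database.items()}
    return max(scores, key=scores.get), scores

# Toy example with random volumes and a central soft-tissue mask
rng = np.random.default_rng(5)
patient = rng.normal(size=(32, 32, 32))
atlases = {f"atlas_{i}": patient + rng.normal(scale=s, size=patient.shape)
           for i, s in enumerate([0.2, 0.8, 1.5])}
mask = np.zeros_like(patient, dtype=bool)
mask[8:24, 8:24, 8:24] = True
print(select_atlas(atlases, patient, mask)[0])
```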

Relevance: 30.00%

Abstract:

Background: Detection rates for adenoma and early colorectal cancer (CRC) are unsatisfactory due to low compliance with invasive screening procedures such as colonoscopy. There is a large unmet screening need, calling for an accurate, non-invasive and cost-effective test to screen for early neoplastic and pre-neoplastic lesions. Our goal is to identify effective biomarker combinations to develop a screening test aimed at detecting precancerous lesions and early CRC stages, based on a multigene assay performed on peripheral blood mononuclear cells (PBMC). Methods: A pilot study was conducted on 92 subjects. Colonoscopy revealed 21 CRC, 30 adenomas larger than 1 cm and 41 healthy controls. A panel of 103 biomarkers was selected by two approaches: a candidate gene approach based on literature review, and whole transcriptome analysis of a subset of this cohort by Illumina TAG profiling. Blood samples were taken from each patient and PBMC purified. Total RNA was extracted and the 103 biomarkers were tested by multiplex RT-qPCR on the cohort. Different univariate and multivariate statistical methods were applied to the PCR data, and 60 biomarkers with significant p-values (< 0.01) for most of the methods were selected. Results: The 60 biomarkers are involved in several different biological functions, such as cell adhesion, cell motility, cell signaling, cell proliferation, development and cancer. Two distinct molecular signatures derived from the biomarker combinations were established based on penalized logistic regression to separate patients without lesion from those with CRC or adenoma. These signatures were validated using a bootstrapping method, leading to a separation of patients without lesion from those with CRC (Se 67%, Sp 93%, AUC 0.87) and from those with adenomas larger than 1 cm (Se 63%, Sp 83%, AUC 0.77). In addition, the organ and disease specificity of these signatures was confirmed by means of patients with other cancer types and inflammatory bowel diseases. Conclusions: The two defined biomarker combinations effectively detect the presence of CRC and adenomas larger than 1 cm with high sensitivity and specificity. A prospective, multicentric, pivotal study is underway in order to validate these results in a larger cohort.
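
A minimal sketch of the signature-building and validation idea: an L1-penalised logistic regression on synthetic "biomarker" data, with a bootstrap estimate of out-of-bag AUC. The data, penalty strength and biomarker count are illustrative, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
# Hypothetical data: expression of 60 biomarkers in PBMC for 92 subjects
n, p = 92, 60
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)  # 1 = lesion

# Penalised (L1) logistic regression builds a sparse molecular signature
model = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
model.fit(X, y)
signature = np.flatnonzero(model.coef_[0])
print(f"{signature.size} biomarkers retained in the signature")

# Bootstrap validation of discrimination (out-of-bag AUC)
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)                   # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(n), idx)         # subjects left out of this resample
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue
    m = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], m.predict_proba(X[oob])[:, 1]))
print(f"bootstrap AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```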

Relevance: 30.00%

Abstract:

For radiotherapy treatment planning of retinoblastoma in childhood, Computed Tomography (CT) represents the standard method for tumor volume delineation, despite some inherent limitations. CT scans are very useful in providing information on physical density for dose calculation and morphological volumetric information, but present low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is currently not integrated in the treatment planning and is used only for diagnosis and follow-up. Our ultimate goal is an automatic segmentation of the gross tumor volume (GTV) in the 3D US, the segmentation of the organs at risk (OAR) in the CT, and the registration of both. In this paper, we present some preliminary results in this direction. We present 3D active contour-based segmentation of the eyeball and the lens in CT images; the presented approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. Then, for the fusion of 3D CT and US images, we present two approaches: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eyeball contour information on CT and US images.
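
A minimal sketch of the landmark-based transformation option (i): a least-squares rigid transform between corresponding CT and US landmarks via the Kabsch algorithm, on synthetic points:

```python
import numpy as np

def landmark_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    corresponding landmarks src -> dst (both N x 3), via the Kabsch algorithm."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy example: landmarks picked on US, mapped into CT coordinates
rng = np.random.default_rng(4)
us_pts = rng.uniform(0, 30, size=(6, 3))
angle = np.deg2rad(12)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
ct_pts = us_pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = landmark_rigid_transform(us_pts, ct_pts)
print(np.allclose(us_pts @ R.T + t, ct_pts, atol=1e-8))
```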

Relevance: 30.00%

Abstract:

This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption can decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than with traditional approaches based on a fixed number of jobs or a fixed memory threshold.
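
A toy sketch of a fuzzy-style controller that adapts a memory threshold to the overall load; the membership functions and rule consequents are invented for illustration and are not the published algorithm:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0, 1)

def adapt_memory_threshold(load, current_threshold):
    """Toy fuzzy rule base: if node load is low, raise the memory threshold
    (allowing more parallel jobs); if it is high, lower it.
    load in [0, 1], threshold as a fraction of total RAM."""
    low = triangular(load, -0.4, 0.0, 0.5)
    medium = triangular(load, 0.2, 0.5, 0.8)
    high = triangular(load, 0.5, 1.0, 1.4)
    # Rule consequents: weighted change applied to the threshold
    delta = (low * 0.05 + medium * 0.0 + high * -0.05) / (low + medium + high + 1e-12)
    return float(np.clip(current_threshold + delta, 0.5, 0.95))

threshold = 0.8
for load in [0.2, 0.5, 0.9]:
    threshold = adapt_memory_threshold(load, threshold)
    print(f"load={load:.1f} -> memory threshold={threshold:.2f}")
```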

Relevance: 30.00%

Abstract:

Two likelihood ratio (LR) approaches are presented to evaluate the strength of evidence of MDMA tablet comparisons. The first is based on a more 'traditional' comparison of MDMA tablets using distance measures (e.g., Pearson correlation distance or Euclidean distance). In this approach, LRs are calculated from the distribution of distances between tablets of the same batch and that of different batches. The second approach is based on methods used in some other fields of forensic comparison: here LRs are calculated from the distribution of values of MDMA tablet characteristics within a specific batch and across all batches. The data used in this paper should be seen as examples to illustrate both methods; in future research the methods can be applied to other and more complex data. The methods and their results are discussed, considering their performance in evidence evaluation and several practical aspects. With respect to evidence in favor of the correct hypothesis, the second method proved better than the first: the LRs in same-batch comparisons are generally higher than with the first method, and the LRs in different-batch comparisons are generally lower. On the other hand, for operational purposes (where quick information is needed), the first method may be preferred because it is less time-consuming. With the first method a model has to be estimated only occasionally, so only a few measurements are required, whereas with the second method more measurements are needed because a new model has to be estimated each time.
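
A minimal sketch of the first (distance-based) approach: kernel density estimates of the same-batch and different-batch distance distributions, with the LR taken as their ratio at the observed distance. The distances here are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# Hypothetical Pearson-correlation distances between pairs of tablet profiles
same_batch_dist = rng.beta(2, 20, 500)        # small distances within a batch
diff_batch_dist = rng.beta(5, 6, 500)         # larger distances between batches

# Model both distance distributions and form a likelihood ratio for a new pair
f_same = gaussian_kde(same_batch_dist)
f_diff = gaussian_kde(diff_batch_dist)

def likelihood_ratio(distance):
    """LR = p(distance | same batch) / p(distance | different batches)."""
    return float(f_same(distance)[0] / f_diff(distance)[0])

for d in [0.05, 0.2, 0.5]:
    print(f"distance={d:.2f}  LR={likelihood_ratio(d):.2f}")
```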

Relevance: 30.00%

Abstract:

Single-trial analysis of human electroencephalography (EEG) has recently been proposed for better understanding the contribution of individual subjects to a group-analysis effect, as well as for investigating single-subject mechanisms. Independent Component Analysis (ICA) has repeatedly been applied to concatenated single-trial responses at the single-subject level in order to extract those components that resemble activities of interest. More recently, we proposed a single-trial method based on topographic maps that determines which voltage configurations are reliably observed at the event-related potential (ERP) level by taking advantage of repetitions across trials. Here, we investigated the correspondence between the maps obtained by ICA and the topographies obtained by the single-trial clustering algorithm that best explained the variance of the ERP. To do this, we used exemplar data provided on the EEGLAB website, based on a dataset from a visual target detection task. We show there to be robust correspondence both at the level of the activation time courses and at the level of the voltage configurations of a subset of relevant maps. We additionally show the estimated inverse solution (based on low-resolution electromagnetic tomography) of two corresponding maps occurring at approximately 300 ms post-stimulus onset, as estimated by the two aforementioned approaches. The spatial distributions of the estimated sources were significantly correlated and had in common a right parietal activation within Brodmann's Area (BA) 40. Despite their differences in terms of theoretical bases, the consistency between the results of these two approaches shows that their underlying assumptions are indeed compatible.
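
A minimal sketch of the ICA side of the comparison: FastICA applied to trial-concatenated multichannel data, returning activation time courses and channel topographies. The simulated "P300-like" data are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
n_trials, n_samples, n_channels = 100, 256, 32

# Hypothetical single-trial EEG: a P300-like component plus noise
t = np.linspace(0, 1, n_samples)
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)                   # ~300 ms post-stimulus
topography = rng.normal(size=n_channels)                   # fixed voltage map
trials = (p300[None, :, None] * topography[None, None, :]
          * rng.uniform(0.5, 1.5, (n_trials, 1, 1))
          + rng.normal(scale=0.5, size=(n_trials, n_samples, n_channels)))

# Concatenate trials along time and unmix with ICA
concatenated = trials.reshape(-1, n_channels)               # (trials*samples, channels)
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(concatenated)                   # activation time courses
maps = ica.mixing_                                          # channel topographies
print(sources.shape, maps.shape)
```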

Relevance: 30.00%

Abstract:

Normal and abnormal brains can be segmented by registering the target image with an atlas. Here, an atlas is defined as the combination of an intensity image (template) and its segmented image (the atlas labels). After registering the atlas template and the target image, the atlas labels are propagated to the target image. We define this process as atlas-based segmentation. In recent years, researchers have investigated registration algorithms to match atlases to query subjects and also strategies for atlas construction. In this paper we present a review of the automated approaches for atlas-based segmentation of magnetic resonance brain images. We aim to point out the strengths and weaknesses of atlas-based methods and suggest new research directions. We use two different criteria to present the methods. First, we refer to the algorithms according to their atlas-based strategy: label propagation, multi-atlas methods, and probabilistic techniques. Subsequently, we classify the methods according to their medical target: the brain and its internal structures, tissue segmentation in healthy subjects, tissue segmentation in fetuses, neonates and elderly subjects, and segmentation of damaged brains. A quantitative comparison of the results reported in the literature is also presented.
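
A minimal sketch of one strategy covered by such reviews, multi-atlas fusion by per-voxel majority voting; the label maps are assumed to be already warped into the target space and are synthetic here:

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse label maps propagated from several registered atlases.

    propagated_labels: array of shape (n_atlases, *volume_shape) with integer
    labels already warped into the target space; returns the per-voxel mode.
    """
    labels = np.asarray(propagated_labels)
    n_classes = labels.max() + 1
    # Count votes per class at every voxel and pick the most frequent label
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Toy example: five atlases voting on a small volume with 3 tissue classes
rng = np.random.default_rng(2)
truth = rng.integers(0, 3, size=(20, 20, 20))
atlases = np.stack([np.where(rng.random(truth.shape) < 0.8, truth,
                             rng.integers(0, 3, truth.shape)) for _ in range(5)])
fused = majority_vote_fusion(atlases)
print(f"voxel agreement with truth: {(fused == truth).mean():.2%}")
```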

Relevance: 30.00%

Abstract:

Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. New data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may allow increasing the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost this approach may save up to 70% of the time spent in the field.
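
A toy sketch of the stratification idea: under a fixed field budget, sites are chosen either at random or by predicted habitat suitability, and the number of occupied sites found is compared. The adaptive, iterative part of the approach is not reproduced, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(10)
n_sites = 10_000
# Hypothetical habitat suitability from a niche-based distribution model
suitability = rng.beta(1.2, 8.0, n_sites)
# True (unknown) occurrences: a rare species, more likely where suitability is high
occupied = rng.random(n_sites) < 0.02 * suitability / suitability.mean()

budget = 200  # number of sites that can be visited

# Simple random sampling
random_sites = rng.choice(n_sites, budget, replace=False)

# Model-based sampling: visit the sites with the highest predicted suitability
model_sites = np.argsort(suitability)[-budget:]

print("new populations found (random):     ", occupied[random_sites].sum())
print("new populations found (model-based):", occupied[model_sites].sum())
```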

Relevance: 30.00%

Abstract:

Mass spectrometry (MS) is currently the most sensitive and selective analytical technique for routine peptide and protein structure analysis. Top-down proteomics is based on tandem mass spectrometry (MS/MS) of intact proteins, where multiply charged precursor ions are fragmented in the gas phase, typically by electron transfer or electron capture dissociation, to yield sequence-specific fragment ions. This approach is primarily used for the study of protein isoforms, including localization of post-translational modifications and identification of splice variants. Bottom-up proteomics is utilized for routine high-throughput protein identification and quantitation from complex biological samples. The proteins are first enzymatically digested into small (usually less than ca. 3 kDa) peptides, these are identified by MS or MS/MS, usually employing collisional activation techniques. To overcome the limitations of these approaches while combining their benefits, middle-down proteomics has recently emerged. Here, the proteins are digested into long (3-15 kDa) peptides via restricted proteolysis followed by the MS/MS analysis of the obtained digest. With advancements of high-resolution MS and allied techniques, routine implementation of the middle-down approach has been made possible. Herein, we present the liquid chromatography (LC)-MS/MS-based experimental design of our middle-down proteomic workflow coupled with post-LC supercharging.
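
A rough, purely illustrative sketch of the middle-down selection criterion: an in silico digest that cuts after one residue type and keeps only fragments in the 3-15 kDa window, with masses crudely approximated by an average residue mass of ~110 Da (this is not the authors' LC-MS/MS workflow):

```python
import re

def insilico_restricted_digest(sequence, cleave_after="E", min_kda=3.0, max_kda=15.0):
    """Crude in silico digest: cut after a single residue type (here a Glu-C-like
    cleavage after E) and keep peptides in the middle-down 3-15 kDa window.
    Masses are approximated with an average residue mass of ~110 Da."""
    fragments = re.findall(rf"[^{cleave_after}]*{cleave_after}|[^{cleave_after}]+$", sequence)
    kept = []
    for frag in fragments:
        approx_kda = (len(frag) * 110.0 + 18.0) / 1000.0   # add ~18 Da for water
        if min_kda <= approx_kda <= max_kda:
            kept.append((frag, round(approx_kda, 1)))
    return kept

# Toy protein sequence: long stretches between E residues give middle-down peptides
protein = "MKT" + "A" * 40 + "E" + "G" * 55 + "E" + "V" * 10 + "E" + "L" * 120
for pep, kda in insilico_restricted_digest(protein):
    print(f"{kda:5.1f} kDa  {pep[:20]}...")
```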