100 results for Robustness
Abstract:
The temporal dynamics of species diversity are shaped by variations in the rates of speciation and extinction, and there is a long history of inferring these rates using first and last appearances of taxa in the fossil record. Understanding diversity dynamics critically depends on unbiased estimates of the unobserved times of speciation and extinction for all lineages, but the inference of these parameters is challenging due to the complex nature of the available data. Here, we present a new probabilistic framework to jointly estimate species-specific times of speciation and extinction and the rates of the underlying birth-death process based on the fossil record. The rates are allowed to vary through time independently of each other, and the probability of preservation and sampling is explicitly incorporated in the model to estimate the true lifespan of each lineage. We implement a Bayesian algorithm to assess the presence of rate shifts by exploring alternative diversification models. Tests on a range of simulated data sets reveal the accuracy and robustness of our approach against violations of the underlying assumptions and various degrees of data incompleteness. Finally, we demonstrate the application of our method with the diversification of the mammal family Rhinocerotidae and reveal a complex history of repeated and independent temporal shifts of both speciation and extinction rates, leading to the expansion and subsequent decline of the group. The estimated parameters of the birth-death process implemented here are directly comparable with those obtained from dated molecular phylogenies. Thus, our model represents a step towards integrating phylogenetic and fossil information to infer macroevolutionary processes.
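The birth-death-with-preservation model underlying the abstract above can be illustrated with a short forward simulation: lineages speciate and go extinct at constant rates, and each true lifespan is then thinned by Poisson-rate fossil preservation, so some lineages leave no fossils at all. This is an illustrative sketch under assumed constant rates, not the authors' Bayesian estimation code; all parameter values are hypothetical.

```python
import random

def simulate_fossil_record(birth=0.3, death=0.2, preservation=0.5,
                           t_max=25.0, seed=7):
    """Forward-simulate a constant-rate birth-death process, then thin each
    lineage's true lifespan with Poisson-rate fossil preservation."""
    rng = random.Random(seed)
    lineages = [[0.0, None]]          # [origin time, extinction time]
    alive = [0]                       # indices of extant lineages
    t = 0.0
    while alive:
        t += rng.expovariate((birth + death) * len(alive))
        if t >= t_max:
            break
        if rng.random() < birth / (birth + death):     # speciation event
            lineages.append([t, None])
            alive.append(len(lineages) - 1)
        else:                                          # extinction event
            i = rng.choice(alive)
            lineages[i][1] = t
            alive.remove(i)
    record = []
    for origin, ext in lineages:
        end = ext if ext is not None else t_max
        fossils, s = [], origin + rng.expovariate(preservation)
        while s < end:                # Poisson fossil occurrences in lifespan
            fossils.append(s)
            s += rng.expovariate(preservation)
        record.append((origin, end, fossils))
    return record

record = simulate_fossil_record()
sampled = [r for r in record if r[2]]  # lineages with at least one fossil
```

The gap between `record` and `sampled` is exactly the incompleteness the model must correct for: inference sees only first and last fossil occurrences, never the true origin and extinction times.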
Abstract:
Substantial investment in climate change research has led to dire predictions of the impacts and risks to biodiversity. The Intergovernmental Panel on Climate Change fourth assessment report(1) cites 28,586 studies demonstrating significant biological changes in terrestrial systems(2). Already high extinction rates, driven primarily by habitat loss, are predicted to increase under climate change(3-6). Yet there is little specific advice or precedent in the literature to guide climate adaptation investment for conserving biodiversity within realistic economic constraints(7). Here we present a systematic ecological and economic analysis of a climate adaptation problem in one of the world's most species-rich and threatened ecosystems: the South African fynbos. We discover a counterintuitive optimal investment strategy that switches twice between options as the available adaptation budget increases. We demonstrate that optimal investment is nonlinearly dependent on available resources, making the choice of how much to invest as important as determining where to invest and what actions to take. Our study emphasizes the importance of a sound analytical framework for prioritizing adaptation investments(4). Integrating ecological predictions in an economic decision framework will help support complex choices between adaptation options under severe uncertainty. Our prioritization method can be applied at any scale to minimize species loss and to evaluate the robustness of decisions to uncertainty about key assumptions.
Abstract:
PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of a suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques were investigated for their ability to image a single susceptibility marker in vitro at 3 Tesla. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.
Abstract:
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
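As a much-simplified illustration of the mixture-model clustering at the heart of a tool like MBIS, the sketch below fits a two-component univariate Gaussian mixture by expectation-maximization. It omits everything that makes MBIS distinctive (multichannel data, bias field correction, the MRF prior and graph-cuts optimization); the data are synthetic and the structure is the generic E-step/M-step loop only.

```python
import math, random

def em_two_gaussians(x, iters=50):
    """Fit a two-component univariate Gaussian mixture by EM."""
    mu = [min(x), max(x)]             # spread-apart initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-6)
    return mu, var, pi

rng = random.Random(0)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(5.0, 1.0) for _ in range(200)])
mu, var, pi = em_two_gaussians(data)
```

In a segmentation setting the responsibilities, not the hard labels, are what a spatial prior such as an MRF then regularizes.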
Abstract:
A major issue in the application of waveform inversion methods to crosshole ground-penetrating radar (GPR) data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a recently published time-domain inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity of both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little to no trade-off between the wavelet estimation and the tomographic imaging procedures.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor. In particular, it was important to assess the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
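Of the exploratory interpolators mentioned above, K Nearest Neighbors is the simplest to sketch: estimate a value at an unsampled location from the k closest stations, here with inverse-distance weighting. The station coordinates and radon values below are hypothetical, and k and the weighting scheme are illustrative choices only.

```python
import math

def knn_interpolate(stations, query, k=3):
    """Inverse-distance-weighted K-nearest-neighbour estimate.
    stations: list of (x, y, value); query: (x, y)."""
    dists = sorted((math.dist(query, (x, y)), v) for x, y, v in stations)
    nearest = dists[:k]
    if nearest[0][0] == 0.0:          # query coincides with a station
        return nearest[0][1]
    wsum = sum(1.0 / d for d, _ in nearest)
    return sum(v / d for d, v in nearest) / wsum

# hypothetical indoor-radon measurements (x, y, Bq/m3)
stations = [(0, 0, 100), (1, 0, 120), (0, 1, 80), (5, 5, 400)]
est = knn_interpolate(stations, (0.4, 0.4), k=3)
```

Because the estimate is a convex combination of neighbour values, KNN can never reproduce extremes beyond the data range, which is one reason the text above turns to simulation-based methods for extreme-value modeling.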
Abstract:
To study the properties of human primary somatosensory (S1) cortex as well as its role in cognitive and social processes, it is necessary to noninvasively localize the cortical representations of the body. Being arguably the most relevant body parts for tactile exploration, cortical representations of fingers are of particular interest. The aim of the present study was to investigate the cortical representation of individual fingers (D1-D5), using human touch as a stimulus. Utilizing the high BOLD sensitivity and spatial resolution at 7T, we found that each finger is represented within three subregions of S1 in the postcentral gyrus. Within each of these three areas, the fingers are sequentially organized (from D1 to D5) in a somatotopic manner. Therefore, these finger representations likely reflect distinct activations of BAs 3b, 1, and 2, similar to those described in electrophysiological work in non-human primates. Quantitative analysis of the local BOLD responses revealed that within BA3b, each finger representation is specific to its own stimulation without any cross-finger responsiveness. This finger response selectivity was less prominent in BA 1 and in BA 2. A test-retest procedure highlighted the reproducibility of the results and the robustness of the method for BA 3b. Finally, the representation of the thumb was enlarged compared to the other fingers within BAs 1 and 2. These findings extend previous human electrophysiological and neuroimaging data but also reveal differences in the functional organization of S1 in human and nonhuman primates. Hum Brain Mapp 35:213-226, 2014. © 2012 Wiley Periodicals, Inc.
Abstract:
Plants such as Arabidopsis thaliana respond to foliar shade and to neighbors that may become competitors for light resources by elongation growth, which secures access to unfiltered sunlight. The challenges faced during this shade avoidance response (SAR) differ under a light-absorbing canopy and during neighbor detection, where light remains abundant. In both situations, elongation growth depends on auxin and on transcription factors of the phytochrome interacting factor (PIF) class. Using a computational modeling approach to study the SAR regulatory network, we identify and experimentally validate a previously unidentified role for LONG HYPOCOTYL IN FAR-RED 1 (HFR1), a negative regulator of the PIFs. Moreover, we find that during neighbor detection, growth is promoted primarily by the production of auxin. In contrast, in true shade, the system operates with less auxin but with an increased sensitivity to the hormonal signal. Our data suggest that this latter signal is less robust, which may reflect a cost-to-robustness tradeoff, a system trait long recognized by engineers and forming the basis of information theory.
Abstract:
In this paper, we present the segmentation of the head and neck lymph node regions using a new active contour-based atlas registration model. We propose to segment the lymph node regions without directly including them in the atlas registration process; instead, they are segmented using the dense deformation field computed from the registration of the atlas structures with distinct boundaries. This approach results in robust and accurate segmentation of the lymph node regions even in the presence of significant anatomical variations between the atlas image and the patient's image to be segmented. We also present a quantitative evaluation of lymph node region segmentation using various statistical as well as geometrical metrics: sensitivity, specificity, Dice similarity coefficient and Hausdorff distance. A comparison of the proposed method with two other state-of-the-art methods is presented. The robustness of the proposed method to the atlas selection, in segmenting the lymph node regions, is also evaluated.
Abstract:
We introduce a label-free technology based on digital holographic microscopy (DHM) with applicability for screening by imaging, and we demonstrate its capability for cytotoxicity assessment using living mammalian cells. For this first high-content-screening-compatible application, we automated a digital holographic microscope for image acquisition of cells using commercially available 96-well plates. Data generated through both label-free DHM imaging and fluorescence-based methods were in good agreement for cell viability identification, and a Z'-factor close to 0.9 was determined, validating the robustness of the DHM assay for phenotypic screening. Further, an excellent correlation was obtained between experimental cytotoxicity dose-response curves and known IC50 values for different toxic compounds. For comparable results, DHM has the major advantages of being label-free and close to an order of magnitude faster than automated standard fluorescence microscopy.
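The Z'-factor quality metric cited above has a standard closed form (Zhang et al., 1999): one minus three times the summed standard deviations of the positive and negative controls, divided by the separation of their means; values above roughly 0.5 indicate an excellent screening assay. A minimal sketch on hypothetical control readouts (not the paper's data):

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor assay-quality metric:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# hypothetical viability readouts for positive and negative control wells
positive = [98, 101, 99, 102, 100, 97]
negative = [5, 7, 4, 6, 5, 6]
quality = z_prime(positive, negative)
```

With well-separated, low-variance controls like these the metric lands near 0.9, the regime the abstract reports for the DHM assay.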
Abstract:
This study aimed to assess the psychometric robustness of the French version of the Supportive Care Needs Survey and breast cancer (BC) module (SCNS-SF34-Fr and SCNS-BR8-Fr). Breast cancer patients were recruited in two hospitals (in Paris, France and Lausanne, Switzerland), either in ambulatory chemotherapy or radiotherapy, or in surgery services. They were invited to complete the SCNS-SF34-Fr and SCNS-BR8-Fr as well as quality of life and patient satisfaction questionnaires. Three hundred and eighty-four (73% response rate) BC patients returned completed questionnaires. A five-factor model was confirmed for the SCNS-SF34-Fr with adequate goodness-of-fit indexes, although some items evidenced content redundancy, and a one-factor model was identified for the SCNS-BR8-Fr. Internal consistency and test-retest estimates were satisfactory for most scales. The SCNS-SF34-Fr and SCNS-BR8-Fr scales demonstrated conceptual differences with the quality of life and satisfaction with care scales, highlighting the specific relevance of this assessment. Different levels of needs could be differentiated between groups of BC patients in terms of age and level of education (P < 0.001). The SCNS-SF34-Fr and SCNS-BR8-Fr present adequate psychometric properties despite some redundant items. These questionnaires allow for the crucial endeavour of designing appropriate care services according to BC patients' characteristics.
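Internal consistency of the kind reported above is conventionally quantified with Cronbach's alpha: the number of items scaled by one minus the ratio of summed item variances to the variance of respondents' total scores. A minimal sketch on hypothetical item scores (not the SCNS data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of respondent scores per questionnaire item."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent sums
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# hypothetical 4-item scale answered by 6 respondents
items = [[3, 4, 2, 5, 4, 3],
         [3, 5, 2, 4, 4, 3],
         [2, 4, 3, 5, 3, 3],
         [3, 4, 2, 5, 4, 4]]
alpha = cronbach_alpha(items)
```

Highly correlated items push alpha toward 1; alpha can also be inflated by the content redundancy the abstract notes, which is why redundant items and high alpha often appear together.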
Abstract:
PURPOSE: The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole-brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore a) the validity of having a single fixed set of model coefficients for the whole brain and b) the stability of the model coefficients in a large cohort. METHODS: Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. RESULTS: The model's validity was demonstrated by correspondence between the synthetic and measured R1 values and by high stability of the model coefficients across a large cohort. CONCLUSION: A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. Magn Reson Med 73:1309-1314, 2015. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
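A general linear relaxometry model of the kind described above can be written as R1 = b0 + b1·MT + b2·R2* and fit by ordinary least squares over voxels. The sketch below solves the 3x3 normal equations in plain Python on synthetic voxel values generated from known coefficients; all numbers are illustrative, not the study's values.

```python
def fit_linear_relaxometry(mt, r2s, r1):
    """Least-squares fit of R1 = b0 + b1*MT + b2*R2* via normal equations."""
    rows = [[1.0, m, r] for m, r in zip(mt, r2s)]      # design matrix X
    xtx = [[sum(a[i] * a[j] for a in rows) for j in range(3)]
           for i in range(3)]
    xty = [sum(a[i] * y for a, y in zip(rows, r1)) for i in range(3)]
    aug = [xtx[i] + [xty[i]] for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    for c in range(3):
        p = max(range(c, 3), key=lambda q: abs(aug[q][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for q in range(3):
            if q != c:
                f = aug[q][c] / aug[c][c]
                aug[q] = [u - f * v for u, v in zip(aug[q], aug[c])]
    return [aug[i][3] / aug[i][i] for i in range(3)]

# synthetic voxels generated from known coefficients (b0, b1, b2)
mt = [0.8, 1.1, 0.9, 1.3, 1.0, 1.2]
r2s = [20.0, 25.0, 18.0, 30.0, 22.0, 28.0]
true = (0.30, 0.35, 0.006)
r1 = [true[0] + true[1] * m + true[2] * r for m, r in zip(mt, r2s)]
coef = fit_linear_relaxometry(mt, r2s, r1)
```

On noise-free synthetic data the fit recovers the generating coefficients exactly, which is the same logic the study uses in reverse: fixed global coefficients plus measured MT and R2* maps yield synthetic R1 values for comparison against the measured ones.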
Abstract:
BACKGROUND: Among the many definitions of frailty, the frailty phenotype defined by Fried et al. is one of the few constructs that has been repeatedly validated: first in the Cardiovascular Health Study (CHS) and subsequently in other large cohorts in North America. In Europe, the Survey of Health, Aging and Retirement in Europe (SHARE) is a gold mine of individual, economic and health information that can provide insight into a better understanding of frailty across diverse population settings. A recent adaptation of the original five CHS frailty criteria was proposed to make use of SHARE data and measure frailty in the European population. To test the validity of the SHARE-operationalized frailty phenotype, this study aims to evaluate its prospective association with adverse health outcomes. METHODS: Data are from 11,015 community-dwelling men and women aged 60+ participating in waves 1 and 2 of the Survey of Health, Aging and Retirement in Europe, a population-based survey. Multivariate logistic regression analyses were used to assess the 2-year follow-up effect of the SHARE-operationalized frailty phenotype on the incidence of disability (disability-free at baseline) and on worsening disability and morbidity, adjusting for age, sex, income and baseline morbidity and disability. RESULTS: At 2-year follow-up, frail individuals were at increased risk for: developing mobility (OR 3.07, 95% CI 1.02-9.36), IADL (OR 5.52, 95% CI 3.76-8.10) and BADL (OR 5.13, 95% CI 3.53-7.44) disability; worsening mobility (OR 2.94, 95% CI 2.19-3.93), IADL (OR 4.43, 95% CI 3.19-6.15) and BADL (OR 4.53, 95% CI 3.14-6.54) disability; and worsening morbidity (OR 1.77, 95% CI 1.35-2.32). These associations were significant even among the prefrail, but with a lower magnitude of effect.
CONCLUSIONS: The SHARE-operationalized frailty phenotype is significantly associated with all tested health outcomes, independent of baseline morbidity and disability, in community-dwelling men and women aged 60 and older living in Europe. The robustness of these results validates the use of this phenotype in the SHARE survey for future research on frailty in Europe.
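Odds ratios with confidence intervals like those reported above are, in the unadjusted two-group case, computed from a 2x2 contingency table with the standard Wald interval on the log odds ratio. A minimal sketch with entirely hypothetical counts (not the SHARE data; the adjusted ORs in the abstract come from multivariate logistic regression, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a/b = outcome yes/no among exposed (e.g. frail),
    c/d = outcome yes/no among unexposed (e.g. non-frail)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 60/140 exposed vs 80/720 unexposed developed disability
or_, lo, hi = odds_ratio_ci(60, 140, 80, 720)
```

The interval is asymmetric around the OR because it is symmetric on the log scale; an interval excluding 1 corresponds to a significant association.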
Abstract:
The cross-recognition of peptides by cytotoxic T lymphocytes is a key element in immunology and in particular in peptide-based immunotherapy. Here we develop three-dimensional (3D) quantitative structure-activity relationships (QSARs) to predict cross-recognition by Melan-A-specific cytotoxic T lymphocytes of peptides bound to HLA A*0201 (hereafter referred to as HLA A2). First, we predict the structure of a set of self- and pathogen-derived peptides bound to HLA A2 using a previously developed ab initio structure prediction approach [Fagerberg et al., J. Mol. Biol., 521-46 (2006)]. Second, shape and electrostatic energy calculations are performed on a 3D grid to produce similarity matrices, which are combined with a genetic neural network method [So et al., J. Med. Chem., 4347-59 (1997)] to generate 3D-QSAR models. The models are extensively validated using several different approaches. During model generation, the leave-one-out cross-validated correlation coefficient (q²) is used as the fitness criterion, and all obtained models are evaluated based on their q² values. Moreover, the best model obtained for a partitioned data set is evaluated by its correlation coefficient (r = 0.92 for the external test set). The physical relevance of all models is tested using a functional dependence analysis, and the robustness of the models obtained for the entire data set is confirmed using y-randomization. Finally, the validated models are tested for their utility in the setting of rational peptide design: their ability to discriminate between peptides that only contain side-chain substitutions in a single secondary anchor position is evaluated. In addition, the predicted cross-recognition of the mono-substituted peptides is confirmed experimentally in chromium-release assays.
These results underline the utility of 3D-QSARs in peptide mimetic design and suggest that the properties of the unbound epitope are sufficient to capture most of the information needed to determine cross-recognition.
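The leave-one-out cross-validated q² used as the fitness criterion above has a simple definition: q² = 1 - PRESS / SS_total, where PRESS sums the squared errors of predicting each observation from a model trained without it. The sketch below computes it for a one-descriptor linear model on hypothetical activity data; the QSAR models in the abstract are of course far richer than a single straight line.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def loo_q2(xs, ys):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_total."""
    press = 0.0
    for i in range(len(xs)):
        xt = xs[:i] + xs[i + 1:]      # training set without point i
        yt = ys[:i] + ys[i + 1:]
        b, a = fit_line(xt, yt)
        press += (ys[i] - (a + b * xs[i])) ** 2
    ss = sum((y - mean(ys)) ** 2 for y in ys)
    return 1.0 - press / ss

# hypothetical activities with a strong linear structure-activity trend
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0, 6.9, 8.2]
q2 = loo_q2(x, y)
```

Unlike the ordinary r², q² can go negative when the model predicts held-out points worse than the mean does, which is what makes it a useful guard against overfitting during model selection.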
Abstract:
We propose a compressive sensing algorithm that exploits geometric properties of images to recover images of high quality from few measurements. The image reconstruction is done by iterating the two following steps: 1) estimation of normal vectors of the image level curves, and 2) reconstruction of an image fitting the normal vectors, the compressed sensing measurements, and the sparsity constraint. The proposed technique can naturally extend to nonlocal operators and graphs to exploit the repetitive nature of textured images to recover fine detail structures. In both cases, the problem is reduced to a series of convex minimization problems that can be efficiently solved with a combination of variable splitting and augmented Lagrangian methods, leading to fast and easy-to-code algorithms. Extensive experiments show a clear improvement over related state-of-the-art algorithms in the quality of the reconstructed images and in the robustness of the proposed method to noise, different kinds of images, and reduced measurements.
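The sparse-recovery subproblem at the core of compressive sensing can be illustrated with a much-simplified solver. The sketch below uses plain iterative soft thresholding (ISTA) on a synthetic sparse signal rather than the variable-splitting/augmented-Lagrangian solvers described above, and it omits the geometric normal-vector terms entirely; matrix sizes, the regularization weight, and the data are all illustrative assumptions.

```python
import random

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def rmatvec(A, r):
    return [sum(A[i][j] * r[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def soft(z, tau):
    """Soft thresholding, the proximal operator of the l1 norm."""
    return max(abs(z) - tau, 0.0) * (1.0 if z > 0 else -1.0)

def ista(A, y, lam=0.02, iters=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    n = len(A[0])
    # estimate the gradient Lipschitz constant (top eigenvalue of A^T A)
    v = [1.0] * n
    for _ in range(50):
        w = rmatvec(A, matvec(A, v))
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    t = 0.9 / norm                    # conservative step size
    x = [0.0] * n
    for _ in range(iters):
        grad = rmatvec(A, [p - yi for p, yi in zip(matvec(A, x), y)])
        x = [soft(xi - t * g, t * lam) for xi, g in zip(x, grad)]
    return x

rng = random.Random(1)
m, n = 15, 30
A = [[rng.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[11], x_true[20] = 1.0, -0.7, 0.5
y = matvec(A, x_true)                 # noiseless compressed measurements
x_hat = ista(A, y)
```

Even with only 15 measurements of a length-30 signal, the l1 penalty drives most coefficients to exactly zero and concentrates energy on the true support, which is the recovery principle the full image-domain method builds on.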