997 results for parallel selection


Relevance: 30.00%

Abstract:

Background: Speciation reversal, the erosion of species differentiation via an increase in introgressive hybridization as previously divergent selection regimes weaken, is thought to be an important, yet poorly understood, driver of biodiversity loss. Our study system, the Alpine whitefish (Coregonus spp.) species complex, is a classic example of a recent postglacial adaptive radiation, forming an array of endemic lake flocks with the independent origination of similar ecotypes among flocks. However, many of the lakes of the Alpine radiation have been severely impacted by anthropogenic nutrient enrichment, resulting in a collapse of neutral genetic and phenotypic differentiation within the most polluted lakes. Here we use population genomics to investigate the effects of eutrophication on the selective forces that have shaped this radiation. We studied eight sympatric species assemblages belonging to five independent parallel adaptive radiations, and one species pair in secondary contact. We used AFLP markers and applied FST outlier (BAYESCAN, DFDIST) and logistic regression (MATSAM) analyses to identify candidate regions for disruptive selection in the genome and their associations with adaptive traits within each lake flock. The number of outlier and adaptive-trait-associated loci identified per lake was then regressed against two variables representing the strength of eutrophication: historical phosphorus concentration and contemporary oxygen concentration. Results: While we identify disruptive selection candidate regions in all lake flocks, we find consistent trends, across analysis methods, towards fewer disruptive selection candidate regions and fewer adaptive trait/candidate locus associations in the more polluted lakes. Conclusions: Weakened disruptive selection and a concomitant breakdown of reproductive isolating mechanisms in more polluted lakes has led to increased gene flow between coexisting Alpine whitefish species. We hypothesize that the resulting higher rates of interspecific recombination reduce either the number or the extent of genomic islands of divergence surrounding loci evolving under disruptive natural selection. As the likelihood decreases that AFLP restriction sites will fall within regions of heightened genomic divergence and therefore be classified as FST outlier loci, this produces the negative trend, seen in genome scans of whitefish species flocks, between the number of selection candidate loci recovered and the level of anthropogenic eutrophication. This study explores for the first time the potential effects of human-mediated relaxation of disruptive selection on heterogeneous genomic divergence between coexisting species.
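The final analysis step lends itself to a short sketch. Below is a minimal, hypothetical illustration of regressing per-lake counts of outlier loci on the two eutrophication proxies; all values, and the choice of a Poisson GLM, are placeholder assumptions rather than the study's actual data or model.

```python
# Minimal sketch of the final regression step described above: per-lake counts
# of selection candidate loci regressed on two eutrophication proxies.
# All data values and names here are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

# One row per lake flock: [historical phosphorus (ug/L), contemporary O2 (mg/L)]
eutrophication = np.array([
    [15.0, 9.5], [40.0, 7.0], [90.0, 4.0], [25.0, 8.5], [60.0, 5.5],
])
n_outlier_loci = np.array([14, 9, 3, 12, 6])  # candidate loci per lake (hypothetical)

X = sm.add_constant(eutrophication)
# A Poisson GLM is a natural choice for counts; the study's exact model may differ.
model = sm.GLM(n_outlier_loci, X, family=sm.families.Poisson()).fit()
print(model.summary())
```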

Relevance: 30.00%

Abstract:

Parallel phenotypic divergence in replicated adaptive radiations could result either from parallel genetic divergence in response to similar divergent selection regimes or from equivalent phenotypically plastic responses to the repeated occurrence of contrasting environments. In post-glacial fish, replicated divergence in phenotypes along the benthic-limnetic habitat axis is commonly observed. Here, we use two benthic-limnetic species pairs of whitefish from two Swiss lakes, raised in a common garden design, with reciprocal food treatments in one species pair, to experimentally measure whether feeding efficiency on benthic prey has a genetic basis, underlies phenotypic plasticity, or both. To do so, we offered experimental fish mosquito larvae, partially buried in sand, and measured multiple feeding efficiency variables. Our results reveal both genetic divergence and phenotypically plastic divergence in feeding efficiency, with the phenotypically benthic species raised on benthic food being the most efficient forager on benthic prey. This indicates that both divergent natural selection on genetically heritable traits and adaptive phenotypic plasticity are likely important mechanisms driving phenotypic divergence in adaptive radiation.
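The logic of the reciprocal common-garden design can be illustrated with a small sketch: in a species-by-diet model, a species main effect points to heritable divergence, a diet effect to plasticity, and their interaction to genotype-specific plasticity. The data below are fabricated for illustration only.

```python
# Sketch of the common-garden logic: in a species x rearing-diet ANOVA, a species
# main effect suggests heritable divergence, a diet effect suggests plasticity,
# and their interaction suggests genotype-specific plasticity.
# Data below are fabricated placeholders, not the study's measurements.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "species": ["benthic"] * 4 + ["limnetic"] * 4,
    "diet": ["benthic", "benthic", "limnetic", "limnetic"] * 2,
    "feeding_efficiency": [0.82, 0.79, 0.64, 0.61, 0.55, 0.58, 0.52, 0.50],
})
fit = ols("feeding_efficiency ~ C(species) * C(diet)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```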

Relevance: 30.00%

Abstract:

Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed-form formulas. We illustrate their performance in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
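As a rough illustration of the setting (not the paper's actual multipoint SUR criteria), the sketch below fits a Gaussian process to a toy function, computes the excursion probability above a threshold, and naively picks a batch of candidate points where the above/below classification is most ambiguous; the real criteria also account for interactions among the points within a batch.

```python
# Illustrative sketch (not the paper's SUR criterion): fit a GP, compute the
# excursion probability above a threshold, and greedily pick a batch of points
# where the classification into above/below threshold is most uncertain.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # toy "expensive" function
threshold = 0.5
X_train = rng.uniform(0, 3, (8, 1))
gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(X_train, f(X_train).ravel())

X_cand = np.linspace(0, 3, 200).reshape(-1, 1)
mu, sd = gp.predict(X_cand, return_std=True)
p_exc = norm.cdf((mu - threshold) / np.maximum(sd, 1e-9))
uncertainty = p_exc * (1 - p_exc)              # high where excursion is ambiguous

batch_size = 4                                  # points to evaluate in parallel
batch_idx = np.argsort(uncertainty)[-batch_size:]
print("next batch:", X_cand[batch_idx].ravel())
```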

Relevance: 30.00%

Abstract:

The selection of liver transplant candidates with hepatocellular carcinoma (HCC) is currently validated based on the Milan criteria. The use of extended criteria has remained a matter of debate, mainly because of the absence of prospective validation. The present prospective study recruited patients according to the previously proposed Total Tumor Volume (TTV ≤ 115 cm³)/alpha-fetoprotein (AFP ≤ 400 ng/ml) score. Patients with AFP > 400 ng/ml were excluded, and as such the Milan group was modified to include only patients with AFP < 400 ng/ml; these patients were compared to patients beyond Milan but within TTV/AFP. From January 2007 to March 2013, 233 patients with HCC were listed for liver transplantation. Of these, 195 patients were within Milan and 38 beyond Milan but within TTV/AFP. The average follow-up from listing was 33.9 ± 24.9 months. The risk of drop-out was higher for patients beyond Milan but within TTV/AFP (16/38, 42.1%) than for patients within Milan (49/195, 25.1%, p=0.033). In parallel, intent-to-treat survival from listing was lower in the patients beyond Milan (53.8% vs. 71.6% at four years, p<0.001). After a median waiting time of 8 months, 166 patients were transplanted: 134 within Milan criteria and 32 beyond Milan but within TTV/AFP. They demonstrated acceptable and similar recurrence rates (4.5% vs. 9.4%, p=0.138) and post-transplant survival (78.7% vs. 74.6% at four years, p=0.932). Conclusion: Based on the present prospective study, HCC liver transplant candidate selection could be expanded to the TTV (≤ 115 cm³)/AFP (≤ 400 ng/ml) criteria in centers with at least an 8-month waiting time. An increased risk of drop-out on the waiting list can be expected, but with equivalent and satisfactory post-transplant survival.
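The stated selection rule is simple enough to express directly. The helper below is an illustrative worked example, assuming (as is common in the TTV literature) that total tumor volume is the sum of spherical lesion volumes; it is not code from the study.

```python
# Worked example of the extended selection rule as stated in the abstract:
# eligible if total tumor volume <= 115 cm^3 and AFP <= 400 ng/ml.
# The helper and its per-lesion volume formula are illustrative, not the paper's code.
import math

def total_tumor_volume_cm3(lesion_diameters_cm):
    """Sum of spherical lesion volumes, (4/3)*pi*r^3 per lesion."""
    return sum((4.0 / 3.0) * math.pi * (d / 2.0) ** 3 for d in lesion_diameters_cm)

def within_ttv_afp(lesion_diameters_cm, afp_ng_ml):
    ttv = total_tumor_volume_cm3(lesion_diameters_cm)
    return ttv <= 115.0 and afp_ng_ml <= 400.0

# A patient with three lesions of 3, 2, and 2 cm and AFP of 120 ng/ml:
print(within_ttv_afp([3.0, 2.0, 2.0], afp_ng_ml=120.0))  # True (TTV ~ 22.5 cm^3)
```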

Relevance: 30.00%

Abstract:

We present an overview of the stack-based memory management techniques that we used in our non-deterministic and-parallel Prolog systems: &-Prolog and DASWAM. We believe that the problems associated with non-deterministic and-parallel systems are more general than those encountered in or-parallel and deterministic and-parallel systems, which can be seen as subsets of this more general case. We build on the previously proposed "marker scheme", lifting some of the restrictions associated with the selection of goals while keeping (virtual) memory consumption down. We also review some of the other problems associated with the stack-based management scheme, such as the handling of forward and backward execution, cut, and roll-backs.
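As a loose, toy-level illustration of the marker idea (not the actual &-Prolog or DASWAM implementation), the sketch below delimits each parallel goal's stack section with a marker, reclaims a section immediately when it is topmost, and otherwise flags its cells as reusable holes.

```python
# Toy illustration (not &-Prolog's actual implementation) of a marker scheme:
# markers delimit the stack sections of and-parallel goals so that a section
# can be reclaimed when it is topmost, or flagged as a reusable hole otherwise.
class MarkedStack:
    def __init__(self):
        self.cells = []        # (owner_goal, payload) pairs
        self.markers = {}      # goal id -> start index of its section

    def begin_goal(self, goal):
        self.markers[goal] = len(self.cells)

    def push(self, goal, payload):
        self.cells.append((goal, payload))

    def end_goal(self, goal):
        start = self.markers.pop(goal)
        if all(owner == goal for owner, _ in self.cells[start:]):
            del self.cells[start:]              # topmost section: pop it
        else:
            for i in range(start, len(self.cells)):
                if self.cells[i][0] == goal:    # buried section: mark as hole
                    self.cells[i] = (None, None)

stack = MarkedStack()
stack.begin_goal("g1"); stack.push("g1", "binding-a")
stack.begin_goal("g2"); stack.push("g2", "binding-b")
stack.end_goal("g2")   # topmost, reclaimed immediately
stack.end_goal("g1")
print(stack.cells)     # []
```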

Relevance: 30.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance, and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are very well suited to modelling complex scenes featuring multimodal backgrounds, but have not been popular due to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the function's domain that minimizes the approximation error.
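The core approximation idea translates into a few lines: sample the function at a set of knots and interpolate linearly between them, which is precisely the operation GPU texture filtering units perform in hardware. The sketch below uses a uniform partition as a simple baseline, not the thesis's quasi-optimal partition.

```python
# Minimal sketch of the piecewise linear approximation idea: sample f at N knots
# and interpolate linearly between them, exactly what GPU texture filtering units
# do in hardware. The uniform partition here is the simple baseline, not the
# thesis's quasi-optimal partition.
import numpy as np

f = lambda x: np.exp(-x) * np.sin(4 * x)   # arbitrary function to approximate

for n_knots in (8, 16, 32, 64):
    knots = np.linspace(0.0, np.pi, n_knots)
    x = np.linspace(0.0, np.pi, 10_000)
    approx = np.interp(x, knots, f(knots))  # continuous piecewise linear
    print(n_knots, "knots -> max error", np.max(np.abs(f(x) - approx)))
# For smooth f the sup-norm error shrinks roughly as O(h^2) in the knot spacing h,
# so doubling the knots cuts the error by about 4x.
```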

Relevance: 30.00%

Abstract:

A detailed restriction fragment length polymorphism map was used to determine the chromosomal locations and subgenomic distributions of quantitative trait loci (QTLs) segregating in a cross between cultivars of allotetraploid (AADD) Gossypium hirsutum (“Upland” cotton) and Gossypium barbadense (“Sea Island,” “Pima,” or “Egyptian” cotton) that differ markedly in the quality and quantity of seed epidermal fibers. Most QTLs influencing fiber quality and yield are located on the “D” subgenome, derived from an ancestor that does not produce spinnable fibers. D subgenome QTLs may partly account for the fact that domestication and breeding of tetraploid cottons has resulted in fiber yield and quality levels superior to those achieved by parallel improvement of “A” genome diploid cottons. The merger of two genomes with different evolutionary histories in a common nucleus appears to offer unique avenues for phenotypic response to selection. This may partly compensate for reduction in quantitative variation associated with polyploid formation and be one basis for the prominence of polyploids among extant angiosperms. These findings impel molecular dissection of the roles of divergent subgenomes in quantitative inheritance in many other polyploids and further exploration of both “synthetic” polyploids and exotic diploid genotypes for agriculturally useful variation.

Relevance: 30.00%

Abstract:

Despite the critical role that terrestrial vegetation plays in the Earth's carbon cycle, very little is known about the potential evolutionary responses of plants to anthropogenically induced increases in concentrations of atmospheric CO2. We present experimental evidence that rising CO2 concentration may have a direct impact on the genetic composition and diversity of plant populations but is unlikely to result in selection favoring genotypes that exhibit increased productivity in a CO2-enriched atmosphere. Experimental populations of an annual plant (Abutilon theophrasti, velvetleaf) and a temperate forest tree (Betula alleghaniensis, yellow birch) displayed responses to increased CO2 that were both strongly density-dependent and genotype-specific. In competitive stands, a higher concentration of CO2 resulted in pronounced shifts in genetic composition, even though overall CO2-induced productivity enhancements were small. For the annual species, quantitative estimates of response to selection under competition were 3 times higher at the elevated CO2 level. However, genotypes that displayed the highest growth responses to CO2 when grown in the absence of competition did not have the highest fitness in competitive stands. We suggest that increased CO2 intensified interplant competition and that selection favored genotypes with a greater ability to compete for resources other than CO2. Thus, while increased CO2 may enhance rates of selection in populations of competing plants, it is unlikely to result in the evolution of increased CO2 responsiveness or to operate as an important feedback in the global carbon cycle. However, the increased intensity of selection and drift driven by rising CO2 levels may have an impact on the genetic diversity in plant populations.

Relevance: 30.00%

Abstract:

Mechanisms of speciation are not well understood, despite decades of study. Recent work has focused on how natural and sexual selection cause sexual isolation. Here, we investigate the roles of divergent natural and sexual selection in the evolution of sexual isolation between sympatric species of threespine sticklebacks. We test the importance of morphological and behavioral traits in conferring sexual isolation and examine to what extent these traits have diverged in parallel between multiple, independently evolved species pairs. We use the patterns of evolution in ecological and mating traits to infer the likely nature of selection on sexual isolation. Strong parallel evolution implicates ecologically based divergent natural and/or sexual selection, whereas arbitrary directionality implicates nonecological sexual selection or drift. In multiple pairs we find that sexual isolation arises in the same way: assortative mating on body size and asymmetric isolation due to male nuptial color. Body size and color have diverged in a strongly parallel manner, similar to ecological traits. The data implicate ecologically based divergent natural and sexual selection as engines of speciation in this group.

Relevance: 30.00%

Abstract:

Mating preferences are common in natural populations, and their divergence among populations is considered an important source of reproductive isolation during speciation. Although mechanisms for the divergence of mating preferences have received substantial theoretical treatment, complementary experimental tests are lacking. We conducted a laboratory evolution experiment, using the fruit fly Drosophila serrata, to explore the role of divergent selection between environments in the evolution of female mating preferences. Replicate populations of D. serrata were derived from a common ancestor and propagated in one of three resource environments: two novel environments and the ancestral laboratory environment. Adaptation to both novel environments involved changes in cuticular hydrocarbons, traits that predict mating success in these populations. Furthermore, female mating preferences for these cuticular hydrocarbons also diverged among populations. A component of this divergence occurred among treatment environments, accounting for at least 17.4% of the among-population divergence in linear mating preferences and 17.2% of the among-population divergence in nonlinear mating preferences. The divergence of mating preferences in correlation with environment is consistent with the classic by-product model of speciation, in which premating isolation evolves as a side effect of divergent selection adapting populations to their different environments.

Relevance: 30.00%

Abstract:

We describe a novel and potentially important tool for candidate subunit vaccine selection through in silico reverse vaccinology. A set of Bayesian networks able to make individual predictions for specific subcellular locations is implemented in three pipelines with different architectures: a parallel implementation with a confidence-level-based decision engine, and two serial implementations with a hierarchical decision structure, one initially rooted by prediction between membrane types and the other rooted by soluble-versus-membrane prediction. The parallel pipeline outperformed the serial pipeline, but took twice as long to execute. The soluble-rooted serial pipeline outperformed the membrane-rooted predictor. Assessment using genomic test sets was more equivocal: many more predictions are made by the parallel pipeline, yet the serial pipeline identifies 22 more of the 74 proteins of known location.
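A schematic of the parallel architecture's decision engine might look like the following sketch, where one per-location classifier scores every protein and the engine keeps the most confident call above a threshold; the scores and threshold are hypothetical stand-ins for the Bayesian network outputs.

```python
# Illustrative sketch of a parallel pipeline with a confidence-based decision
# engine: one per-location classifier runs on every protein, and the engine
# keeps the most confident call above a threshold. Classifier scores here are
# hypothetical stand-ins for the Bayesian network outputs.
LOCATIONS = ["cytoplasm", "inner membrane", "outer membrane", "secreted"]

def decide(scores, threshold=0.7):
    """scores: dict location -> posterior-like confidence in [0, 1]."""
    location, confidence = max(scores.items(), key=lambda kv: kv[1])
    return location if confidence >= threshold else "unknown"

protein_scores = {
    "antigen_A": {"cytoplasm": 0.10, "inner membrane": 0.15,
                  "outer membrane": 0.85, "secreted": 0.40},
    "antigen_B": {"cytoplasm": 0.55, "inner membrane": 0.50,
                  "outer membrane": 0.45, "secreted": 0.60},
}
for name, scores in protein_scores.items():
    print(name, "->", decide(scores))  # antigen_A -> outer membrane; antigen_B -> unknown
```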

Relevance: 30.00%

Abstract:

Global connectivity, for anyone, at any place, at any time, providing high-speed, high-quality, and reliable communication channels for mobile devices, is now becoming a reality. The credit mainly goes to recent technological advances in wireless communications, comprising a wide range of technologies, services, and applications designed to fulfill the particular needs of end-users in different deployment scenarios (Wi-Fi, WiMAX, and 3G/4G cellular systems). In such a heterogeneous wireless environment, one of the key ingredients for providing efficient ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient and intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and/or blocking probabilities. This research presented a novel approach to the design and implementation of a multi-criteria vertical handoff algorithm for heterogeneous wireless networks. Several parallel Fuzzy Logic Controllers were utilized in combination with different types of ranking algorithms and metric weighting schemes to implement two major modules: the first module estimated the necessity of handoff, and the second selected the best network as the target of handoff. Simulations based on different traffic classes, utilizing various types of wireless networks, were carried out on a wireless test-bed inspired by the concept of the Rudimentary Network Emulator (RUNE). Simulation results indicated that the proposed scheme provided better performance in terms of minimizing unnecessary handoffs and reducing call-dropping, call-blocking, and handoff-blocking probabilities. When subjected to Conversational traffic and compared against the RSS-based reference algorithm, the proposed scheme, utilizing the FTOPSIS ranking algorithm, was able to reduce the average outage probability of MSs moving at high speed by 17%, the new-call blocking probability by 22%, the handoff blocking probability by 16%, and the average handoff rate by 40%. The significant reduction in handoff rate gives the MS more efficient power consumption and longer battery life. These percentages indicate a higher probability of guaranteed session continuity and quality of the currently utilized service, resulting in higher user satisfaction levels.
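For the network-selection module, a crisp TOPSIS ranking can be sketched as follows (the work itself uses a fuzzy variant, FTOPSIS); the candidate networks, metrics, and weights are invented for illustration.

```python
# Sketch of a crisp TOPSIS ranking for target-network selection (the work itself
# uses a fuzzy variant, FTOPSIS). Candidate networks, metrics, and weights below
# are invented for illustration.
import numpy as np

# Rows: candidate networks; columns: [bandwidth (Mbps), RSS (dBm), cost, delay (ms)]
M = np.array([
    [11.0, -65.0, 0.8, 40.0],   # Wi-Fi
    [30.0, -75.0, 0.5, 60.0],   # WiMAX
    [ 5.0, -60.0, 1.0, 30.0],   # 3G cellular
])
weights = np.array([0.35, 0.25, 0.20, 0.20])
benefit = np.array([True, True, False, False])  # higher-is-better per column

V = weights * M / np.linalg.norm(M, axis=0)     # normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # 1 = closest to the ideal solution
print("ranking (best first):", np.argsort(-closeness))
```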

Relevance: 30.00%

Abstract:

Combining the advantages of parallel mechanisms and compliant mechanisms, a compliant parallel mechanism with two rotational DOFs (degrees of freedom) is designed to meet the requirements of a lightweight and compact pan-tilt platform. First, two commonly used design methods, direct substitution and FACT (Freedom and Constraint Topology), are applied to design the configuration of the pan-tilt system, and the similarities and differences of the two design alternatives are compared. Then, inverse kinematic analysis of the candidate mechanism is carried out using the pseudo-rigid-body model (PRBM), and the Jacobian describing its differential kinematics is derived to support dynamic analysis of the 8R compliant mechanism. In addition, the maximum stress arising within the mechanism's workspace is checked by finite element analysis. Finally, a method for determining the joint damping of the flexure hinges is presented, with the aim of exploring the effect of joint damping on actuator selection and real-time control. To the authors' knowledge, almost no existing literature addresses this issue.
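The differential-kinematics step can be illustrated generically: given any forward kinematic map from joint angles to platform pan and tilt, the Jacobian can be estimated by finite differences. The toy two-joint map below merely stands in for the PRBM model of the 8R mechanism, whose closed form is derived in the paper.

```python
# Generic sketch of the differential-kinematics step: given a forward kinematic
# map from joint angles to platform orientation (pan, tilt), the Jacobian can be
# estimated by finite differences. The toy 2-joint map below stands in for the
# PRBM model of the 8R mechanism.
import numpy as np

def forward_kinematics(q):
    """Toy map: joint angles q (rad) -> platform (pan, tilt) in rad."""
    return np.array([0.9 * q[0] + 0.1 * np.sin(q[1]),
                     0.1 * np.sin(q[0]) + 0.9 * q[1]])

def numerical_jacobian(fk, q, eps=1e-6):
    n_out, n_in = fk(q).size, q.size
    J = np.zeros((n_out, n_in))
    for j in range(n_in):
        dq = np.zeros(n_in); dq[j] = eps
        J[:, j] = (fk(q + dq) - fk(q - dq)) / (2 * eps)  # central differences
    return J

q0 = np.array([0.2, -0.1])
print(numerical_jacobian(forward_kinematics, q0))
```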

Relevance: 30.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for scaling down the problem size is to first partition the dataset into subsets and then fit the model using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication cost, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or a Bayesian variable selection method, computes the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
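A minimal sketch of the message idea, using the lasso as the per-subset selector (the thesis also allows Bayesian variable selection), is shown below; the data and tuning values are illustrative.

```python
# Minimal sketch of the message idea: lasso-based feature selection on each
# sample-space subset, a majority ('median') vote on inclusion, then per-subset
# refitting on the voted features and averaging of the coefficients.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p, k_subsets = 3000, 20, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]      # true sparse signal
y = X @ beta + rng.normal(scale=0.5, size=n)

subsets = np.array_split(rng.permutation(n), k_subsets)
inclusion = np.zeros((k_subsets, p))
for i, idx in enumerate(subsets):                     # parallelizable loop
    sel = Lasso(alpha=0.1).fit(X[idx], y[idx])
    inclusion[i] = sel.coef_ != 0

voted = np.median(inclusion, axis=0) >= 0.5           # median inclusion index
coefs = np.array([
    LinearRegression().fit(X[idx][:, voted], y[idx]).coef_
    for idx in subsets
])
print("selected features:", np.where(voted)[0])
print("averaged coefficients:", coefs.mean(axis=0).round(2))
```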

While sample space partitioning is useful for handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
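A hedged sketch of the decorrelation idea follows: whitening the rows of the design so that the transformed X has orthogonal rows weakens correlations between features held by different workers. The exact operator and any refinement steps used in the thesis may differ from this illustration.

```python
# Hedged sketch of a DECO-style decorrelation step: whiten the rows of the
# design so that X_tilde @ X_tilde.T is proportional to the identity, which
# weakens correlations between features held by different workers. The exact
# operator and ridge refinement in the thesis may differ from this sketch.
import numpy as np
from numpy.linalg import svd
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, workers = 200, 400, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p)) * 0.05  # correlated design
beta = np.zeros(p); beta[[5, 50, 300]] = [3.0, -2.0, 2.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

# Row-whitening: X_tilde = sqrt(p) * (X X^T)^{-1/2} X, same transform applied to y.
U, s, _ = svd(X @ X.T)
W = U @ np.diag(1.0 / np.sqrt(s)) @ U.T * np.sqrt(p)
X_t, y_t = W @ X, W @ y

selected = []
for cols in np.array_split(rng.permutation(p), workers):  # parallelizable loop
    fit = Lasso(alpha=0.3).fit(X_t[:, cols], y_t)
    selected.extend(cols[fit.coef_ != 0])
print("selected features:", sorted(selected))
```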

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance: 30.00%

Abstract:

When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the Internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has a strong tendency to destabilize the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that any realistic contact task simultaneously involves contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which a given joint cannot be expected to be either perfectly normal or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix appropriate to the conditions encountered by the manipulator. This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance does not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented that validate the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
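The underlying scattering/wave transformation, with a diagonal wave impedance matrix reflecting the normal-versus-tangential distinction discussed above, can be sketched as follows; the gains are illustrative rather than the paper's values.

```python
# Sketch of the scattering/wave transformation with a diagonal wave impedance
# matrix: a large impedance on the axis normal to the contact surface and a
# small one on the tangential (quasi-free-motion) axes, reflecting the
# observation in the abstract. Gains are illustrative, not the paper's values.
import numpy as np

def waves_from_power_vars(x_dot, force, B):
    """Scattering transform: forward/backward waves from velocity and force."""
    S = np.linalg.inv(np.sqrt(2.0 * B))        # (2B)^{-1/2}, valid for diagonal B
    return S @ (B @ x_dot + force), S @ (B @ x_dot - force)

def power_vars_from_waves(u, v, B):
    """Inverse transform: recover velocity and force from the wave pair."""
    S = np.sqrt(B / 2.0)                        # (B/2)^{1/2}, elementwise for diagonal B
    return np.linalg.inv(B) @ (S @ (u + v)), S @ (u - v)

# Normal direction (index 0): stiff contact -> high impedance; tangential -> low.
B = np.diag([200.0, 10.0, 10.0])
x_dot = np.array([0.0, 0.05, 0.02])             # sliding along the surface
force = np.array([15.0, 0.5, 0.2])              # mostly normal contact force
u, v = waves_from_power_vars(x_dot, force, B)
print("recovered:", power_vars_from_waves(u, v, B))  # matches (x_dot, force)
```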