951 results for Computer simulation, Colloidal systems, Nucleation


Relevance: 100.00%

Publisher:

Abstract:

Background: Microtine species in Fennoscandia display a distinct north-south gradient from regular cycles to stable populations. The gradient has often been attributed to changes in the interactions between microtines and their predators. Although the spatial structure of the environment is known to influence predator-prey dynamics in a wide range of species, it has scarcely been considered in relation to the Fennoscandian gradient. Furthermore, the length of the microtine breeding season also displays a north-south gradient, yet little consideration has been given to its role in shaping or generating population cycles. Because these factors covary along the gradient, it is difficult to distinguish their effects experimentally in the field. Here we attempt that distinction using realistic agent-based modelling. Methodology/Principal Findings: Using a spatially explicit computer simulation model based on behavioural and ecological data from the field vole (Microtus agrestis), we generated repeated time series of vole densities and measured their mean population size and amplitude. These time series were then subjected to statistical autoregressive modelling to investigate how vole population dynamics respond to more specialised predators, to an altered breeding season, and to an increased level of habitat fragmentation. We found that both habitat fragmentation and the presence of specialist predators are necessary for population cycles to occur. Habitat fragmentation and predator assembly jointly determined cycle length and amplitude, whereas the length of the vole breeding season had little impact on the oscillations. Significance: Our results agree well with the experimental work from Fennoscandia, but they allow a distinction of causation that is hard to unravel in field experiments. We hope they will help explain the cycle gradients observed in other areas.
Our results clearly demonstrate the importance of landscape fragmentation for population cycling and we recommend that the degree of fragmentation be more fully considered in future analyses of vole dynamics.
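The autoregressive step can be sketched in a few lines. The following is a generic second-order (AR(2)) least-squares fit applied to a synthetic log-density series; the coefficients are chosen for illustration and are not values from the study.

```python
import numpy as np

def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + noise."""
    X = np.column_stack([x[1:-1], x[:-2]])   # lag-1 and lag-2 terms
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
    return a1, a2

# Synthetic log-density series with known coefficients; a strongly negative
# a2 (delayed density dependence) is the classic signature of cyclic dynamics.
rng = np.random.default_rng(0)
true_a1, true_a2 = 0.8, -0.6
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = true_a1 * x[t - 1] + true_a2 * x[t - 2] + rng.normal(0, 0.1)

a1, a2 = fit_ar2(x)   # recovers roughly (0.8, -0.6)
```

In this framework, the simulated experiments amount to asking how the fitted (a1, a2) pair moves as fragmentation, predator assembly, or season length is varied.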

Relevance: 100.00%

Publisher:

Abstract:

Small propagules like pollen or fungal spores may be dispersed by the wind over distances of hundreds or thousands of kilometres, even though the median dispersal distance may be only a few metres. Such long-distance dispersal is a stochastic event which may be exceptionally important in shaping a population. It has been found repeatedly in field studies that subpopulations of wind-dispersed fungal pathogens virulent on cultivars with newly introduced, effective resistance genes are dominated by one or very few genotypes. The role of propagule dispersal distributions with distinct behaviour at long distances in generating this characteristic population structure was studied by computer simulation of dispersal of clonal organisms in a heterogeneous environment with fields of unselective and selective hosts. Power-law distributions generated founder events in which new, virulent genotypes rapidly colonized fields of resistant crop varieties and subsequently dominated the pathogen population on both selective and unselective varieties, in agreement with data on rust and powdery mildew fungi. An exponential dispersal function, with extremely rare dispersal over long distances, resulted in slower colonization of resistant varieties by virulent pathogens or even no colonization if the distance between susceptible source and resistant target fields was sufficiently large. The founder events resulting from long-distance dispersal were highly stochastic and exact quantitative prediction of genotype frequencies will therefore always be difficult.
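The qualitative contrast between the two kernels is easy to reproduce numerically. In the sketch below both kernels are given the same 5 m median; all parameter values are arbitrary choices, not those of the simulation study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, median = 1_000_000, 5.0   # metres; both kernels share this median

# Exponential kernel: P(D > d) = exp(-d / scale); scale set so the median is 5 m.
exp_d = rng.exponential(median / np.log(2), n)

# Power-law (Pareto) kernel: P(D > d) = (d_min / d)**alpha for d >= d_min,
# with d_min set so the median is also 5 m.
alpha = 1.5
d_min = median * 0.5 ** (1 / alpha)
pow_d = d_min * (1 - rng.random(n)) ** (-1 / alpha)   # inverse-CDF sampling

# Fat tail: long-distance events (> 1 km) occur under the power law
# but are essentially absent under the exponential kernel.
far_pow = (pow_d > 1000).mean()
far_exp = (exp_d > 1000).mean()
```

Despite identical medians, only the power-law kernel produces the rare kilometre-scale jumps that seed founder events in distant resistant fields.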

Relevance: 100.00%

Publisher:

Abstract:

The feasibility of halving greenhouse gas emissions from hotels by 2030 has been studied as part of the Carbon Vision Buildings Programme. The aim of that programme was to study ways of reducing emissions from the existing stock, because it will be responsible for the majority of building emissions over the next few decades. The work was carried out using detailed computer simulation with the ESP-r tool. Two hotels were studied, one older and converted and the other newer and purpose-built, with the aim of representing the most common UK hotel types. The effects were studied of interventions expected to be available in 2030, including fabric improvements, HVAC changes, lighting and appliance improvements, and renewable energy generation. The main finding was that it is technically feasible to reduce emissions by 50% without compromising guest comfort. Ranking the interventions was problematic for several reasons, including their interdependence and the impact of large heating-load reductions on boiler sizing.

Relevance: 100.00%

Publisher:

Abstract:

Three topics are discussed. First, an issue in the epistemology of computer simulation: that of the chess endgame 'becoming' what computer-generated data says it is. Second, the endgames of the longest known games are discussed, and the concept of a Bionic Game is defined. Last, the set of record-depth positions published by Bourzutschky and Konoval is evaluated by the new MVL tables in Moscow, alongside the deepest known mate of 549 moves.

Relevance: 100.00%

Publisher:

Abstract:

The incorporation of cobalt in mixed metal carbonates is a possible route to the immobilization of this toxic element in the environment. However, the thermodynamics of (Ca,Co)CO3 solid solutions are still unclear due to conflicting data from experiment and from the observation of natural occurrences. We report here the results of a computer simulation study of the mixing of calcite (CaCO3) and spherocobaltite (CoCO3), using density functional theory calculations. Our simulations suggest that previously proposed thermodynamic models, based only on the range of observed compositions, significantly overestimate the mutual solubility of the two solids and therefore underestimate the extent of the miscibility gap under ambient conditions. The enthalpy of mixing of the disordered solid solution is strongly positive and moderately asymmetric: calcium incorporation in spherocobaltite is more endothermic than cobalt incorporation in calcite. Ordering of the impurities in (0001) layers is energetically favourable with respect to the disordered solid solution at low temperatures and intermediate compositions, but the ordered phase is still unstable to demixing. We calculate the solvus and spinodal lines in the phase diagram using a sub-regular solution model, and conclude that many Ca1-xCoxCO3 mineral solid solutions (with observed compositions of up to x=0.027, and above x=0.93) are metastable with respect to phase separation. We also calculate solid/aqueous distribution coefficients to evaluate the effect of the strong non-ideality of mixing on the equilibrium with aqueous solution, showing that the thermodynamically driven incorporation of cobalt in calcite (and of calcium in spherocobaltite) is always very low, regardless of the Co/Ca ratio of the aqueous environment.
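A sub-regular (two-parameter Margules) construction of the kind used for the solvus and spinodal can be sketched as follows; the two interaction parameters below are hypothetical placeholders, not the DFT-derived values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, T, W_CaCo=20e3, W_CoCa=14e3):
    """Gibbs free energy of mixing (J/mol) for Ca(1-x)Co(x)CO3 under a
    sub-regular model; the two Margules parameters are hypothetical."""
    h = x * (1 - x) * (W_CaCo * x + W_CoCa * (1 - x))   # asymmetric enthalpy
    s = -R * (x * np.log(x) + (1 - x) * np.log(1 - x))  # ideal config. entropy
    return h - T * s

x = np.linspace(1e-4, 1 - 1e-4, 2001)
g = g_mix(x, T=298.15)

# Spinodal region: where the curvature of G(x) is negative, the disordered
# solid solution is unstable to demixing.
curv = np.gradient(np.gradient(g, x), x)
spinodal = x[curv < 0]
```

With W_CaCo > W_CoCa the enthalpy of mixing is larger on the Co-rich side, mirroring the asymmetry reported in the abstract, and the negative-curvature window marks the compositions that are unstable at ambient temperature.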

Relevance: 100.00%

Publisher:

Abstract:

The near-neutral model of B chromosome evolution predicts that the invasion of a new population should last some tens of generations, but the details of how it proceeds in real populations are mostly unknown. Trying to fill this gap, we analyze here a natural population of the grasshopper Eyprepocnemis plorans at three time points during the last 35 years. Our results show that B chromosome frequency increased significantly during this period, and that a cline observed in 1992 had disappeared by 2012, once B frequency reached an upper limit in all sites sampled. This indicates that, during B chromosome invasion, at the microgeographic scale, transient clines for B frequency are formed at the invasion front. Computer simulation experiments showed that the pattern of change observed for genotypic frequencies is consistent with the existence of B chromosome drive through females and selection against individuals with a high number of B chromosomes.
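A minimal sketch of the two ingredients identified by the simulations, female drive plus selection against high-B individuals, might look like this; every parameter value is illustrative, not an estimate for E. plorans.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(generations=200, N=2000, k_drive=0.7, s=0.4):
    """Toy model: individuals carry 0, 1 or 2 B chromosomes; mothers transmit
    each B with probability k_drive (> 0.5 means drive), fathers Mendelian
    (0.5); fitness is reduced by s for carriers of 2 Bs."""
    b = np.zeros(N, dtype=int)
    b[:N // 10] = 1                      # start with 10% single-B carriers
    freqs = []
    for _ in range(generations):
        w = np.where(b >= 2, 1.0 - s, 1.0)
        parents = rng.choice(N, size=(N, 2), p=w / w.sum())
        mom, dad = b[parents[:, 0]], b[parents[:, 1]]
        b = np.minimum(rng.binomial(mom, k_drive) + rng.binomial(dad, 0.5), 2)
        freqs.append((b > 0).mean())     # frequency of B carriers
    return np.array(freqs)

freqs = simulate()   # drive pushes B frequency up until selection caps it
```

The rise-then-plateau trajectory this produces is the same qualitative pattern as the observed frequency increase and the disappearance of the cline once the upper limit is reached.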

Relevance: 100.00%

Publisher:

Abstract:

Developed countries show an even distribution of published papers across the seventeen model organisms, whereas developing countries show biased preferences for a few model organisms associated with endemic human diseases. A variant of the Hirsch index, which we call the mean (mo)h-index ("model organism h-index"), shows an exponential relationship with the number of papers published in each country on the selected model organisms. Developing countries cluster together with low mean (mo)h-indexes, even those with a high number of publications. The growth curves of publications on the recent model Caenorhabditis elegans in developed countries show different shapes. We also analyzed the growth curves of indexed publications originating from developing countries; Brazil and South Korea were selected for this comparison. The most prevalent model organisms in those countries show different growth curves when compared to a global analysis, reflecting the size and composition of their research communities.
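Assuming the mean (mo)h-index is, as the name suggests, a per-organism Hirsch index averaged over organisms (the paper's exact definition is not reproduced here), it could be sketched as:

```python
def h_index(citations):
    """Classic Hirsch index: the largest h such that h papers each have
    at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def mean_mo_h(citations_by_organism):
    """Hypothetical reading of the mean (mo)h-index: an h-index computed
    per model organism for a country's papers, then averaged."""
    values = [h_index(cs) for cs in citations_by_organism.values()]
    return sum(values) / len(values)

# Toy data: citation counts per paper, grouped by model organism.
example = {"C. elegans": [10, 8, 5, 4, 3], "D. melanogaster": [6, 2, 1]}
```

For the toy data, `h_index` gives 4 and 2 for the two organisms, so `mean_mo_h(example)` is 3.0.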

Relevance: 100.00%

Publisher:

Abstract:

Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as the error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to appropriately select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
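As a concrete instance of such a graphical method, the points of an ROC curve, one of the most common of these multidimensional summaries, can be computed as follows (a minimal sketch that ignores score ties):

```python
def roc_points(scores, labels):
    """(FPR, TPR) points obtained by sweeping a threshold down the scores:
    the raw material of an ROC plot."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    P = sum(labels)              # number of positives
    N = len(labels) - P          # number of negatives
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / N, tp / P))
    return points

pts = roc_points([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 0])
```

Each point shows one trade-off between false-positive and true-positive rates; plotting the whole set, rather than collapsing it to a single scalar, is exactly the advantage the survey discusses.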

Relevance: 100.00%

Publisher:

Abstract:

Searching in a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its lowering cost also motivate using memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
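The core idea of slicing a metric space into disjoint subspaces around a pivot can be illustrated generically; the sketch below is not the Onion-tree's own partitioning method, and the metric and radii are arbitrary.

```python
def ring_partition(points, pivot, radii, dist):
    """Slice a metric space into disjoint regions around a pivot: region i
    holds the points whose distance to the pivot is below radii[i] (and not
    in an earlier region), and the last region holds everything beyond
    radii[-1].  A generic illustration of pivot-based slicing."""
    regions = [[] for _ in range(len(radii) + 1)]
    for p in points:
        d = dist(p, pivot)
        for i, r in enumerate(radii):
            if d < r:
                regions[i].append(p)
                break
        else:
            regions[-1].append(p)
    return regions

dist = lambda a, b: abs(a - b)   # 1-D metric, for illustration only
regions = ring_partition([1, 2, 5, 9, 14], pivot=0, radii=[3, 10], dist=dist)
```

Because the regions are disjoint, a range query only needs to visit the rings whose distance interval intersects the query ball, which is what makes this family of partitions attractive for fast in-memory indexing.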

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we propose a content selection framework that improves the users' experience when they are enriching or authoring pieces of news. This framework combines a variety of techniques to retrieve semantically related videos, based on a set of criteria which are specified automatically depending on the media's constraints. The combination of different content selection mechanisms can improve the quality of the retrieved scenes, because each technique's limitations are minimized by other techniques' strengths. We present an evaluation based on a number of experiments, which show that the retrieved results are better when all criteria are used at the same time.

Relevance: 100.00%

Publisher:

Abstract:

In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N^2) only if the complete set is simultaneously displayed.

Relevance: 100.00%

Publisher:

Abstract:

While watching TV, viewers use the remote control to turn the TV set on and off, change the channel and volume, adjust the image and audio settings, etc. Worldwide, research institutes collect audience measurement information, which can also be used to provide personalization and recommendation services, among others. Interactive digital TV offers viewers the opportunity to interact with interactive applications associated with the broadcast program. The interactive TV infrastructure supports the capture of the user-TV interaction at fine-grained levels. In this paper we propose capturing all the user interaction with a TV remote control, including short-term and instant interactions: we argue that the corresponding captured information can be used to create content pervasively and automatically, and that this content can be used by a wide variety of services, such as audience measurement, personalization and recommendation services. The capture of fine-grained data about instant and interval-based interactions also allows the underlying infrastructure to offer services at the same scale, such as annotation services and adaptive applications. We present the main modules of an infrastructure for TV-based services, along with a detailed example of a document used to record the user-remote control interaction. Our approach is evaluated by means of a proof-of-concept prototype which uses the Brazilian Digital TV System and the Ginga-NCL middleware.
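One plausible shape for a single captured event is sketched below; the field names are hypothetical, and the actual document format used with the Ginga-NCL middleware is richer than this.

```python
import json

# Hypothetical shape of one captured remote-control event; the real
# recording document described in the paper is not reproduced here.
event = {
    "viewer_id": "anon-042",
    "timestamp": "2011-05-04T20:31:07",
    "key": "VOLUME_UP",
    "kind": "instant",        # "interval" would mark a press-and-hold
    "channel": 13,
    "application": None,      # id of the interactive app in focus, if any
}
record = json.dumps(event)    # one line of an append-only interaction log
```

Appending one such record per key press is what enables downstream services (audience measurement, recommendation, annotation) to work at the same fine-grained scale as the capture.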

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a new technique and two algorithms to bulk-load data into multi-way dynamic metric access methods, based on the covering radius of the representative elements employed to organize data in hierarchical data structures. The proposed algorithms are sample-based, and they always build a valid, height-balanced tree. We compare the proposed algorithms with existing ones, showing their behaviour when bulk-loading data into the Slim-tree metric access method. After identifying the worst case of our first algorithm, we describe adequate counteractions, incorporated in an elegant way into the second algorithm. Experiments performed to evaluate their performance show that our bulk-loading methods build trees faster than sequential insertion, and that they also significantly improve search performance. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general has obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, then the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
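One simple instance of the cluster-features-then-pick-representatives idea can be sketched as follows; the greedy correlation threshold below is an arbitrary stand-in for the paper's automatic estimation of the number of clusters.

```python
import numpy as np

def select_by_correlation_clusters(X, threshold=0.9):
    """Greedy sketch: features whose absolute pairwise correlation exceeds
    `threshold` are grouped together, and the first feature of each group is
    kept as its representative.  Illustrative only; the paper's filter
    estimates the number of clusters from the data instead."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = X.shape[1]
    selected, assigned = [], set()
    for j in range(n):
        if j in assigned:
            continue
        cluster = [k for k in range(j, n)
                   if k not in assigned and corr[j, k] > threshold]
        assigned.update(cluster)
        selected.append(j)
    return selected

rng = np.random.default_rng(3)
a, b = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200), b])  # 0 and 1 near-duplicates
selected = select_by_correlation_clusters(X)
```

On the toy matrix, features 0 and 1 are nearly identical and collapse into one cluster, so only features 0 and 2 survive: the redundancy-removal effect that makes this family of filters computationally attractive.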