897 results for Genetic Algorithms, Multi-Objective, Pareto Ranking, Sum of Ranks, Hub Location Problem, Weighted Sum
Abstract:
Indirect and direct models of sexual selection make different predictions regarding the quantitative genetic relationships between sexual ornaments and fitness. Indirect models predict that ornaments should have a high heritability and that strong positive genetic covariance should exist between fitness and the ornament. Direct models, on the other hand, make no such assumptions about the level of genetic variance in fitness and the ornament, and are therefore likely to be more important when environmental sources of variation are large. Here we test these predictions in a wild population of the blue tit (Parus caeruleus), a species in which plumage coloration has been shown to be under sexual selection. Using 3 years of cross-fostering data from over 250 breeding attempts, we partition the covariance between parental coloration and aspects of nestling fitness into genetic and environmental components. Contrary to indirect models of sexual selection, but in agreement with direct models, we show that variation in coloration is only weakly heritable (h^2 < 0.11), and that two components of offspring fitness (nestling size and fledgling recruitment) are strongly dependent on parental effects rather than genetic effects. Furthermore, there was no evidence of significant positive genetic covariation between parental colour and offspring traits. Contrary to direct benefit models, however, we find little evidence that variation in colour reliably indicates the level of parental care provided by either males or females. Taken together, these results indicate that the assumptions of indirect models of sexual selection are not supported by the genetic basis of the traits reported here.
Abstract:
Mitochondrial DNA (mtDNA) is one of the most popular population genetic markers. Its relevance as an indicator of population size and history has recently been questioned by several large-scale studies in animals reporting evidence for recurrent adaptive evolution, at least in invertebrates. Here we focus on mammals, a more restricted taxonomic group for which the issue of mtDNA near-neutrality is crucial. By analyzing the distribution of mtDNA diversity across species and relating it to allozyme diversity, life-history traits, and taxonomy, we show that (i) mtDNA in mammals does not reject the nearly neutral model; (ii) mtDNA diversity, however, is unrelated to any of the 14 life-history and ecological variables that we analyzed, including body mass, geographic range, and World Conservation Union (IUCN) categorization; (iii) mtDNA diversity is highly variable between mammalian orders and families; (iv) this taxonomic effect is most likely explained by variations of mutation rate between lineages. These results are indicative of a strong stochasticity of effective population size in mammalian species. They suggest that, even in the absence of selection, mtDNA genetic diversity is essentially unpredictable from species biology, and probably uncorrelated with species abundance.
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings and led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising means of accommodating these complications, as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.
Abstract:
This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to give notice of unusual traffic and in that way provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with state-of-the-art solutions, which demonstrates the potential of the proposed method.
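The abstract above does not give implementation details, but the general idea of evolving a clustering with a genetic algorithm and flagging points far from the evolved centroids as anomalies can be sketched as follows. This is a minimal illustration on synthetic 2-D data, not the authors' method: the mutation-only GA, the fitness function, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(centroids, X):
    # Higher is better: negative mean distance of each point to its nearest centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def evolve(X, k=2, pop_size=20, gens=40, sigma=0.3):
    # Each individual encodes k candidate cluster centroids
    dim = X.shape[1]
    pop = [rng.normal(size=(k, dim)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, X), reverse=True)
        elite = pop[: pop_size // 2]
        # Refill the population by mutating the elite (mutation-only GA sketch)
        pop = elite + [e + rng.normal(scale=sigma, size=e.shape) for e in elite]
    return max(pop, key=lambda c: fitness(c, X))

def anomaly_scores(centroids, X):
    # Distance to the nearest evolved centroid: large => unusual traffic
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1)

# Synthetic stand-in for traffic features: two dense "normal" clusters
# plus a single far-away outlier (the KDD99 features are not used here)
normal = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
X = np.vstack([normal, [[10.0, 10.0]]])
best = evolve(X)
scores = anomaly_scores(best, X)
print(int(scores.argmax()))
```

Because no labels are used anywhere, the procedure is unsupervised in the sense described in the abstract; thresholding the scores would turn it into an alarm.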
Classification of lactose and mandelic acid THz spectra using subspace and wavelet-packet algorithms
Abstract:
This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures to generate the data sets that are presented to the classifier for both learning and validation purposes. This gradual degradation of interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal-processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample spectra with the background spectra, (b) a subspace identification algorithm, and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1-1.0 THz using the above algorithms.
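Algorithm (a) above, the standard ratioing evaluation, can be sketched in a few lines: transform the sample and background transients to the frequency domain and divide the spectra. The pulse shapes, sampling step, and noise level below are invented for illustration; only the overall procedure (noise degradation, spectral ratio, band restriction) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 1024, 1e-13                      # 1024 samples at a hypothetical 0.1 ps step
t = np.arange(n) * dt

# Hypothetical THz transients: a reference (background) pulse and a sample
# pulse that is attenuated and delayed by transmission through the material
background = np.exp(-((t - 20e-12) / 1e-12) ** 2) * np.cos(2 * np.pi * 1e12 * t)
sample = 0.5 * np.exp(-((t - 25e-12) / 1e-12) ** 2) * np.cos(2 * np.pi * 1e12 * t)

# Degrade the signature with zero-mean white Gaussian noise, as in the study
sample_noisy = sample + rng.normal(0.0, 0.01, n)

# (a) standard evaluation: complex insertion loss as the ratio of the
# sample spectrum to the background spectrum
freqs = np.fft.rfftfreq(n, dt)
H = np.fft.rfft(sample_noisy) / np.fft.rfft(background)

# Restrict to the 0.1-1.0 THz band used for discrimination
band = (freqs >= 0.1e12) & (freqs <= 1.0e12)
magnitude_db = 20 * np.log10(np.abs(H[band]))
print(magnitude_db.shape)
```

The band-limited magnitude (and, in the complex ratio `H`, the phase) would then be the feature vector handed to the classifier.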
Abstract:
In this work we study the computational complexity of a class of grid Monte Carlo algorithms for integral equations. The idea of the algorithms consists in approximating the integral equation by a system of algebraic equations. The Markov chain iterative Monte Carlo method is then used to solve the system. The assumption here is that the corresponding Neumann series for the iteration matrix does not necessarily converge, or converges slowly. We use a special technique to accelerate the convergence. An estimate of the computational complexity of the Monte Carlo algorithm using the considered approach is obtained. This estimate is compared with the corresponding complexity of the grid-free Monte Carlo algorithm, and the conditions under which the class of grid Monte Carlo algorithms is more efficient are given.
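The core step described above, solving the discretized system x = Ax + b by random walks, is the classical von Neumann-Ulam scheme, which can be sketched as below. The tiny 2x2 system is hypothetical, and the spectral radius of A is kept below 1 so the plain Neumann series converges; the article's contribution is precisely an acceleration technique for the slowly or non-convergent case, which this sketch does not implement.

```python
import random

random.seed(2)

# Small hypothetical system x = A x + b with spectral radius of A below 1,
# so the Neumann series x = b + A b + A^2 b + ... converges
A = [[0.2, 0.1],
     [0.0, 0.3]]
b = [1.0, 2.0]

def mc_component(i, n_walks=50_000, max_steps=50):
    """Estimate x[i] by the von Neumann-Ulam random-walk scheme: walks move
    with uniform transition probability 1/n and carry a weight multiplied
    by A[state][next] / (1/n) at every step."""
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, 0.0
        for _ in range(max_steps):
            score += weight * b[state]
            nxt = random.randrange(n)
            weight *= A[state][nxt] * n
            state = nxt
            if abs(weight) < 1e-12:       # walk contributes nothing further
                break
        total += score
    return total / n_walks

x0 = mc_component(0)
# Exact solution of (I - A) x = b gives x[0] = 45/28 for comparison
print(round(x0, 2))
```

The expected score of one walk equals the Neumann series for component i, so the average over many walks is an unbiased estimator of x[i]; error decreases as O(1/sqrt(n_walks)).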
Abstract:
The recursive least-squares algorithm with a forgetting factor has been extensively applied and studied for the on-line parameter estimation of linear dynamic systems. This paper explores the use of genetic algorithms to improve the performance of the recursive least-squares algorithm in the parameter estimation of time-varying systems. Simulation results show that the hybrid recursive algorithm (GARLS), combining recursive least-squares with genetic algorithms, can achieve better results than the standard recursive least-squares algorithm using only a forgetting factor.
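The baseline that GARLS builds on, recursive least-squares with a forgetting factor, can be sketched as follows. This is the standard textbook RLS recursion applied to an invented time-varying system, not the paper's hybrid algorithm; in GARLS a genetic algorithm would additionally act on the estimator, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor lam < 1,
    which discounts old data so the estimator can track drifting parameters."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    e = y - (phi.T @ theta).item()                   # a priori prediction error
    theta = theta + k * e
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Hypothetical time-varying system y = theta_true . u + noise, with an
# abrupt parameter change halfway through the run
theta = np.zeros((2, 1))
P = 100.0 * np.eye(2)                                # large P: uninformative start
true = np.array([1.0, -0.5])
for step in range(400):
    if step == 200:
        true = np.array([2.0, 0.5])                  # the system changes
    u = rng.normal(size=2)
    y = true @ u + rng.normal(scale=0.01)
    theta, P = rls_step(theta, P, u, y)
print(theta.ravel().round(2))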
Abstract:
A self-tuning proportional-integral-derivative (PID) control scheme based on genetic algorithms (GAs) is proposed and applied to the control of a real industrial plant. This paper explores the improvement of the parameter estimator, an essential part of an adaptive controller, through the hybridization of recursive least-squares algorithms with GAs, and the possibility of applying GAs to the control of industrial processes. Both the simulation results and the experiments on a real plant show that the proposed scheme can be applied effectively.
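A stripped-down version of the GA-tuned PID idea can be illustrated by evolving the three gains against a simulated step response. The first-order plant, cost function, and GA operators below are all assumptions for the sketch; the paper's scheme instead tunes the controller through an estimated model of the real plant.

```python
import random

random.seed(4)

def step_response_cost(gains, n_steps=200, dt=0.05):
    """Integrated absolute tracking error of a PID loop around a
    hypothetical first-order plant dy/dt = -y + u following a unit step."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n_steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                    # explicit Euler plant step
        cost += abs(err) * dt
        if abs(y) > 1e6:                      # unstable gains: penalize and stop
            return 1e9
    return cost

def ga_tune(pop_size=30, gens=30):
    # Individuals are (kp, ki, kd) triples; elitist GA with blend crossover
    pop = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=step_response_cost)
        elite = pop[: pop_size // 3]
        while len(elite) < pop_size:
            a, b = random.sample(elite[: pop_size // 3], 2)
            elite.append([(x + z) / 2 + random.gauss(0.0, 0.3)
                          for x, z in zip(a, b)])
        pop = elite
    return min(pop, key=step_response_cost)

best = ga_tune()
print([round(g, 2) for g in best], round(step_response_cost(best), 3))
```

Unstable gain combinations simply receive a huge cost and are bred out, which is one reason GAs are attractive for this kind of non-smooth tuning landscape.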
Abstract:
The method of entropy has been useful in evaluating inconsistency in human judgments. This paper applies an entropy-based decision support system, called e-FDSS, to the solution of multicriterion risk and decision analysis in projects of construction small and medium-sized enterprises (SMEs). It is optimized and solved by fuzzy logic, entropy, and genetic algorithms. A case study demonstrated the use of entropy in e-FDSS for analyzing multiple risk criteria in the predevelopment stage of SME projects. Survey data on the degree of impact of selected project risk criteria on different projects were input into the system in order to evaluate the preidentified project risks in an impartial environment. The results showed that, without taking into account the amount of uncertainty embedded in the evaluation process, all decision vectors are indeed full of bias; the deviations of the decisions are finally quantified, providing a more objective decision and risk assessment profile to project stakeholders in order to search for and screen the most profitable projects.
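The entropy weighting idea referred to above, that criteria whose scores vary more across alternatives carry more information and should weigh more, can be sketched with Shannon entropy. The scores below are invented placeholders, not survey data from the study, and this is only the generic entropy-weight calculation, not the full e-FDSS system.

```python
import math

# Hypothetical impact scores of 3 risk criteria over 4 projects
scores = [
    [0.8, 0.2, 0.5, 0.9],   # criterion 1
    [0.4, 0.4, 0.5, 0.4],   # criterion 2 (nearly uniform -> little information)
    [0.9, 0.1, 0.1, 0.1],   # criterion 3 (highly dispersed -> most information)
]

def entropy_weights(scores):
    """Shannon-entropy weighting: normalize each criterion's scores to a
    distribution, compute its entropy, and weight by 1 - entropy."""
    n = len(scores[0])
    entropies = []
    for row in scores:
        total = sum(row)
        p = [v / total for v in row]
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        entropies.append(h)
    d = [1.0 - h for h in entropies]     # degree of divergence per criterion
    return [di / sum(d) for di in d]

w = entropy_weights(scores)
print([round(x, 3) for x in w])
```

Criterion 2, being nearly uniform across projects, receives the smallest weight, while the highly dispersed criterion 3 receives the largest.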
Abstract:
In financial decision-making, a number of mathematical models have been developed for financial management in construction. However, optimizing both qualitative and quantitative factors, together with the semi-structured nature of construction finance optimization problems, is a key challenge in solving construction finance decisions. The selection of funding schemes by a modified construction loan acquisition model is solved by an adaptive genetic algorithm (AGA) approach. The basic objectives of the model are to optimize the loan and to minimize the interest payments for all projects. Multiple projects undertaken by a medium-sized construction firm in Hong Kong were used as a real case study to demonstrate the application of the model to borrowing decision problems. A compromise monthly borrowing schedule was finally achieved. The results indicate that the Small and Medium Enterprise (SME) Loan Guarantee Scheme (SGS) was first identified as the source of external financing. Sources of funding can then be selected to avoid the possibility of financial problems in the firm, by classifying qualitative factors into external, interactive and internal types and taking additional qualitative factors, including sovereignty, credit ability and networking, into consideration. Thus a more accurate, objective and reliable borrowing decision can be provided to the decision-maker for analysing the financial options.
Abstract:
Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution finally focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
PURPOSE: Multi-species probiotic preparations have been suggested as having a wide spectrum of application, although few studies have compared their efficacy with that of individual component strains at equal concentrations. We therefore tested the ability of 4 single probiotics and 4 probiotic mixtures to inhibit the urinary tract pathogens Escherichia coli NCTC 9001 and Enterococcus faecalis NCTC 00775. METHODS: We used an agar spot test to assess the ability of viable cells to inhibit pathogens, while a broth inhibition assay was used to assess inhibition by cell-free probiotic supernatants in both pH-neutralised and non-neutralised forms. RESULTS: In the agar spot test, all probiotic treatments showed inhibition; L. acidophilus was the most inhibitory single strain against E. faecalis, and L. fermentum the most inhibitory against E. coli. A commercially available mixture of 14 strains (Bio-Kult(®)) was the most effective mixture against E. faecalis, and the 3-lactobacillus mixture the most inhibitory against E. coli. Mixtures were not significantly more inhibitory than single strains. In the broth inhibition assays, all probiotic supernatants inhibited both pathogens when pH was not controlled, with only 2 treatments causing inhibition at a neutral pH. CONCLUSIONS: Both viable cells of probiotics and supernatants of probiotic cultures were able to inhibit the growth of two urinary tract pathogens. Probiotic mixtures prevented the growth of urinary tract pathogens but were not significantly more inhibitory than single strains. Probiotics appear to produce metabolites that are inhibitory towards urinary tract pathogens. Probiotics display potential to reduce the incidence of urinary tract infections via inhibition of colonisation.
Abstract:
This study puts forward a method to model and simulate the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, the message object was defined, and the model's operating mechanism for autonomous activities, together with its coordination mechanism, was also designed. In addition, an ontology library, a norm library, and related components were introduced using semiotic methods and theory to extend the system modelling approach. Swarm was used to develop the multi-agent based simulation system, which is helpful for guiding hospitals in improving their organization and management, optimizing working procedures, improving the quality of medical care, and reducing medical costs.