975 results for solution set mapping


Relevance: 30.00%

Abstract:

This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e., the actual solutions are found by an external decoder function. The genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it will be shown that results can be further improved by adding another indirect optimisation layer. The decoder does not directly seek out low-cost solutions but instead aims for good exploitable solutions, which are then post-optimised by another hill-climbing algorithm. Although seemingly more complicated, we will show that this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches on the same data instances.
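
To make the indirect scheme concrete, here is a minimal sketch of one plausible permutation decoder; the greedy rule, the pruning step and the toy instance are illustrative assumptions, and the paper's decoder also receives further parameters that are omitted here:

```python
import random

def decode(permutation, costs, covers, n_rows):
    """Greedily build a cover by taking columns in the GA-supplied order,
    then prune columns made redundant by later choices."""
    covered, chosen = set(), []
    for col in permutation:
        if not covers[col] <= covered:          # column covers something new
            chosen.append(col)
            covered |= covers[col]
        if len(covered) == n_rows:
            break
    for col in sorted(chosen, key=lambda c: -costs[c]):   # costliest first
        rest = set().union(*(covers[c] for c in chosen if c != col)) \
            if len(chosen) > 1 else set()
        if rest == covered:                     # still a full cover without it
            chosen.remove(col)
    return chosen, sum(costs[c] for c in chosen)

# Toy instance: 4 rows, 4 columns; the GA would evolve the permutations
costs = [3, 2, 4, 1]
covers = [{0, 1}, {1, 2}, {0, 2, 3}, {3}]
print(decode(random.sample(range(4), 4), costs, covers, 4))
```

The GA then only needs to evolve good orderings; all feasibility handling lives in the decoder.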

Relevance: 30.00%

Abstract:

Cauliflower (Brassica oleracea var. botrytis) is a vernalization-responsive crop, and high ambient temperatures delay harvest time. Elucidating the genetic regulation of floral transition is therefore highly interesting for precise harvest scheduling and for ensuring stable market supply. This study aims at the genetic dissection of temperature-dependent curd induction in cauliflower by genome-wide association studies and gene expression analysis. To assess temperature-dependent curd induction, two greenhouse trials under distinct temperature regimes were conducted on a diversity panel consisting of 111 cauliflower commercial parent lines, genotyped with 14,385 SNPs. Broad phenotypic variation and high heritability (0.93) were observed for temperature-related curd induction within the cauliflower population. GWA mapping identified a total of 18 QTL localized on chromosomes O1, O2, O3, O4, O6, O8, and O9 for curding time under the two distinct temperature regimes. Among those, several QTL are localized within regions of promising candidate flowering genes. Inferring population structure and genetic relatedness among the diversity set assigned three main genetic clusters. Linkage disequilibrium (LD) patterns showed a global LD extent of r² = 0.06 and a maximum physical distance of 400 kb for genetic linkage. Transcriptional profiling of the flowering genes FLOWERING LOCUS C (BoFLC) and VERNALIZATION 2 (BoVRN2) showed increased expression levels of BoVRN2 in genotypes with faster curding. However, the functional relevance of BoVRN2 and BoFLC2 could not be consistently supported, suggesting that these genes act facultatively and/or that BoVRN2/BoFLC-independent mechanisms operate in temperature-regulated floral transition in cauliflower. Genetic insights into temperature-regulated curd induction can underpin genetically informed phenology models and benefit molecular breeding strategies toward the development of thermo-tolerant cultivars.
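
For reference, the LD statistic quoted above follows from the standard definition r² = D² / (p_A(1−p_A)·p_B(1−p_B)) with D = p_AB − p_A·p_B; a minimal sketch with invented frequencies:

```python
def ld_r2(p_ab, p_a, p_b):
    """Pairwise LD between two biallelic SNPs: r^2 = D^2 / (p_a q_a p_b q_b),
    with D = p_ab - p_a * p_b (p_ab is the two-locus haplotype frequency)."""
    d = p_ab - p_a * p_b
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Invented frequencies, for illustration only
print(round(ld_r2(0.141, 0.3, 0.3), 3))   # 0.059, near the reported mean
```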

Relevance: 30.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning can amount to 'counting' in the case of multinomial distributions.

In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm.

This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantú-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
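
Because the network structure is known and all variables are fully observed, the 'counting' form of learning is easy to sketch. The following stripped-down, univariate version (it ignores network edges entirely, and all problem sizes are hypothetical) only illustrates the count-then-sample loop described above:

```python
import random
from collections import Counter

N_STAGES, N_RULES = 10, 4     # hypothetical: 10 construction steps, 4 rules

def learn(promising):
    """'Counting': per-stage multinomial rule frequencies estimated
    from the current set of promising rule strings."""
    model = []
    for stage in range(N_STAGES):
        counts = Counter(s[stage] for s in promising)
        total = sum(counts.values())
        model.append([counts.get(r, 0) / total for r in range(N_RULES)])
    return model

def sample(model):
    """Generate a new rule string stage by stage from the learned model."""
    return [random.choices(range(N_RULES), weights=p)[0] for p in model]

promising = [[random.randrange(N_RULES) for _ in range(N_STAGES)]
             for _ in range(20)]    # stand-in for fitness-selected strings
print(sample(learn(promising)))
```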

Relevance: 30.00%

Abstract:

Background: Calluna vulgaris is one of the most important landscaping plants produced in Germany. Its enormous economic success is due to the prolonged flower attractiveness of mutants in flower morphology, the so-called bud-bloomers. In this study, we present the first genetic linkage map of C. vulgaris, in which we mapped a locus of the economically highly desired trait "flower type".

Results: The map was constructed in JoinMap 4.1 using 535 AFLP markers from a single mapping population. A large fraction (40%) of markers showed distorted segregation. To test the effect of segregation distortion on linkage estimation, these markers were sorted by segregation ratio and added in groups to the data set. The plausibility of group formation was evaluated by comparing the "two-way pseudo-testcross" and the "integrated" mapping approaches. Furthermore, regression mapping was compared to the multipoint-likelihood algorithm. The majority of maps constructed by different combinations of these methods consisted of eight linkage groups, corresponding to the chromosome number of C. vulgaris.

Conclusions: All maps confirmed the independent inheritance of the most important horticultural traits "flower type", "flower colour", and "leaf colour". An AFLP marker for the most important breeding target "flower type" was identified. The presented genetic map of C. vulgaris can now serve as a basis for further molecular marker selection and map-based cloning of the candidate gene encoding the unique flower architecture of C. vulgaris bud-bloomers. © 2013 Behrend et al.; licensee BioMed Central Ltd.
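
The segregation-distortion screening described above amounts to a per-marker chi-square test; a minimal sketch, assuming a marker expected to segregate 1:1 (the counts are invented):

```python
from scipy.stats import chisquare

def is_distorted(observed, ratio=(1, 1), alpha=0.05):
    """Chi-square test of observed marker counts against an expected
    Mendelian segregation ratio; True means distorted segregation."""
    total = sum(observed)
    expected = [total * r / sum(ratio) for r in ratio]
    _, p = chisquare(observed, f_exp=expected)
    return p < alpha

print(is_distorted([70, 30]))   # True: deviates clearly from 1:1
print(is_distorted([55, 45]))   # False: within chance of 1:1
```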

Relevance: 30.00%

Abstract:

In this paper we deal with the problem of obtaining the set of k-additive measures dominating a fuzzy measure. This problem extends the problem of deriving the set of probabilities dominating a fuzzy measure, an important problem appearing in Decision Making and Game Theory. The solution proposed in the paper follows the line developed by Chateauneuf and Jaffray for dominating probabilities and continued by Miranda et al. for dominating k-additive belief functions. Here, we address the general case, transforming the problem into a similar one in which the involved set functions have a non-negative Möbius transform; this simplifies the problem and allows a result similar to the one developed for belief functions. Although the set obtained is very large, we show that the conditions cannot be sharpened. On the other hand, we also show that it is possible to define a more restrictive subset, providing a more natural extension of the result for probabilities, from which any k-additive dominating measure can be derived.
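
As background, the Möbius transform and the k-additivity criterion used above are standard and easy to state in code; this sketch is illustrative only and does not reproduce the paper's dominance construction:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def moebius(v, universe):
    """Möbius transform m(A) = sum over B subset of A of (-1)^(|A|-|B|) v(B),
    for a set function v given as {frozenset: value}."""
    return {frozenset(A): sum((-1) ** (len(A) - len(B)) * v[frozenset(B)]
                              for B in subsets(A))
            for A in subsets(universe)}

def is_k_additive(m, k, tol=1e-9):
    """A measure is k-additive iff its Möbius transform vanishes
    on every subset of cardinality greater than k."""
    return all(abs(val) <= tol for A, val in m.items() if len(A) > k)

# Toy fuzzy measure on {0, 1}: m({0, 1}) = 0.4 != 0, so 2- but not 1-additive
v = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.3,
     frozenset({0, 1}): 1.0}
m = moebius(v, {0, 1})
print(is_k_additive(m, 1), is_k_additive(m, 2))   # False True
```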

Relevance: 30.00%

Abstract:

Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper proves that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.
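
TCDFD itself is the paper's contribution and is not reproduced here; to give the flavour of a reconfiguration-aware mapping-scheduling decision, here is a toy greedy list scheduler (the task set, dependency sets and accelerator names are all invented):

```python
def schedule(tasks):
    """Topologically schedule a task DAG, greedily preferring a ready task
    whose accelerator is already loaded, to avoid a reconfiguration."""
    done, order, loaded, reconfigs = set(), [], None, 0
    pending = dict(tasks)               # name -> (dependency set, accelerator)
    while pending:
        ready = [n for n, (deps, _) in pending.items() if deps <= done]
        ready.sort(key=lambda n: pending[n][1] != loaded)   # reuse first
        name = ready[0]
        accel = pending.pop(name)[1]
        reconfigs += accel != loaded
        loaded = accel
        order.append(name)
        done.add(name)
    return order, reconfigs

tasks = {"dct1": (set(), "dct"), "dct2": ({"dct1"}, "dct"),
         "me1": (set(), "me"), "me2": ({"me1"}, "me")}
print(schedule(tasks))   # groups same-accelerator tasks: 2 reconfigurations
```

A real mapper-scheduler would additionally weigh the communication requirements and pipeline overlap that the TCDFD model captures.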

Relevance: 30.00%

Abstract:

Part 17: Risk Analysis

Relevance: 30.00%

Abstract:

Aim: To analyze the root canal organic tissue dissolution capacity of irrigating solutions, with or without the use of different agitation techniques. Methods: Bovine pulp tissue fragments were initially weighed. The following irrigating solutions were tested: 2.5% sodium hypochlorite, 2% chlorhexidine digluconate solution, and distilled water. The irrigating protocols were: immersion, mechanical agitation with endodontic files, and ultrasonic or sonic systems (EndoActivator® and Easy Clean®). At the end of the protocols, the pulps were weighed again to determine their final weight. For comparison, the average percentage of tissue dissolution across the groups was analyzed using the Kruskal-Wallis nonparametric test complemented by a multiple-comparisons test. The significance level was set at 5%. Results: Among the irrigating solutions, 2.5% sodium hypochlorite showed a higher dissolving power than 2% chlorhexidine digluconate and distilled water. Furthermore, the ultrasonic and sonic systems were more effective irrigating protocols than immersion and mechanical agitation with endodontic files. Conclusions: The combination of sodium hypochlorite with an agitation system promotes a greater degree of tissue degradation.
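
For illustration, the group comparison reported above maps directly onto a standard Kruskal-Wallis test; a minimal sketch with invented dissolution percentages:

```python
from scipy.stats import kruskal

# Invented percentage-dissolution values, one list per irrigating solution
naocl_25 = [92, 88, 95, 90, 93]     # 2.5% sodium hypochlorite
chx_2 = [15, 12, 18, 10, 14]        # 2% chlorhexidine digluconate
water = [3, 5, 2, 4, 3]             # distilled water

h, p = kruskal(naocl_25, chx_2, water)
print(f"H = {h:.2f}, p = {p:.4f}")  # p < 0.05: dissolution differs by solution
```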

Relevance: 30.00%

Abstract:

Haemoglobins constitute a set of proteins with interesting structural and functional properties, especially in the two large animal groups of reptiles and fishes. Here, the crystallization and preliminary X-ray analysis of haemoglobin-II from the South American fish matrinxã (Brycon cephalus) is reported. X-ray diffraction data have been collected to 3.0 Å resolution using synchrotron radiation (LNLS). The crystals were determined to belong to space group P2₁, and preliminary structural analysis revealed the presence of two tetramers in the asymmetric unit. The structure was determined using the standard molecular-replacement technique.

Relevance: 30.00%

Abstract:

Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed map of land use and land cover of the deforested areas of the Brazilian Legal Amazon up to 2008. Deforested areas and their land uses were mapped with Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing and visual interpretation, aided by temporal information extracted from NDVI MODIS time series. The result is a high-spatial-resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008 and the corresponding calculation of the area occupied by different land use classes. The results showed that the four classes of Pasture covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated from this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php).
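
The NDVI behind the MODIS time series is a fixed formula, NDVI = (NIR − Red) / (NIR + Red); a minimal sketch (the band values below are placeholders):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance; values near 1 indicate dense green vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)   # epsilon guards against 0/0

print(ndvi([0.5, 0.3], [0.1, 0.25]))   # dense vegetation vs. sparse cover
```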

Relevance: 30.00%

Abstract:

Silver and mercury are both dissolved in cyanide leaching, and the mercury co-precipitates with silver during metal recovery. Mercury must then be removed from the silver/mercury amalgam by vaporizing it in a retort, leading to environmental and health hazards. The need for retorting silver can be greatly reduced if mercury is selectively removed from leaching solutions. Theoretical calculations were carried out based on the thermodynamics of the Ag/Hg/CN- system in order to determine possible approaches to either preventing mercury dissolution or selectively precipitating it without silver loss. Preliminary experiments were then carried out based on these calculations to determine whether the reactions would be spontaneous with reasonably fast kinetics. In an attempt to stop mercury from dissolving and leaching out of the heap, the first set of experiments was designed to determine whether selenium and mercury would form a mercury selenide under leaching conditions, lowering the amount of mercury in solution while forming a stable compound. From the results of the synthetic ore experiments with selenium, it was determined that another effect was already suppressing mercury dissolution, and the effect of the selenium could not be well analyzed given the small amount of change. The effect dominating the reactions led to the second set of experiments, which used silver sulfide as a selective precipitant of mercury. These experiments were to determine whether adding solutions containing mercury cyanide to un-leached silver sulfide would facilitate a precipitation reaction, putting silver in solution and precipitating mercury as mercury sulfide. Counter-current flow experiments using the high-selenium ore showed a 99.8% removal of mercury from solution. As compared to leaching with only cyanide, about 60% of the silver was removed per pass for the high-selenium ore, and around 90% for the high-mercury ore. Since silver sulfide is rather expensive to use solely as a mercury precipitant, another compound was sought that could selectively precipitate mercury and leave silver in solution. In looking for a more inexpensive selective precipitant, zinc sulfide was tested. The third set of experiments showed that zinc sulfide (as sphalerite) could be used to selectively precipitate mercury while leaving silver cyanide in solution. Parameters such as particle size, reduction potential, and the amount of oxidation of the sphalerite were tested. Batch experiments worked well, showing 99.8% mercury removal with only ≈1% silver loss (starting with 930 ppb mercury, 300 ppb silver) at one hour. A continuous-flow process would work better for industrial applications, which was demonstrated with the filter funnel set-up. Funnels with filter paper and sphalerite showed good mercury removal (starting from 31 ppb mercury and 333 ppb silver, an 87% mercury removal and 7% silver loss through one funnel). A counter-current flow set-up showed 100% mercury removal and under 0.1% silver loss, starting with 704 ppb silver and 922 ppb mercury. The resulting sphalerite coated with mercury sulfide was also shown to be stable (not releasing mercury) under leaching tests. Use of sphalerite could be easily implemented through such means as sphalerite-impregnated filter paper placed in currently existing processes. In summary, this work focuses on preventing mercury from following silver through the leaching circuit.
Currently, the only practical means of removing mercury is by retort, creating possible health hazards in the distillation process and in the transportation and storage of the final mercury waste product. Preventing mercury from following silver in the earlier stages of the leaching process will greatly reduce the risk of mercury spills, human exposure to mercury, and possible environmental disasters. This will save mining companies millions of dollars in mercury handling and storage and in projects to clean up spilled mercury, and will result in better health for those living near and working in the mines.
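
As a back-of-the-envelope illustration of why staged contact can drive removal so close to 100%, assume (an idealization, with an invented per-stage efficiency) that each stage removes the same fraction of the mercury reaching it:

```python
def overall_removal(per_stage, n_stages):
    """Overall removal after n serial stages, each removing the same
    fraction of incoming mercury (an idealization of the set-up above)."""
    return 1 - (1 - per_stage) ** n_stages

# Two stages at a hypothetical 95% each already exceed 99.7% overall
print(f"{overall_removal(0.95, 2):.4f}")   # 0.9975
```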

Relevance: 30.00%

Abstract:

The accuracy of a map is dependent on the reference dataset used in its construction. Classification analyses used in thematic mapping can, for example, be sensitive to a range of sampling and data quality concerns. With particular focus on the latter, the effects of reference data quality on land cover classifications from airborne thematic mapper data are explored. Variations in sampling intensity and effort are highlighted in a dataset that is widely used in mapping and modelling studies; these may need accounting for in analyses. The quality of the labelling in the reference dataset was also a key variable influencing mapping accuracy. Accuracy varied with the amount and nature of the mislabelled training cases, and the effects themselves varied between classifiers. The largest impacts on accuracy occurred when mislabelling involved confusion between similar classes. Accuracy was also typically negatively related to the proportion of mislabelled cases, and the support vector machine (SVM), which has been claimed to be relatively insensitive to training-data error, was the most sensitive of the classifiers investigated: its overall classification accuracy declined by 8% (significant at the 95% level of confidence) when a training set containing 20% mislabelled cases was used.
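
The sensitivity experiment translates naturally into a small simulation. This sketch uses synthetic data rather than the airborne dataset (sizes and noise levels are illustrative): it flips a fraction of training labels and measures the SVM's accuracy drop:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.1, 0.2):
    y_noisy = ytr.copy()
    flip = rng.random(len(ytr)) < noise
    # mislabel the flipped cases with a random *other* class
    y_noisy[flip] = (y_noisy[flip] + rng.integers(1, 3, size=flip.sum())) % 3
    acc = SVC().fit(Xtr, y_noisy).score(Xte, yte)
    print(f"{noise:.0%} mislabelled -> test accuracy {acc:.3f}")
```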

Relevance: 30.00%

Abstract:

Healthy young adults demonstrate a group-level, systematic preference for stimuli presented in the left side of space relative to the right (‘pseudoneglect’) (Bowers & Heilman, 1980). This results in an overestimation of features such as size, brightness, numerosity and spatial frequency in the left hemispace, probably as a result of right cerebral hemisphere dominance for visuospatial attention. This spatial attention asymmetry is reduced in the healthy older population, and can be shifted entirely into right hemispace under certain conditions. Although this rightward shift has been consistently documented in behavioural experiments, there is very little neuroimaging evidence to explain this effect at a neuroanatomical level. In this thesis, I used behavioural methodology and electroencephalography (EEG) to map spatial attention asymmetries in young and older adults. I then used transcranial direct current stimulation (tDCS) to modulate these spatial biases, with the aim of assessing age-related differences in response to tDCS. In the first of three experiments presented in this thesis, I report in Chapter Two that five different spatial attention tasks provided consistent intra-task measures of spatial bias in young adults across two testing days. There were, however, no inter-task correlations between the five tasks, indicating that pseudoneglect is at least partially driven by task-dependent patterns of neural activity. In Chapter Three, anodal tDCS was applied separately to the left (P5) and right (P6) posterior parietal cortex (PPC) in young and older adults, with the aim of improving the detection of stimuli appearing in the contralateral visual field. There were no age differences in response to tDCS, but there were significant differences depending on baseline performance. Relative to a sham tDCS protocol, tDCS applied to the right PPC resulted in maintained visual detection across both visual fields in adults who were good at the task at baseline. In contrast, left PPC tDCS resulted in reduced detection sensitivity across both visual fields in poor performers. Finally, in Chapter Four, I report a right-hemisphere lateralisation of EEG activity in young adults that was present for long (but not short) landmark task lines. In contrast, older adults demonstrated no lateralised activity for either line length, thus providing novel evidence of an age-related reduction of hemispheric asymmetry in older adults. The results of this thesis provide evidence of a highly complex set of factors that underlie spatial attention asymmetries in healthy young and older adults.

Relevance: 30.00%

Abstract:

The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working to elevate car seat comfort as much as possible. However, the testing and evaluation of comfort are still done through exhaustive trial-and-error testing and manual evaluation of the data. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Even though this approach has advantages in minimizing time and in using a large set of data, it takes away the engineer's freedom to make decisions. The focus of this study is on filling the gap in a two-step comfort-level evaluation, which used pressure mapping with body regions to evaluate the average pressure supported by specific body parts, together with Self-Assessment Exam (SAE) questions to evaluate the person's interest. This study has created a machine learning algorithm that gives a degree of freedom back to the engineer when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy, aided by a Graphical User Interface (GUI) that facilitates the process during comfort-level testing of the car seat, decreasing the duration of the test analysis from days to hours.
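
A minimal sketch of the kind of ANN mapping described follows; the data are random placeholders (so it will not approach the reported 92% accuracy), and the point is the interface: per-region probabilities leave the final call to the engineer:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: each row stands in for a flattened seat pressure map,
# each label for the body region it mainly supports (labels invented)
rng = np.random.default_rng(1)
X = rng.random((500, 64))                 # 500 synthetic 8x8 pressure maps
y = rng.integers(0, 4, 500)               # 4 hypothetical body-region labels

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=1).fit(X, y)
proba = clf.predict_proba(X[:1])          # per-region confidences that the
print(proba.round(2))                     # engineer can inspect and override
```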