61 results for Non-dominated sorting genetic algorithms
Abstract:
The method of entropy has been useful in evaluating inconsistency in human judgments. This paper applies an entropy-based decision support system called e-FDSS to multicriterion risk and decision analysis in projects of construction small and medium enterprises (SMEs). The system is optimized and solved by fuzzy logic, entropy, and genetic algorithms. A case study demonstrated the use of entropy in e-FDSS in analyzing multiple risk criteria in the predevelopment stage of SME projects. Survey data on the degree of impact of selected project risk criteria on different projects were input into the system in order to evaluate the preidentified project risks in an impartial environment. The results showed that, when the amount of uncertainty embedded in the evaluation process is not taken into account, all decision vectors are indeed biased; the deviations of the decisions are then quantified, providing a more objective decision and risk assessment profile to project stakeholders for searching and screening the most profitable projects.
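The abstract does not give the e-FDSS formulation, so as a rough illustration of the entropy idea it builds on, the sketch below (Python) computes the Shannon entropy of a normalized vector of survey judgment scores; the function name and sample values are assumptions, not the paper's implementation.

```python
import math

def shannon_entropy(weights):
    """Shannon entropy of a normalized weight/judgment vector.

    Higher entropy means the judgments are spread evenly (less decisive);
    lower entropy means a few criteria dominate the assessment.
    """
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical degree-of-impact scores for four risk criteria from a survey
scores = [0.9, 0.4, 0.3, 0.2]
print(shannon_entropy(scores))  # entropy in nats
```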
Abstract:
In this study, comparative genomic hybridization (CGH) was used to compare 37 Salmonella Enteritidis strains from five phage types (PTs) at the genetic level and to assess differences between the PTs. There were approximately 400 genes that differentiated prevalent (4, 6, 8 and 13a) and sporadic (11) PTs, of which 35 were unique to prevalent PTs, including six plasmid-borne genes (pefA, B, C, D, srgC and rck) and four chromosomal genes encoding putative amino acid transporters. Phenotype array studies also demonstrated that strains from prevalent PTs were less susceptible to urea stress and utilized L-histidine, L-glutamine, L-proline, L-aspartic acid, gly-asn and gly-gln more efficiently than PT11 strains. Complementation of a PT11 strain with the transporter genes from PT4 resulted in a significant increase in utilization of the amino acids and reduced susceptibility to urea stress. In epithelial cell association assays, PT11 strains were less invasive than strains of the prevalent PTs. Most strains from prevalent PTs were better biofilm formers at 37 degrees C than at 28 degrees C, whilst the converse was true for PT11 strains. Collectively, the results indicate that genetic and corresponding phenotypic differences exist between strains of the prevalent PTs 4, 6, 8 and 13a and non-prevalent PT11 strains that are likely to provide a selective advantage for strains from the former PTs and could help them to enter the food chain and cause salmonellosis.
Abstract:
This paper presents a software-based study of a hardware-based non-sorting median calculation method for a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits linear complexity, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. The results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement growing as the data set size increases. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
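The abstract only outlines the method, so the sketch below is a reconstruction of the general idea of locating a median by scanning fixed-width bit slices rather than sorting; it is an assumption about the approach, not the paper's hardware design, and it assumes non-negative integers whose bit width is a multiple of the slice size.

```python
def bitslice_median(values, width=16, slice_bits=4):
    """Select the lower median of non-negative integers without sorting.

    Scans the binary representations from the most significant slice to the
    least; at each step it counts how many remaining candidates fall into
    each of the 2**slice_bits buckets and keeps only the bucket containing
    the target rank.  Runs in O(n * width / slice_bits).
    """
    k = (len(values) - 1) // 2           # rank of the lower median
    mask = (1 << slice_bits) - 1
    candidates = values
    for shift in range(width - slice_bits, -1, -slice_bits):
        counts = [0] * (1 << slice_bits)
        for v in candidates:
            counts[(v >> shift) & mask] += 1
        # find the bucket holding the k-th smallest remaining element
        for bucket, c in enumerate(counts):
            if k < c:
                break
            k -= c
        candidates = [v for v in candidates if (v >> shift) & mask == bucket]
    return candidates[0]

print(bitslice_median([7, 300, 42, 9, 1000]))   # -> 42
```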
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the size and characteristics of the networks adopted in the experiments, and the aim from which it originated. That aim was to build a term of comparison for validating deterministic and computationally inexpensive algorithms that can be used in practical engineering applications, such as multicast transmission in the Internet. At the same time, the large size of our sample networks requires a parallel implementation of the Steiner GA able to deal with such large problem instances.
Abstract:
It is generally accepted that genetics may be an important factor in explaining the variation between patients’ responses to certain drugs. However, identification and confirmation of the responsible genetic variants is proving to be a challenge in many cases. A number of difficulties that may be encountered in pursuit of these variants, such as non-replication of a true effect, population structure and selection bias, can be mitigated or at least reduced by appropriate statistical methodology. Another major statistical challenge facing pharmacogenetics studies is trying to detect possibly small polygenic effects using large volumes of genetic data, while controlling the number of false positive signals. Here we review the statistical design and analysis options available for investigations of genetic resistance to anti-epileptic drugs.
Abstract:
Genealogical data have been used very widely to construct indices with which to examine the contribution of plant breeding programmes to the maintenance and enhancement of genetic resources. In this paper we use such indices to examine changes in the genetic diversity of the winter wheat crop in England and Wales between 1923 and 1995. We find that, except for one period characterized by the dominance of imported varieties, the genetic diversity of the winter wheat crop has been remarkably stable. This agrees with many studies of plant breeding programmes elsewhere. However, underlying the stability of the winter wheat crop is accelerating varietal turnover without any significant diversification of the genetic resources used. Moreover, the changes we observe are more directly attributable to changes in the varietal shares of the area under winter wheat than to the genealogical relationship between the varieties sown. We argue, therefore, that while genealogical indices reflect how well plant breeders have retained and exploited the resources with which they started, these indices suffer from a critical limitation. They do not reflect the proportion of the available range of genetic resources which has been effectively utilized in the breeding programme: complex crosses of a given set of varieties can yield high indices, and yet disguise the loss (or non-utilization) of a large proportion of the available genetic diversity.
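The abstract does not spell out its indices, but a typical genealogical diversity index of this kind weights the coefficient of parentage between pairs of varieties by their shares of the sown area. The sketch below (Python) is purely illustrative, with hypothetical variety names and kinship values, and is not the specific index used in the paper.

```python
def weighted_diversity(shares, kinship):
    """One common genealogical diversity index: 1 minus the area-weighted
    mean coefficient of parentage over all pairs of varieties.

    shares  -- dict variety -> share of the winter wheat area (sums to 1)
    kinship -- dict (variety_a, variety_b) -> coefficient of parentage f_ab
    """
    d = 0.0
    for a, pa in shares.items():
        for b, pb in shares.items():
            # self-kinship taken as 1 for fully inbred lines (an assumption)
            f = 1.0 if a == b else kinship.get((a, b), kinship.get((b, a), 0.0))
            d += pa * pb * f
    return 1.0 - d

# Hypothetical year: two related varieties dominate the sown area
shares = {"varA": 0.6, "varB": 0.3, "varC": 0.1}
kinship = {("varA", "varB"): 0.5, ("varA", "varC"): 0.0, ("varB", "varC"): 0.0}
print(weighted_diversity(shares, kinship))
```

Under an index of this form, complex crosses among a fixed set of founders can keep pairwise kinship values low and the index high, which is essentially the limitation the authors point out.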
Abstract:
Background: MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results: A large dataset comprising MHC-peptide structural complexes was created by remodelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion: The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
Abstract:
Distributed computing paradigms for sharing resources, such as Clouds, Grids, Peer-to-Peer systems, or voluntary computing, are becoming increasingly popular. While there are some success stories such as PlanetLab, OneLab, BOINC, BitTorrent, and SETI@home, widespread use of these technologies for business applications has not yet been achieved. In a business environment, mechanisms are needed to provide incentives to potential users for participating in such networks. These mechanisms may range from simple non-monetary access rights and monetary payments to specific policies for sharing. Although a few models for a framework have been discussed (in the general area of a "Grid Economy"), none of these models has yet been realised in practice. This book attempts to fill this gap by discussing the reasons for such limited take-up and exploring incentive mechanisms for resource sharing in distributed systems. The purpose of this book is to identify research challenges in successfully using and deploying resource sharing strategies in open-source and commercial distributed systems.
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since it is known that they are very efficient in finding a quick rough approximation of an element or a row of the inverse matrix, or of a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI with the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D - C of a given non-singular matrix A, where D is a diagonally dominant matrix and C is a diagonal matrix. In our algorithm for solving SLAE and MI, different choices of D can be considered in order to control the norm of the matrix T = D^{-1}C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
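As a hedged, serial illustration of the splitting idea described above (not the paper's parallel Monte Carlo implementation), the sketch below takes a diagonal C, forms D = A + C so that A = D - C, and refines the solution of Ax = b through the iteration x_{k+1} = T x_k + D^{-1} b with T = D^{-1}C, valid while ||T|| < 1. In the paper the rough approximation of D^{-1} would come from the Monte Carlo stage; here np.linalg.inv merely stands in for it, and the shift value is arbitrary.

```python
import numpy as np

def solve_by_splitting(A, b, shift=1.0, iters=200):
    """Refine the solution of A x = b using the splitting A = D - C."""
    n = A.shape[0]
    C = shift * np.eye(n)            # one possible diagonal C
    D = A + C                        # so that A = D - C
    D_inv = np.linalg.inv(D)         # stand-in for the Monte Carlo estimate
    T = D_inv @ C                    # iteration matrix of the resulting SLAE
    x = np.zeros(n)
    for _ in range(iters):
        x = T @ x + D_inv @ b        # deterministic refinement step
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(solve_by_splitting(A, b))      # close to np.linalg.solve(A, b)
```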
Abstract:
Non-word repetition (NWR) was investigated in adolescents with typical development, Specific Language Impairment (SLI) and Autism plus Language Impairment (ALI) (n = 17, 13, 16, and mean age 14;4, 15;4, 14;8 respectively). The study evaluated the hypothesis that poor NWR performance in both clinical groups indicates an overlapping language phenotype (Kjelgaard & Tager-Flusberg, 2001). Performance was investigated both quantitatively, e.g. overall error rates, and qualitatively, e.g. the effect of length on repetition, the proportion of errors affecting phonological structure, and the proportion of consonant substitutions involving manner changes. Findings were consistent with previous research (Whitehouse, Barry, & Bishop, 2008) demonstrating a greater effect of length in the SLI group than the ALI group, which may be due to greater short-term memory limitations. In addition, an automated count of phoneme errors identified poorer performance in the SLI group than the ALI group. These findings indicate differences in the language profiles of individuals with SLI and ALI, but do not rule out a partial overlap. Errors affecting phonological structure were relatively frequent, accounting for around 40% of phonemic errors, but less frequent than straight consonant-for-consonant or vowel-for-vowel substitutions. It is proposed that these two different types of errors may reflect separate contributory mechanisms. Around 50% of consonant substitutions in the clinical groups involved manner changes, suggesting poor auditory-perceptual encoding. From a clinical perspective, algorithms which automatically count phoneme errors may enhance the sensitivity of NWR as a diagnostic marker of language impairment. Learning outcomes: Readers will be able to (1) describe and evaluate the hypothesis that there is a phenotypic overlap between SLI and Autism Spectrum Disorders, (2) describe differences in the NWR performance of adolescents with SLI and ALI, and discuss whether these differences support or refute the phenotypic overlap hypothesis, and (3) understand how computational algorithms such as the Levenshtein Distance may be used to analyse NWR data.
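Learning outcome (3) refers to the Levenshtein distance as a way of counting phoneme errors automatically. A minimal sketch of that edit-distance computation is given below in Python; the target/attempt pair is made up for illustration and is not taken from the study's stimuli or scoring script.

```python
def levenshtein(target, attempt):
    """Minimum number of phoneme insertions, deletions and substitutions
    needed to turn the target sequence into the speaker's attempt."""
    m, n = len(target), len(attempt)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == attempt[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

# Hypothetical non-word repeated with two phoneme errors
print(levenshtein(list("blonterstaping"), list("bonterstapin")))  # -> 2
```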
Abstract:
International Perspective
The development of GM technology continues to expand into increasing numbers of crops and conferred traits. Inevitably, the focus remains on the major field crops of soybean, maize, cotton, oilseed rape and potato, with introduced genes conferring herbicide tolerance and/or pest resistance. Although there are comparatively few GM crops that have been commercialised to date, GM versions of 172 plant species have been grown in field trials in 31 countries.
European Crops with Containment Issues
Of the 20 main crops in the EU there are four for which GM varieties are commercially available (cotton, maize for animal feed and forage, and oilseed rape). Fourteen have GM varieties in field trials (bread wheat, barley, durum wheat, sunflower, oats, potatoes, sugar beet, grapes, alfalfa, olives, field peas, clover, apples, rice) and two have GM varieties still in development (rye, triticale). Many of these crops have hybridisation potential with wild and weedy relatives in the European flora (bread wheat, barley, oilseed rape, durum wheat, oats, sugar beet and grapes) or with escapes (sunflower); and all have the potential to cross-pollinate fields of non-GM crops. Several fodder crops, forestry trees, grasses and ornamentals have varieties in field trials and these too may hybridise with wild relatives in the European flora (alfalfa, clover, lupin, silver birch, sweet chestnut, Norway spruce, Scots pine, poplar, elm, Agrostis canina, A. stolonifera, Festuca arundinacea, Lolium perenne, L. multiflorum, statice and rose). All these crops will require containment strategies to be in place if it is deemed necessary to prevent transgene movement to wild relatives and non-GM crops.
Current Containment Strategies
A wide variety of GM containment strategies are currently under development, with a particular focus on crops expressing pharmaceutical products. Physical containment in greenhouses and growth rooms is suitable for some crops (tomatoes, lettuce) and for research purposes. Aquatic bioreactors of some non-crop species (algae, moss, and duckweed) expressing pharmaceutical products have been adopted by some biotechnology companies. There are obvious limitations to the scale of physical containment strategies, addressed in part by the development of large underground facilities in the US and Canada. The additional resources required to grow plants underground incur high costs that in the long term may negate any advantage of GM for commercial production. Natural genetic containment has been adopted by some companies through the selection of either non-food/feed crops (algae, moss, duckweed) as bio-pharming platforms or organisms with no wild relatives present in the local flora (safflower in the Americas). The expression of pharmaceutical products in leafy crops (tobacco, alfalfa, lettuce, spinach) enables growth and harvesting prior to and in the absence of flowering. Transgenically controlled containment strategies range in their approach and degree of development. Plastid transformation is relatively well developed but is not suited to all traits or crops and does not offer complete containment. Male sterility is well developed across a range of plants but has limitations in its application for fruit/seed-bearing crops. It has been adopted in some commercial lines of oilseed rape despite not preventing escape via seed.
Conditional lethality can be used to prevent flowering or seed development following the application of a chemical inducer, but requires 100% induction of the trait and sufficient application of the inducer to all plants. Likewise, inducible expression of the GM trait requires equally stringent application conditions. Such a method will contain the trait but will allow the escape of a non-functioning transgene. Seed lethality (‘terminator’ technology) is the only strategy at present that prevents transgene movement via seed, but due to public opinion against the concept it has never been trialled in the field and is no longer under commercial development. Methods to control flowering and fruit development such as apomixis and cleistogamy will prevent crop-to-wild and wild-to-crop pollination, but in nature both of these strategies are complex and leaky. None of the genes controlling these traits has yet been identified or characterised, and therefore they have not been transgenically introduced into crop species. Neither of these strategies will prevent transgene escape via seed, and any feral apomicts that form are arguably more likely to become invasive. Transgene mitigation reduces the fitness of initial hybrids and so prevents stable introgression of transgenes into wild populations. However, it does not prevent initial formation of hybrids or spread to non-GM crops. Such strategies could be detrimental to wild populations and have not yet been demonstrated in the field. Similarly, auxotrophy prevents persistence of escapes and hybrids containing the transgene in an uncontrolled environment, but does not prevent transgene movement from the crop. Recoverable block of function, intein trans-splicing and transgene excision all use recombinases to modify the transgene in planta, either to induce expression or to prevent it. All require optimal conditions and 100% accuracy to function, and none have been tested under field conditions as yet. All will contain the GM trait, but all will allow some non-native DNA to escape to wild populations or to non-GM crops. There are particular issues with GM trees and grasses as both are largely undomesticated, wind-pollinated and perennial, thus providing many opportunities for hybridisation. Some species of both trees and grasses are also capable of vegetative propagation without sexual reproduction. There are additional concerns regarding the weedy nature of many grass species and the long-term stability of GM traits across the life span of trees. Transgene stability and conferred sterility are difficult to trial in trees as most field trials are only conducted during the juvenile phase of tree growth.
Bio-pharming of pharmaceutical and industrial compounds in plants
Bio-pharming of pharmaceutical and industrial compounds in plants offers an attractive alternative to mammalian-based pharmaceutical and vaccine production. Several plant-based products are already on the market (Prodigene’s avidin, β-glucuronidase, trypsin generated in GM maize; Ventria’s lactoferrin generated in GM rice). Numerous products are in clinical trials (collagen, antibodies against tooth decay and non-Hodgkin’s lymphoma from tobacco; human gastric lipase, therapeutic enzymes, dietary supplements from maize; Hepatitis B and Norwalk virus vaccines from potato; rabies vaccines from spinach; dietary supplements from Arabidopsis).
The initial production platforms for plant-based pharmaceuticals were selected from conventional crops, largely because an established knowledge base already existed. Tobacco and other leafy crops such as alfalfa, lettuce and spinach are widely used as leaves can be harvested and no flowering is required. Many of these crops can be grown in contained greenhouses. Potato is also widely used and can also be grown in contained conditions. The introduction of morphological markers may aid in the recognition and traceability of crops expressing pharmaceutical products. Plant cells or plant parts may be transformed and maintained in culture to produce recombinant products in a contained environment. Plant cells in suspension or in vitro, roots, root cells and guttation fluid from leaves may be engineered to secrete proteins that may be harvested in a continuous, non-destructive manner. Most strategies in this category remain developmental and have not been commercially adopted at present. Transient expression produces GM compounds from non-GM plants via the utilisation of bacterial or viral vectors. These vectors introduce the trait into specific tissues of whole plants or plant parts, but do not insert them into the heritable genome. There are some limitations of scale and the field release of such crops will require the regulation of the vector. However, several companies have several transiently expressed products in clinical and pre-clinical trials from crops raised in physical containment.
Abstract:
Although the independence and causal nature of the association have not been fully established, non-fasting (postprandial) triglyceride (TG) concentrations have emerged as a clinically significant cardiovascular disease (CVD) risk factor. In the current review, findings from three insightful prospective studies in the area, namely the Women's Health Study, the Copenhagen City Heart Study and the Norwegian Counties Study, are discussed. An overview is provided as to the likely etiological basis for the association between postprandial TG and CVD, with a focus on both lipid and non-lipid (inflammation, hemostasis and vascular function) risk factors. The impact of various lifestyle and physiological determinants is considered, in particular genetic variation and meal fat composition. Furthermore, although data are limited, some information is provided as to the relative and interactive impact of a number of modulators of lipemia. It is evident that, relative to age, gender and body mass index (known modulators of postprandial lipemia), the contribution of identified gene variants to the heterogeneity observed in the postprandial response is likely to be relatively small. Finally, we highlight the need for the development of a standardised ‘fat tolerance test’ for use in clinical trials, to allow the integration and comparison of data from individual studies.
Abstract:
We explicitly tested for the first time the ‘environmental specificity’ of traditional 16S rRNA-targeted fluorescence in situ hybridization (FISH) through comparison of the bacterial diversity actually targeted in the environment with the diversity that should be exactly targeted (i.e. without mismatches) according to in silico analysis. To do this, we exploited advances in modern flow cytometry that enabled improved detection, and therefore sorting, of sub-micron-sized particles, and used probe PSE1284 (designed to target pseudomonads) applied to Lolium perenne rhizosphere soil as our test system. The 6-carboxyfluorescein (6-FAM)-PSE1284-hybridised population, defined as displaying enhanced green fluorescence in flow cytometry, represented 3.51±1.28% of the total detected population when corrected using a nonsense (NON-EUB338) probe control. Analysis of 16S rRNA gene libraries constructed from fluorescence-activated cell sorting (FACS)-recovered fluorescent populations (n=3) revealed that 98.5% of the total sorted population (Pseudomonas spp. comprised 68.7% and Burkholderia spp. 29.8%) was specifically targeted, as evidenced by the homology of the 16S rRNA sequences to the probe sequence. In silico evaluation of probe PSE1284 with the use of RDP-10 probeMatch justified the existence of Burkholderia spp. among the sorted cells. The lack of novelty in the Pseudomonas spp. sequences uncovered was notable, probably reflecting the well-studied nature of this functionally important genus. To judge the diversity recorded within the FACS-sorted population, rarefaction and DGGE analysis were used to evaluate, respectively, the proportion of Pseudomonas diversity uncovered by the sequencing effort and the representativeness of the Nycodenz® method for the extraction of bacterial cells from soil.
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states in X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either some externally measured magnetic field or some internal current. However, we cannot measure the current directly since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable when the number of measurement points becomes large.
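As a generic illustration of where regularization enters Fisher's linear discriminant (this is standard Tikhonov-regularised LDA on toy data, not the construction analysed in the paper), one can damp the within-class scatter matrix before inverting it:

```python
import numpy as np

def regularized_fisher_direction(X1, X2, lam=1e-3):
    """Fisher's linear discriminant direction with Tikhonov regularisation.

    X1, X2 -- samples from the two classes, one observation per row.
    lam    -- regularisation parameter; lam = 0 recovers the classical
              (possibly unstable) discriminant when S_W is ill-conditioned.
    """
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S_w = np.cov(X1, rowvar=False) * (len(X1) - 1) \
        + np.cov(X2, rowvar=False) * (len(X2) - 1)   # within-class scatter
    w = np.linalg.solve(S_w + lam * np.eye(S_w.shape[0]), mu1 - mu2)
    return w / np.linalg.norm(w)

# Toy example: two clouds of simulated "measurements"
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X2 = rng.normal([2.0, 1.0], 0.5, size=(50, 2))
print(regularized_fisher_direction(X1, X2))
```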
Abstract:
Spiking neural networks are usually limited in their applications due to their complex mathematical models and the lack of intuitive learning algorithms. In this paper, a simpler, novel neural network derived from a leaky integrate-and-fire neuron model, the ‘cavalcade’ neuron, is presented. A simulation environment for the neural network has been developed and two basic learning algorithms implemented within it. These algorithms successfully learn some basic temporal and instantaneous problems. Inspiration for neural network structures is then taken from these experiments and applied to processing sensor information so as to successfully control a mobile robot.
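The 'cavalcade' neuron is derived from a leaky integrate-and-fire (LIF) model. The sketch below shows a standard discrete-time LIF neuron with arbitrary parameter values; it is not the paper's variant, only the baseline model it builds on.

```python
def simulate_lif(input_current, v_rest=0.0, v_threshold=1.0, v_reset=0.0,
                 tau=20.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron.

    The membrane potential leaks toward v_rest, integrates the input
    current, and emits a spike (then resets) when it crosses v_threshold.
    Returns the list of time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) / tau + i_in)   # leak + integrate
        if v >= v_threshold:                      # fire
            spikes.append(t)
            v = v_reset
    return spikes

# A constant drive produces regular spiking
print(simulate_lif([0.12] * 100))
```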