774 results for evolutionary computing
Abstract:
Due to the advancement of mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds. The paper discusses some of the major issues in mobile cloud computing. One of the key issues is the end-to-end delay in servicing a request. Data caching is a technique widely used in wired and wireless networks to improve data access efficiency. In this paper we explore the possibility of a cooperative caching approach to enhance data access efficiency in mobile cloud computing. The proposed approach is based on cloudlets, one of the architectures designed for mobile cloud computing.
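As a minimal sketch of the cooperative-caching idea (the abstract does not spell out the protocol, so the class and method names below are only illustrative assumptions), a cloudlet might first check its own cache, then ask cooperating neighbour cloudlets, and only fall back to the remote cloud when neither holds the item:

# Minimal sketch of a cooperative cache lookup across cloudlets.
# All names and the replication policy are illustrative assumptions,
# not the approach proposed in the paper.

class Cloudlet:
    def __init__(self, name, neighbours=None):
        self.name = name
        self.cache = {}                      # locally cached data items
        self.neighbours = neighbours or []   # cooperating cloudlets

    def lookup(self, key, fetch_from_cloud):
        # 1. Local cache hit: lowest latency.
        if key in self.cache:
            return self.cache[key], "local"
        # 2. Ask cooperating neighbour cloudlets before going to the cloud.
        for peer in self.neighbours:
            if key in peer.cache:
                self.cache[key] = peer.cache[key]     # replicate locally
                return self.cache[key], "neighbour:" + peer.name
        # 3. Fall back to the remote cloud: highest end-to-end delay.
        value = fetch_from_cloud(key)
        self.cache[key] = value
        return value, "cloud"

a, b = Cloudlet("A"), Cloudlet("B")
a.neighbours, b.neighbours = [b], [a]
b.cache["video42"] = b"cached bytes"
print(a.lookup("video42", fetch_from_cloud=lambda k: b"from cloud")[1])  # neighbour:B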
Abstract:
Antimicrobial peptides (AMPs) are humoral innate immune components of fishes that provide protection against pathogenic infections. Histone-derived antimicrobial peptides are reported to actively participate in the immune defenses of fishes. The present study deals with the identification of putative antimicrobial sequences from the histone H2A of the sicklefin chimaera, Neoharriotta pinnata. A 52-amino-acid peptide termed Harriottin-1, a 40-amino-acid Harriottin-2, and a 21-mer Harriottin-3 were identified to possess antimicrobial sequence motifs. The physicochemical properties and molecular structure of the Harriottins are in agreement with the characteristic features of antimicrobial peptides, indicating their potential role in the innate immunity of the sicklefin chimaera. The histone H2A sequence of the sicklefin chimaera was found to differ from previously reported histone H2A sequences. Phylogenetic analysis based on the histone H2A and cytochrome oxidase subunit-1 (CO1) genes revealed N. pinnata to occupy an intermediate position between invertebrates and vertebrates.
Abstract:
The median (antimedian) set of a profile \pi = (u_1, \ldots, u_k) of vertices of a graph G is the set of vertices x that minimize (maximize) the remoteness \sum_i d(x, u_i). Two algorithms for median graphs G of complexity O(n \cdot idim(G)) are designed, where n is the order and idim(G) the isometric dimension of G. The first algorithm computes median sets of profiles and will in practice often be faster than the second algorithm, which in addition computes antimedian sets and remoteness functions and works in all partial cubes.
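For illustration only (this is not the O(n \cdot idim(G)) algorithm of the paper, merely the definitions applied by brute force with BFS distances on an unweighted graph):

# Brute-force computation of median and antimedian sets of a profile.
from collections import deque

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def median_and_antimedian(graph, profile):
    # remoteness D(x) = sum_i d(x, u_i)
    dist_from = {u: bfs_distances(graph, u) for u in set(profile)}
    remoteness = {x: sum(dist_from[u][x] for u in profile) for x in graph}
    lo, hi = min(remoteness.values()), max(remoteness.values())
    return ({x for x, r in remoteness.items() if r == lo},
            {x for x, r in remoteness.items() if r == hi})

# 4-cycle (a median graph, hence also a partial cube); profiles may repeat vertices.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(median_and_antimedian(C4, (0, 0, 1)))   # ({0}, {2})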
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs called siRNAs. This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, high cholesterol and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. While designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern that needs to be addressed. Some methods that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene have already been developed, but only a few of them achieve good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs with improved efficacy, which may be useful in gene silencing and in drug design relating to tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration to devise an improved solution. The approaches proposed in this study therefore extend some of the well-performing previous state-of-the-art techniques by incorporating machine learning and statistical approaches, as well as thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to decide whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of siRNA against non-target genes. The models are trained and tested on a large data set of siRNA sequences. The validations are conducted using the Pearson correlation coefficient, the Matthews correlation coefficient, receiver operating characteristic analysis, prediction accuracy, sensitivity and specificity. It is found that the OpsiD approach is capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques. We are also able to understand the influence of whole stacking energy on siRNA efficiency.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNAs by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since we have taken efforts to optimize siRNA efficacy in terms of both inhibition efficiency and off-target possibility, we hope that the risk of off-target effects when performing gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may also prove useful for designing exogenous siRNAs for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
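As a rough illustration of the SVM component described above (the feature set here is a toy placeholder; the thesis's actual descriptors, such as whole stacking energy, and its training data are not reproduced), an efficient/inefficient classifier over simple sequence features might be set up as follows:

# Toy sketch: classify siRNA sequences as efficient (1) or inefficient (0)
# with a Support Vector Machine. Features, sequences and labels are
# illustrative assumptions only.
from sklearn.svm import SVC

def features(sirna):
    s = sirna.upper()
    gc = (s.count("G") + s.count("C")) / len(s)   # GC content
    a_start = 1.0 if s[0] == "A" else 0.0         # example positional features
    u_end = 1.0 if s[-1] in "UT" else 0.0
    return [gc, a_start, u_end]

train_seqs = ["AUGGCUACGUACGAUCGAUCA", "GCGCGGCCGCGGGCCGCGCGC",
              "AUAUAUAUAUAUAUAUAUAUA", "ACGGAUCCGAUCGAUCGAUUU"]
train_labels = [1, 0, 0, 1]                       # toy labels

model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit([features(s) for s in train_seqs], train_labels)
print(model.predict([features("AUGCGAUCGUAGCUAGCUAAU")]))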
Abstract:
This article describes the introduction and implementation of a server-based computing infrastructure in a university library. It presents the so-called MetaFrame DV concept of the Universitätsbibliothek Kassel, which the library's information management initiated, designed and implemented over the past four years. Application servers are no longer used only for, e.g., the CD offering; instead, all of the roughly 200 staff and functional workstations are managed server-side via a Citrix MetaFrame installation. Particular attention is paid in this article to the configuration, the practical administration and the daily working conditions at the library staff workstations.
Abstract:
Data mining refers to summarizing and extracting information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration and the internet. In this report we introduce an approach that uses evolutionary algorithms to breed fuzzy classifier systems. This approach was applied as part of a structured procedure by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
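A minimal sketch of the general idea, under assumptions of our own (the encoding, operators and toy data below are not those of the contest entry): a genome encodes one triangular fuzzy membership function per class on a single feature, and an evolutionary loop selects and mutates genomes by classification accuracy.

# Evolving a tiny fuzzy classifier with a simple (mu + lambda)-style loop.
import random

DATA = [(x / 10, 0) for x in range(0, 10)] + [(x / 10, 1) for x in range(10, 20)]

def membership(x, centre, width):
    return max(0.0, 1.0 - abs(x - centre) / width)      # triangular fuzzy set

def classify(genome, x):
    (c0, w0), (c1, w1) = genome
    return 0 if membership(x, c0, w0) >= membership(x, c1, w1) else 1

def fitness(genome):
    return sum(classify(genome, x) == y for x, y in DATA) / len(DATA)

def mutate(genome, sigma=0.1):
    return [(c + random.gauss(0, sigma), max(0.05, w + random.gauss(0, sigma)))
            for c, w in genome]

population = [[(random.random() * 2, 0.5), (random.random() * 2, 0.5)]
              for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)           # selection
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
print(fitness(population[0]))   # accuracy of the best evolved fuzzy classifier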
Abstract:
Let E be a number field and G be a finite group. Let A be any O_E-order of full rank in the group algebra E[G] and X be a (left) A-lattice. We give a necessary and sufficient condition for X to be free of given rank d over A. In the case that the Wedderburn decomposition E[G] \cong \oplus_x M_x is explicitly computable and each M_x is in fact a matrix ring over a field, this leads to an algorithm that either gives elements \alpha_1, \ldots, \alpha_d \in X such that X = A\alpha_1 \oplus \ldots \oplus A\alpha_d or determines that no such elements exist. Let L/K be a finite Galois extension of number fields with Galois group G such that E is a subfield of K, and put d = [K : E]. The algorithm can be applied to certain Galois modules that arise naturally in this situation. For example, one can take X to be O_L, the ring of algebraic integers of L, and A to be the associated order A(E[G]; O_L) \subseteq E[G]. The application of the algorithm to this special situation is implemented in Magma under certain extra hypotheses when K = E = \mathbb{Q}.
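A concrete instance of the hypothesis on the Wedderburn decomposition, chosen here purely for illustration: for G = C_2 = <g> and E = \mathbb{Q}, the idempotents e_\pm = \tfrac{1}{2}(1 \pm g) give

\mathbb{Q}[C_2] \;\cong\; \mathbb{Q}e_+ \oplus \mathbb{Q}e_- \;\cong\; \mathbb{Q} \times \mathbb{Q},

so each Wedderburn component M_x is a 1 \times 1 matrix ring over a field and the algorithm is applicable in this setting.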
Abstract:
This work presents a novel method for adapting human-machine interfaces to individual operators. By applying abstractions of evolutionary mechanisms such as selection, recombination and mutation in the EOGUI methodology (Evolutionary Optimization of Graphical User Interfaces), a computer-supported implementation of the method for graphical user interfaces, in particular for industrial processes, is provided. The evolutionary optimization incorporates both objective, i.e. measurable, quantities such as selection frequencies and selection times, and the operators' subjective impressions captured via online questionnaires. In this way, the visualization of systems is adapted to the needs and preferences of individual operators. Within this work, the operator can choose, from four user interfaces of different levels of abstraction for the example process MIPS (MIschungsProzess-Simulation, a mixing-process simulation), the objects that best support him in controlling the process. The EOGUI algorithm selects these objects, modifies them if necessary, and combines them into a new graphical user interface adapted to the operator. Experiments with the EOGUI methodology were carried out on the MIPS process to examine the applicability, acceptance and effectiveness of the method for controlling industrial processes. The investigations largely show that the developed methodology for the evolutionary optimization of human-machine interfaces indeed adapts industrial process visualizations to the individual operator and improves process control.
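A schematic sketch of such an evolutionary loop (the encoding, fitness weights and stand-in measurements below are assumptions for illustration, not the EOGUI implementation): candidate interfaces are sets of visualization objects, and fitness blends objective usage measures with subjective questionnaire ratings.

# Evolving which visualization objects appear on an operator's screen.
import random

OBJECTS = ["trend_plot", "flow_diagram", "bar_gauges", "alarm_list",
           "numeric_table", "3d_view"]

def fitness(layout, usage, ratings, w_objective=0.5):
    # usage[o]  : normalized selection frequency/speed for object o (0..1)
    # ratings[o]: subjective preference from the online questionnaire (0..1)
    objective = sum(usage[o] for o in layout) / len(layout)
    subjective = sum(ratings[o] for o in layout) / len(layout)
    return w_objective * objective + (1 - w_objective) * subjective

def recombine(a, b):
    return set(random.sample(sorted(a | b), k=min(4, len(a | b))))

def mutate(layout):
    layout = set(layout)
    layout.symmetric_difference_update({random.choice(OBJECTS)})  # toggle one object
    return layout or {random.choice(OBJECTS)}

usage = {o: random.random() for o in OBJECTS}      # stand-in measurements
ratings = {o: random.random() for o in OBJECTS}    # stand-in questionnaire data

population = [set(random.sample(OBJECTS, 3)) for _ in range(10)]
for _ in range(30):
    population.sort(key=lambda l: fitness(l, usage, ratings), reverse=True)
    parents = population[:4]                                       # selection
    population = parents + [mutate(recombine(*random.sample(parents, 2)))
                            for _ in range(6)]
print(sorted(population[0]))   # objects kept for this operator's next interface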
Abstract:
Summary: Recent research on the evolution of language and verbal displays (e.g., Miller, 1999, 2000a, 2000b, 2002) indicates that language is not only the result of natural selection but also serves as a sexually selected fitness indicator, i.e., an adaptation signaling an individual's suitability as a reproductive mate. Thus, language can be placed within the framework of concepts such as the handicap principle (Zahavi, 1975). There are several reasons for this position: many linguistic traits are highly heritable (Stromswold, 2001, 2005), while naturally selected traits are only marginally heritable (Miller, 2000a); men are more prone to verbal displays than women, who in turn judge the displays (Dunbar, 1996; Locke & Bogin, 2006; Lange, in press; Miller, 2000a; Rosenberg & Tunney, 2008); verbal proficiency universally raises status, especially for men (Brown, 1991); many linguistic features are handicaps in the Zahavian sense (Miller, 2000a); and most literature is produced by men at a reproduction-relevant age (Miller, 1999). However, neither an experimental study investigating the causal relation between verbal proficiency and attractiveness nor a study showing a correlation between markers of literary and mating success existed. The current studies aimed to fill these gaps. In the first, I conducted a laboratory experiment. Videos in which an actor and an actress performed verbal self-presentations served as stimuli for opposite-sex participants. The content was always the same, but the videos differed across three levels of verbal proficiency. The predictions were, among others, that (1) verbal proficiency increases mate value, and (2) that this applies more to male than to female mate value, owing to assumed sex-specific selection pressures in the past that made women highly demanding in mate choice (Trivers, 1972). A two-factorial analysis of variance with sex and verbal proficiency as factors supported the first hypothesis with a large effect size. For the second hypothesis, there was only a trend in the predicted direction. Furthermore, it became evident that verbal proficiency affects long-term more than short-term mate value. In the second study, verbal proficiency was investigated as a menstrual-cycle-dependent mate choice criterion. Essentially the same materials as in the first study were used, with only marginal changes to the questionnaire. The hypothesis was that fertile women rate high verbal proficiency in men more highly than non-fertile women do, verbal proficiency being a potential indicator of "good genes". However, no significant result in support of this hypothesis was obtained in the current study. In the third study, the hypotheses were: (1) most literature is produced by men at a reproduction-relevant age; (2) the more works of high literary quality a male writer produces, the more mates and children he has; (3) poets have higher mating success than writers of non-lyrical works, because poetic language is a larger handicap than other forms of language; (4) writing literature increases a man's status insofar as his offspring show a significantly higher male-to-female sex ratio than the general population, as the Trivers-Willard hypothesis (Trivers & Willard, 1973) applied to literature predicts. In order to test these hypotheses, two famous literary canons were chosen, and extensive biographical research was conducted on the writers' mating success.
The first hypothesis was confirmed; the second, controlling for age, only for the number of mates and not entirely with regard to the number of children. The latter finding was discussed with respect to, among other factors, the availability of effective contraception, especially in the 20th century. The third hypothesis was not satisfactorily supported. The fourth hypothesis was partially supported: for the 20th-century part of the German list, the secondary sex ratio differed with high statistical significance from the ratio assumed to hold for the general population.
Abstract:
The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
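The frequency scaling claimed above follows from the standard adiabatic-charging estimate; a brief sketch under the usual RC model (the abstract itself does not include this derivation):

E_{\mathrm{conv}} \approx \tfrac{1}{2} C V_{dd}^{2} \ \text{per transition (independent of switching speed)}, \qquad P_{\mathrm{conv}} = \alpha C V_{dd}^{2} f \;\propto\; f,

E_{\mathrm{quasistatic}} \approx \frac{RC}{T}\, C V_{dd}^{2} \;\propto\; f \quad (\text{ramp time } T \sim 1/f), \qquad P_{\mathrm{quasistatic}} = E_{\mathrm{quasistatic}} \cdot f \;\propto\; f^{2}.

Hence halving the operating frequency halves conventional CMOS power but quarters quasistatic power, which is the quadratic-versus-linear contrast stated above.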
Abstract:
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space, and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. The MATRIX design point on which we focus is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
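For scale, the 10 Gop/s figure is consistent with, for example, an array of 100 byte-wide ALUs each completing one operation per 10 ns cycle (the numbers here are an illustrative assumption; the thesis gives the actual array size and clock rate):

100 \ \text{ALUs} \times 10^{8} \ \text{ops/s each} = 10^{10} \ \text{8-bit ops/s} = 10 \ \text{Gop/s}.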
Abstract:
Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.