938 results for Java (computer programming language)
Abstract:
The RoboCup Rescue Simulation System (RCRSS) is a dynamic multi-agent system simulating a large-scale urban disaster scenario. Teams of rescue agents are charged with minimizing civilian casualties and infrastructure damage while competing against limitations on time, communication, and awareness. This thesis presents the first known attempt to apply Genetic Programming (GP) to the development of behaviours necessary to perform well in the RCRSS. Specifically, it studies the suitability of GP for evolving the operational behaviours required of each type of rescue agent in the RCRSS. The system developed is evaluated in terms of the consistency with which expected solutions are the target of convergence, as well as by comparison to previous competition results. The results indicate that GP is capable of converging to some forms of expected behaviour, but that additional evolution of strategizing behaviours must be performed for the system to become competitive. An enhancement to the standard GP algorithm is proposed and shown to simplify the initial search space, allowing evolution to proceed much more quickly. In addition, two forms of population are employed and compared in terms of their apparent effects on the evolution of control structures for intelligent rescue agents. The first is a single population in which each individual comprises three distinct trees, one controlling each of the three agent types; the second is a set of three co-evolving subpopulations, one for each type of agent. Multiple populations of cooperating individuals appear to achieve higher proficiency in training, but testing on unseen instances raises the issue of overfitting.
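A minimal illustrative sketch (not the thesis code, in Java) of the first population form: each GP individual carries three separate program trees, one per rescue agent type. The AgentType and Node shapes are assumptions made for illustration; real fitness would come from a full RCRSS run.

import java.util.EnumMap;
import java.util.Map;
import java.util.Random;

public class MultiTreeIndividual {
    enum AgentType { AMBULANCE, FIRE_BRIGADE, POLICE }

    /** A toy GP node: either a terminal value or an operator over two children. */
    static class Node {
        String op; Node left, right; double value;
        Node(double v) { this.value = v; }
        Node(String op, Node l, Node r) { this.op = op; left = l; right = r; }
        double eval() {
            if (op == null) return value;
            return op.equals("+") ? left.eval() + right.eval()
                                  : left.eval() * right.eval();
        }
    }

    /** One control tree per agent type, evolved together in one genome. */
    final Map<AgentType, Node> trees = new EnumMap<>(AgentType.class);

    static MultiTreeIndividual random(Random rng) {
        MultiTreeIndividual ind = new MultiTreeIndividual();
        for (AgentType t : AgentType.values()) {
            ind.trees.put(t, new Node(rng.nextBoolean() ? "+" : "*",
                                      new Node(rng.nextDouble()),
                                      new Node(rng.nextDouble())));
        }
        return ind;
    }

    public static void main(String[] args) {
        MultiTreeIndividual ind = random(new Random(42));
        // Fitness would come from a full RCRSS simulation; here we simply
        // evaluate each toy tree to show the per-agent-type structure.
        ind.trees.forEach((t, n) -> System.out.println(t + " -> " + n.eval()));
    }
}

In the co-evolutionary variant, each of the three trees would instead live in its own subpopulation and be paired with partners from the other two for evaluation.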
Abstract:
Three dimensional model design is a well-known and studied field with numerous real-world applications. However, the manual construction of these models can often be time-consuming for the average user, despite the advantages offered through computational advances. This thesis presents an approach to the design of 3D structures using evolutionary computation and L-systems, which involves the automated production of such designs using a strict set of fitness functions. These functions focus on the geometric properties of the models produced, as well as on their quantifiable aesthetic value, a topic which has not been widely investigated with respect to 3D models. New extensions to existing aesthetic measures are discussed and implemented in the presented system in order to produce designs which are visually pleasing. The system itself facilitates the construction of models requiring minimal user initialization and no user-based feedback throughout the evolutionary cycle. The models evolved by genetic programming are shown to satisfy multiple criteria, conveying a relationship between their assigned aesthetic value and their perceived aesthetic value. Exploration into the applicability and effectiveness of a multi-objective approach to the problem is also presented, with a focus on both performance and visual results. Although subjective, these results offer insight into future applications and study in the field of computational aesthetics and automated structure design.
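A minimal sketch, in Java, of the string-rewriting core of an L-system, assuming a simple deterministic grammar; the thesis evolves such rules and interprets the resulting strings as 3D geometry. The rule set below is hypothetical.

import java.util.Map;

public class LSystemSketch {
    /** Apply every production rule in parallel to each symbol of the string. */
    static String rewrite(String axiom, Map<Character, String> rules, int steps) {
        String current = axiom;
        for (int i = 0; i < steps; i++) {
            StringBuilder next = new StringBuilder();
            for (char c : current.toCharArray()) {
                next.append(rules.getOrDefault(c, String.valueOf(c)));
            }
            current = next.toString();
        }
        return current;
    }

    public static void main(String[] args) {
        // Hypothetical rule: F = draw forward, +/- = turn; a 3D interpreter
        // would map these symbols to segments of the generated model.
        Map<Character, String> rules = Map.of('F', "F+F-F");
        System.out.println(rewrite("F", rules, 3));
    }
}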
Abstract:
This thesis describes research in which genetic programming is used to automatically evolve shape grammars that construct three dimensional models of possible external building architectures. A completely automated fitness function is used, which evaluates the three dimensional building models according to different geometric properties such as surface normals, height, building footprint, and more. In order to evaluate the buildings on the different criteria, a multi-objective fitness function is used. The results obtained from the automated system were successful in satisfying the multiple objective criteria as well as in creating interesting and unique designs that a human-aided system might not discover. In this study of evolutionary design, the architectures created are not meant to be fully functional and structurally sound blueprints for constructing a building, but rather inspirational ideas for possible architectural designs. The evolved models are applicable to today's architectural industry as well as to the video game and movie industries. Many new avenues for future work have also been discovered and highlighted.
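A minimal sketch, in Java, of the Pareto-dominance test that typically underlies such a multi-objective fitness function; the objective vectors are illustrative assumptions, not the thesis's actual criteria.

public class ParetoSketch {
    /** True if a dominates b: no worse on all objectives, better on at least one (maximizing). */
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) return false;
            if (a[i] > b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static void main(String[] args) {
        // e.g. objectives: {surface-normal score, height score, footprint score}
        double[] designA = {0.9, 0.7, 0.8};
        double[] designB = {0.8, 0.7, 0.5};
        System.out.println(dominates(designA, designB)); // true
    }
}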
Abstract:
Formal verification of software can be an enormous task. This fact has led some software engineers to claim that formal verification is not feasible in practice. One possible way of supporting the verification process is a programming language that provides powerful abstraction mechanisms combined with intensive reuse of code. In this thesis we present a strongly typed, functional, object-oriented programming language. This language features type operators of arbitrary kind corresponding to so-called type protocols. Subclassing and inheritance are based on higher-order matching, i.e., they utilize type protocols as the basic tool for reuse of code. We formally define the operational and axiomatic semantics of this language. The latter is the basis of the interactive proof assistant VOOP (Verified Object-Oriented Programs), which allows the user to prove equational properties of programs interactively.
Abstract:
Passive solar building design is the process of designing a building while considering sunlight exposure for receiving heat in winter and rejecting heat in summer. The main goal of passive solar building design is to remove or reduce the need for mechanical and electrical systems for cooling and heating, thereby saving energy costs and reducing environmental impact. This research uses evolutionary computation to design passive solar buildings. Evolutionary design is used in many research projects to build 3D models for structures automatically. In this research, we use a mixture of split grammar and string rewriting to generate new 3D structures. To evaluate energy costs, the EnergyPlus system, a comprehensive building energy simulation system, is used alongside the genetic programming system. In addition, genetic programming also considers other design and geometry characteristics of the building as search objectives, for example window placement, building shape, size, and complexity. In passive solar design, reducing the energy needed for cooling and for heating are two objectives of interest. Experiments show that smaller buildings with no windows or skylights are the most energy-efficient models. Window heat gain is another objective, used to encourage models to have windows. In addition, window- and volume-based objectives are tried. To examine the impact of environment on designs, experiments are run on five different geographic locations. Both single-floor and multi-floor models are also examined in this research. Across the experiments, solutions were consistent with respect to materials, sizes, and appearance, and satisfied the problem constraints in all instances.
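A minimal sketch, in Java, of the general pattern of scoring a candidate design with an external energy simulator; the command line and output format below are hypothetical placeholders, not the actual EnergyPlus interface.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class EnergyFitnessSketch {
    /** Run the simulator on a candidate model file and read back an energy figure. */
    static double simulatedEnergyUse(String modelFile) throws IOException, InterruptedException {
        // Hypothetical invocation; a real setup would pass weather data,
        // output directories, etc., per the simulator's documentation.
        Process p = new ProcessBuilder("energy-sim", modelFile).start();
        double energy = Double.MAX_VALUE;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("TOTAL_ENERGY=")) { // assumed output format
                    energy = Double.parseDouble(line.substring("TOTAL_ENERGY=".length()));
                }
            }
        }
        p.waitFor();
        return energy; // lower is fitter for the heating/cooling objectives
    }

    public static void main(String[] args) throws Exception {
        System.out.println(simulatedEnergyUse("candidate.idf"));
    }
}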
Abstract:
Interior illumination is a complex problem involving numerous interacting factors. This research applies genetic programming to problems in illumination design. The Radiance system is used for performing accurate illumination simulations. Radiance accounts for a number of important environmental factors, which we exploit during fitness evaluation. Illumination requirements include local illumination intensity from natural and artificial sources, colour, and uniformity. Evolved solutions incorporate design elements such as artificial lights, room materials, windows, and glass properties. A number of case studies are examined, including many-objective problems involving up to 7 illumination requirements, the design of a decorative wall of lights, and the creation of a stained-glass window for a large public space. Our results show the technical and creative possibilities of applying genetic programming to illumination design.
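A minimal sketch, in Java, of one plausible uniformity objective, computed as the ratio of minimum to mean illuminance over sampled points; this metric choice is an assumption, and the thesis obtains its readings from Radiance simulations.

public class UniformitySketch {
    static double uniformity(double[] illuminanceSamples) {
        double min = Double.MAX_VALUE, sum = 0;
        for (double v : illuminanceSamples) {
            min = Math.min(min, v);
            sum += v;
        }
        double mean = sum / illuminanceSamples.length;
        return mean == 0 ? 0 : min / mean; // 1.0 = perfectly uniform
    }

    public static void main(String[] args) {
        System.out.println(uniformity(new double[]{310, 295, 320, 280})); // ~0.93
    }
}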
Abstract:
Mutations in genes, which are simple changes in our DNA, can produce undesirable phenotypes known as genetic diseases or disorders. These small changes, which happen frequently, can have extreme results. Understanding and identifying these changes, and associating the mutated genes with genetic diseases, can play an important role in our health by enabling better diagnostic and therapeutic strategies for these diseases. As a result of years of experiments, there is a vast amount of data regarding the human genome and various genetic diseases that still needs to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson’s disease and breast cancer. Using genetic programming, we analyzed the complex network around the known disease genes of the aforementioned diseases and, based on that, generated a ranking of genes by their relevance to these diseases. In order to generate these rankings, centrality measures of all nodes in the complex network surrounding the known disease genes of the given genetic disease were calculated. Using genetic programming, all the nodes were assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results obtained showed that this method is successful at finding these patterns in centrality measures, and that the highly ranked genes are good candidate disease genes worthy of further study. Using standard benchmark tests, we tested our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and obtained comparable results.
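A minimal sketch, in Java, of the underlying idea: scoring candidate genes by how close their centrality profile is to that of known disease genes. Using negative Euclidean distance as the similarity is an assumption made for illustration, since the thesis evolves the scoring function itself with GP.

public class GeneRankSketch {
    /** Each array: a gene's centrality measures, e.g. {degree, betweenness, closeness}. */
    static double score(double[] candidate, double[] diseaseProfile) {
        double d = 0;
        for (int i = 0; i < candidate.length; i++) {
            double diff = candidate[i] - diseaseProfile[i];
            d += diff * diff;
        }
        return -Math.sqrt(d); // higher score = more similar to known disease genes
    }

    public static void main(String[] args) {
        double[] knownAvg = {0.8, 0.6, 0.7};              // illustrative profile
        System.out.println(score(new double[]{0.75, 0.55, 0.72}, knownAvg)); // near 0
        System.out.println(score(new double[]{0.10, 0.05, 0.20}, knownAvg)); // much lower
    }
}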
Abstract:
The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features from a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms, and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification of varied supervised learning tasks. FSALPS uses a novel frequency count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and subtrees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey’s HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with less bloat. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
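A minimal sketch, in Java, of the frequency-to-probability step described above: feature counts are used as roulette-wheel weights when choosing a terminal for a new GP tree node. The counts are illustrative.

import java.util.Random;

public class FeatureRouletteSketch {
    /** Roulette-wheel pick of a feature index, proportional to its frequency count. */
    static int pickFeature(int[] featureCounts, Random rng) {
        int total = 0;
        for (int c : featureCounts) total += c;
        int spin = rng.nextInt(total);
        for (int i = 0; i < featureCounts.length; i++) {
            spin -= featureCounts[i];
            if (spin < 0) return i;
        }
        return featureCounts.length - 1; // unreachable with positive counts
    }

    public static void main(String[] args) {
        int[] counts = {50, 5, 30, 15}; // how often each feature appears in the population
        Random rng = new Random(1);
        int[] hits = new int[counts.length];
        for (int i = 0; i < 10_000; i++) hits[pickFeature(counts, rng)]++;
        for (int i = 0; i < hits.length; i++) System.out.println("feature " + i + ": " + hits[i]);
    }
}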
Abstract:
Classical relational databases lack proper ways to manage certain real-world situations, including imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0,1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. Therefore, we are interested in a further generalization of the fuzzy database into the L-fuzzy database. In such a database, the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query language perspective, the language of fuzzy databases, FSQL, extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL. This requires an abstract algebraic theory which can be used to prove all the properties of, and operations on, L-fuzzy relations. In our study, we show that the theory of arrow categories forms a suitable framework for that purpose. Therefore, we define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
Abstract:
Lattice-valued fuzziness is more general than crispness or fuzziness based on the unit interval. In this work, we present a query language for a lattice-based fuzzy database. We define a Lattice Fuzzy Structured Query Language (LFSQL) taking its membership values from an arbitrary lattice L. LFSQL can handle, manage and represent crisp values and linearly ordered membership degrees, and also allows membership degrees from lattices with non-comparable values. This gives richer membership degrees, and hence makes LFSQL more flexible than FSQL or SQL. In order to handle vagueness or imprecise information, every entry in an L-fuzzy database is an L-fuzzy set instead of a crisp value. All of this makes LFSQL an ideal query language for handling imprecise data where some factors are non-comparable. After defining the syntax of the language formally, we provide its semantics using L-fuzzy sets and relations. The semantics can be used in future work to investigate concepts such as functional dependencies. Last but not least, we present a parser for LFSQL implemented in Haskell.
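A minimal sketch, in Java, of lattice-valued membership degrees, using the product lattice [0,1]x[0,1] with pointwise min/max so that some degrees are genuinely incomparable. The particular lattice is an illustrative assumption; the two theses above work with an arbitrary complete lattice L and implement their operations in Haskell.

public class LFuzzySketch {
    /** A membership degree with two independent components, e.g. two experts' views. */
    record Degree(double x, double y) {
        Degree meet(Degree o) { return new Degree(Math.min(x, o.x), Math.min(y, o.y)); } // fuzzy AND
        Degree join(Degree o) { return new Degree(Math.max(x, o.x), Math.max(y, o.y)); } // fuzzy OR
        boolean leq(Degree o) { return x <= o.x && y <= o.y; }
    }

    public static void main(String[] args) {
        Degree a = new Degree(0.8, 0.2), b = new Degree(0.3, 0.9);
        // a and b are incomparable: neither a.leq(b) nor b.leq(a) holds.
        System.out.println(a.leq(b) + " " + b.leq(a)); // false false
        System.out.println(a.meet(b));                 // Degree[x=0.3, y=0.2]
        System.out.println(a.join(b));                 // Degree[x=0.8, y=0.9]
    }
}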
Abstract:
The design of heterogeneous systems requires two important steps, namely modelling and simulation. Usually, simulators are connected and synchronized through a co-simulation bus. Current approaches have many drawbacks: they are not always suited to distributed environments, simulation execution time can be very disappointing, and each simulator has its own simulation kernel. We propose a new approach consisting of the development of a compiled multi-language simulator in which each model can be described using different modelling languages such as SystemC, ESyS.Net or others. Each model generally contains modules and the means of communication between them. The modules describe the functionality of the desired system. They are described using object-oriented programming and can be written in a syntax of the user's choosing. We thus propose a separation between the modelling language and the simulation. The models are transformed into a common internal representation, which can be viewed as a set of objects. Our environment compiles the internal objects into unified code, instead of using several modelling languages that add many communication mechanisms and extra information. Optimizations may include various mechanisms, such as grouping processes into a single sequential process while preserving the semantics of the models. We use two levels of abstraction: register transfer level (RTL) and transaction level modelling (TLM). RTL supports modelling at a low level of abstraction, where communication between modules takes place through signals and signalling. TLM models transactional communication at a higher level of abstraction. Our objective is to support both kinds of simulation while leaving the choice of modelling language to the user. Likewise, we propose to use a single kernel instead of several, and to remove the co-simulation bus in order to speed up simulation.
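A minimal sketch, in Java, of the single-kernel idea: processes compiled from different front-end languages share one internal representation and one event loop, with RTL-style signals scheduling their readers. The Process and Signal shapes are assumptions for illustration.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class UnifiedKernelSketch {
    interface Process { void run(); }

    static final Queue<Process> runnable = new ArrayDeque<>();

    /** A signal schedules its readers when its value changes, RTL-style. */
    static class Signal {
        int value;
        final List<Process> readers = new ArrayList<>();
        void write(int v) {
            if (v != value) { value = v; runnable.addAll(readers); }
        }
    }

    public static void main(String[] args) {
        Signal s = new Signal();
        Process printer = () -> System.out.println("s changed to " + s.value);
        s.readers.add(printer);

        s.write(1);                   // a model (e.g. compiled from SystemC) drives s
        while (!runnable.isEmpty()) { // one shared scheduler, no co-simulation bus
            runnable.poll().run();
        }
    }
}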
Abstract:
In order to optimize the in-memory representation of Scheme records in the Gambit compiler, we introduced into it a system of type annotations and vectors containing an abbreviated representation of records. The latter omit the reference to the type descriptor and the header normally present on each record, and instead use a typing tree covering all of memory to recover the vector containing a reference. These new features are implemented through changes to the Gambit runtime. We introduce new primitives into the language and modify the existing architecture to handle the new data types correctly. The garbage collector must be modified to account for records containing heterogeneous values with irregular alignments, and for references contained in other objects. Management of the typing tree must also be handled automatically. We then run a series of performance tests to determine whether these new primitives yield any gains. We observe a major performance improvement in allocation and in garbage-collector behaviour for large typed records and for vectors of records, typed or not. Small overheads are nevertheless incurred on field accesses and, in the case of record vectors, on access to the type descriptor.
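A minimal sketch, in Java, of the typing-tree lookup: with the per-record header and type-descriptor reference omitted, the type of an object is recovered by searching a tree keyed on memory addresses. A TreeMap stands in for Gambit's typing tree; the addresses and type names are illustrative.

import java.util.TreeMap;

public class TypingTreeSketch {
    /** Maps the start address of each typed allocation region to its type name. */
    static final TreeMap<Long, String> typingTree = new TreeMap<>();

    static String typeOf(long address) {
        // The region whose start address is the greatest one <= address.
        var entry = typingTree.floorEntry(address);
        return entry == null ? "unknown" : entry.getValue();
    }

    public static void main(String[] args) {
        typingTree.put(0x1000L, "point-record");  // illustrative regions
        typingTree.put(0x2000L, "string-vector");
        System.out.println(typeOf(0x1040L)); // point-record
        System.out.println(typeOf(0x2400L)); // string-vector
    }
}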
Abstract:
This work aims to study the variation in subduction zone geometry along and across the arc and the fault pattern within the subducting plate. The depth of penetration as well as the dip of the Benioff zone vary considerably along the arc, corresponding to the curvature of the fold-thrust belt, which varies from concave to convex in different sectors of the arc. The entire arc is divided into 27 segments, and the depth sections thus prepared are used to investigate the average dip of the Benioff zone in different parts of the arc, the penetration depth of the subducting lithosphere, the subduction zone geometry underlying the trench, the arc-trench gap, etc. The study also describes how different seismogenic sources are identified in the region, the estimation of moment release rates, and the deformation pattern. The region is divided into broad seismogenic belts. Based on previous studies and the seismicity pattern, we identified several broad, distinct seismogenic belts/sources. These are 1) the outer arc region consisting of the Andaman-Nicobar islands, 2) the back-arc Andaman Sea, 3) the Sumatran Fault Zone (SFZ), 4) the Java onshore region, termed the Java Fault Zone (JFZ), 5) the Sumatran fore-arc sliver plate containing the Mentawai Fault (MFZ), 6) the offshore Java fore-arc region, and 7) the Sunda Strait region. As the seismicity is variable, it is difficult to demarcate individual seismogenic sources. Hence, we employed a moving window method with a window length of 3-4° and 50% overlap, proceeding from one end of the arc to the other. We succeeded in defining 4 sources each in the Andaman fore-arc and back-arc regions, 9 such sources (moving windows) in the Sumatran Fault Zone (SFZ), 9 sources in the offshore SFZ region, and 7 sources in the offshore Java region. Because of the low seismicity along the JFZ, it is separated into three seismogenic sources, namely West Java, Central Java and East Java. The Sunda Strait is considered a single seismogenic source. The deformation rates for each of the seismogenic zones have been computed. A detailed error analysis of the velocity tensors using the Monte Carlo simulation method has been carried out in order to obtain uncertainties. The eigenvalues and the respective eigenvectors of the velocity tensor are computed to analyze the actual deformation pattern for the different zones. The results obtained are discussed in the light of regional tectonics, and their implications in terms of geodynamics are enumerated. In the light of the recent major earthquakes (the 26th December 2004 and 28th March 2005 events) and the ongoing seismic activity, we have recalculated the variation in crustal deformation rates before and after these earthquakes in the Andaman-Sumatra region, including data up to 2005, and the significant results are presented. In this chapter, the downgoing lithosphere along the subduction zone is modelled using free-air gravity data, taking into consideration the thickness of the crustal layer, the thickness of the subducting slab, sediment thickness, the presence of volcanism, the proximity of the continental crust, etc. A systematic and detailed gravity interpretation, constrained by seismicity and seismic data in the Andaman arc and Andaman Sea region, is presented in order to delineate the crustal structure and density heterogeneities along and across the arc and their correlation with seismogenic behaviour.
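A minimal sketch, in Java, of the moving-window segmentation described above: windows 3-4° of arc long with 50% overlap, so each step advances by half a window. The arc span used here is illustrative.

public class MovingWindowSketch {
    public static void main(String[] args) {
        double start = 0, end = 12, window = 4; // degrees of arc, illustrative span
        double step = window / 2;               // 50% overlap
        for (double lo = start; lo + window <= end; lo += step) {
            System.out.printf("window: %.1f - %.1f deg%n", lo, lo + window);
        }
        // yields 0-4, 2-6, 4-8, 6-10, 8-12: adjacent windows share half their span
    }
}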
Abstract:
This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules in building a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out in the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module, and a target language sentence generator. The generation of the list of part-of-speech tags, chunk tags, and the hierarchical dependencies among the chunks required for the translation process has also been done. In the development process, the major goals are (a) accuracy of translation, (b) speed, and (c) space. Accuracy-wise, smart tools for handling transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed. The grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures, and run-time frequency-based rearrangement of the grammar, which substantially reduces parsing and generation time, are required. The space requirement also has to be minimised.
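A minimal sketch, in Java, of the transfer-based pipeline the abstract describes, with analysis, parsing, transfer and generation composed as stages; the stage outputs here are placeholder strings rather than real linguistic structures.

import java.util.function.Function;

public class TransferPipelineSketch {
    public static void main(String[] args) {
        Function<String, String> morphAnalyzer = s -> "[analyzed " + s + "]";
        Function<String, String> parser        = s -> "[parsed " + s + "]";
        Function<String, String> transfer      = s -> "[transferred " + s + "]";
        Function<String, String> generator     = s -> "[English for " + s + "]";

        // Compose the four modules into one Malayalam-to-English function.
        Function<String, String> mt = morphAnalyzer.andThen(parser)
                                                   .andThen(transfer)
                                                   .andThen(generator);
        System.out.println(mt.apply("Malayalam sentence"));
    }
}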