20 results for Genetic programming (Computer science)


Relevance:

100.00%

Publisher:

Abstract:

We present initial research regarding a system capable of generating novel card games. We furthermore propose a method for computationally analysing existing games of the same genre. Ultimately, we present a formalisation of card game rules, and a context-free grammar G_cardgame capable of expressing the rules of a large variety of card games. Example derivations are given for the poker variant Texas Hold'em, Blackjack and UNO. Stochastic simulations are used both to verify the implementation of these well-known games, and to evaluate the results of new game rules derived from the grammar. In future work, this grammar will be used to evolve completely novel card games using a grammar-guided genetic program.
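
As a rough illustration of grammar-guided rule generation (the actual G_cardgame grammar is not reproduced in the abstract), the following minimal Python sketch derives a rule description from a toy context-free grammar; all nonterminals and productions are invented for the example and are not the authors' grammar.

```python
import random

# Toy stand-in for a card game grammar; the nonterminals and productions
# here are invented for illustration and are NOT the G_cardgame grammar
# described in the abstract.
TOY_GRAMMAR = {
    "<game>":      [["<setup>", "<turn>", "<win>"]],
    "<setup>":     [["deal", "<number>", "cards to each player."]],
    "<turn>":      [["play a card matching", "<attribute>", "or draw."]],
    "<attribute>": [["suit"], ["rank"], ["color"]],
    "<win>":       [["first player with no cards wins."]],
    "<number>":    [["5"], ["7"]],
}

def derive(symbol: str, rng: random.Random) -> str:
    """Recursively expand a nonterminal into a flat rule string."""
    if symbol not in TOY_GRAMMAR:          # terminal symbol
        return symbol
    production = rng.choice(TOY_GRAMMAR[symbol])
    return " ".join(derive(s, rng) for s in production)

if __name__ == "__main__":
    # Each random seed yields one derivation, i.e. one candidate rule set.
    print(derive("<game>", random.Random(42)))
```

In a grammar-guided genetic program, such derivations would form the individuals of the evolving population, with the simulations mentioned in the abstract providing the fitness signal.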

Relevance:

100.00%

Publisher:

Abstract:

Background: Gray scale images make up the bulk of the data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers with little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific and do not provide a clear path when one wants to shape a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
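
MIA's actual plug-in interface and tool names are not given in the abstract. As a loose illustration of the "string-described, atomic plug-in" idea it mentions, here is a minimal Python sketch of a filter registry driven by strings such as "scale:factor=2.0"; the string syntax, filter names and parameters are assumptions made up for this example, not MIA's API.

```python
# Sketch of configuring atomic filter plug-ins from plain strings, so that
# the same description can be used in a shell script and in compiled code.
# Filter names and the "name:key=value,..." syntax are illustrative only.
from typing import Callable, Dict, List

Filter = Callable[[List[float]], List[float]]
FILTERS: Dict[str, Callable[..., Filter]] = {}

def register(name: str):
    """Decorator that adds a filter factory to the plug-in registry."""
    def wrap(factory):
        FILTERS[name] = factory
        return factory
    return wrap

@register("scale")
def make_scale(factor: float = 1.0) -> Filter:
    return lambda pixels: [p * factor for p in pixels]

@register("threshold")
def make_threshold(level: float = 0.5) -> Filter:
    return lambda pixels: [1.0 if p >= level else 0.0 for p in pixels]

def filter_from_string(description: str) -> Filter:
    """Turn 'name:key=value,key=value' into a configured filter."""
    name, _, params = description.partition(":")
    kwargs = {}
    if params:
        for pair in params.split(","):
            key, value = pair.split("=")
            kwargs[key] = float(value)
    return FILTERS[name](**kwargs)

if __name__ == "__main__":
    pipeline = [filter_from_string(s) for s in ("scale:factor=2.0", "threshold:level=0.8")]
    image = [0.1, 0.3, 0.5]          # stand-in for gray scale pixel data
    for f in pipeline:
        image = f(image)
    print(image)                     # [0.0, 0.0, 1.0]
```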

Relevance:

100.00%

Publisher:

Abstract:

In programming languages with dynamic use of memory, such as Java, knowing that a reference variable x points to an acyclic data structure is valuable for the analysis of termination and resource usage (e.g., execution time or memory consumption). For instance, this information guarantees that the depth of the data structure to which x points is greater than the depth of the data structure pointed to by x.f for any field f of x. This, in turn, allows bounding the number of iterations of a loop which traverses the structure by its depth, which is essential in order to prove the termination or infer the resource usage of the loop. The present paper provides an Abstract-Interpretation-based formalization of a static analysis for inferring acyclicity, which works on the reduced product of two abstract domains: reachability, which models the property that the location pointed to by a variable w can be reached by dereferencing another variable v (in this case, v is said to reach w); and cyclicity, modeling the property that v can point to a cyclic data structure. The analysis is proven to be sound and optimal with respect to the chosen abstraction.
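
The paper's abstract domains are only described informally here. As a very rough sketch of the underlying intuition (a field update v.f = w can close a cycle only if w may already reach v), the following toy Python analysis tracks may-reach pairs and a may-be-cyclic set; it is an illustration under that simplifying assumption, not the reduced-product analysis formalized in the paper.

```python
# Toy may-reach / may-be-cyclic tracking for field assignments 'v.f = w'.
# This is an illustration of the intuition only, not the paper's domains
# or transfer functions.
from typing import Set, Tuple

Reach = Set[Tuple[str, str]]   # (v, w) means "v may reach w"

def assign_field(reach: Reach, cyclic: Set[str], v: str, w: str) -> None:
    """Abstractly execute 'v.f = w' on the may-reach and may-be-cyclic sets."""
    if (w, v) in reach or v == w:
        # Storing w into a field of v may close a path back to v: possible cycle.
        cyclic.add(v)
        cyclic.add(w)
    # Everything that may reach v may now also reach whatever w may reach.
    sources = {a for (a, b) in reach if b == v} | {v}
    targets = {b for (a, b) in reach if a == w} | {w}
    reach.update((a, b) for a in sources for b in targets)

if __name__ == "__main__":
    reach: Reach = {("x", "y")}            # x may reach y
    cyclic: Set[str] = set()
    assign_field(reach, cyclic, "y", "x")  # y.next = x closes a possible cycle
    print(sorted(cyclic))                  # ['x', 'y']
```

Variables that never enter the may-be-cyclic set are the ones for which the depth-based loop bound described above remains valid.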

Relevance:

100.00%

Publisher:

Abstract:

This thesis studies the representation, modeling and comparison of collections in the Semantic Web using ontologies. Collections, understood as groups of objects or elements with their own identity, are constructions that appear frequently in almost every real-world domain. It is therefore essential to have conceptualizations of these abstract structures, and representations of these conceptualizations in computer systems, that properly define their semantics. While in many areas of Computer Science and Artificial Intelligence, such as programming, databases or information retrieval, collections have been studied extensively and representations exist for a multitude of conceptualizations, in the Semantic Web their study has been quite limited. In fact, few ontology-based representations of collections exist so far, and those that do cover only some types of collections and have important limitations. This prevents the proper representation of collections and hinders other common tasks such as comparing collections, which is critical in everyday operations such as semantic search or data linking on the Semantic Web. To address this problem, the thesis proposes a model of collections based on a new classification of collections according to their structural characteristics (homogeneity, uniqueness, order and cardinality). This classification defines a taxonomy with up to 16 different types of collections. Among other advantages, it makes it possible to exploit the semantics of the structural properties of each type of collection when making comparisons, using the most appropriate similarity and dissimilarity functions. The thesis therefore also develops a new catalog of similarity functions for the different types of collections, gathering the best-known (dis)similarity functions as well as some new ones. The proposal is implemented through two parallel ontologies: the E-Collections ontology, which represents the different types of collections in the taxonomy and their axioms, and the SIMEON ontology (Similarity Measures Ontology), which represents the types of (dis)similarity functions for each type of collection. Thanks to these ontologies, once two collections are represented as instances of the most appropriate class of the E-Collections ontology, it is automatically known which (dis)similarity functions of the SIMEON ontology can be used to compare them. Finally, the feasibility and usefulness of this approach to modeling and comparing collections is demonstrated in the field of oenology, by applying both the E-Collections and SIMEON ontologies to the representation and comparison of wines with the E-Baco ontology.
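
The catalog of (dis)similarity functions is not reproduced in the abstract. As a small illustration of the idea that a collection's structural properties select the applicable functions, the sketch below pairs Jaccard similarity with set-like collections (unique, unordered) and a sequence-matching ratio with ordered ones; the function choices and the dispatch rule are assumptions for the example, not the SIMEON catalog.

```python
# Picking a similarity function from the structural type of the collection:
# unordered collections with unique elements get Jaccard, ordered collections
# get a sequence-based ratio.  Illustrative only.
from difflib import SequenceMatcher

def jaccard(a: set, b: set) -> float:
    """Similarity for unordered collections with unique elements."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def sequence_similarity(a: list, b: list) -> float:
    """Similarity for ordered collections, where element order matters."""
    return SequenceMatcher(None, a, b).ratio()

def compare(a, b) -> float:
    """Dispatch on the structural type of the two collections."""
    if isinstance(a, set) and isinstance(b, set):
        return jaccard(a, b)
    if isinstance(a, list) and isinstance(b, list):
        return sequence_similarity(a, b)
    raise TypeError("unsupported collection types for this sketch")

if __name__ == "__main__":
    print(compare({"a", "b", "c"}, {"b", "c", "d"}))   # 0.5 (Jaccard)
    print(compare(["a", "b", "c"], ["b", "c", "a"]))   # order-sensitive score
```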

Relevance:

100.00%

Publisher:

Abstract:

The goal of this final degree project has been to analyse the general evolution of technology, in electronics as well as in telecommunications and computing, from its beginnings to the present day. It studies how these advances have improved and even changed our daily lives, and how, as many of these technological products became objects of mass consumption, the economic and commercial dynamics themselves have led companies to schedule the end of a product's useful life in order to increase demand and, consequently, production. This is what has been known for decades as planned obsolescence. To prepare this document, a brief review was made of the historical events that promoted the start of this practice, together with a survey of the technological devices that have evolved the most and are therefore the most likely targets for it, given how such changes influence consumers. Another important aspect of planned obsolescence is how it influences the behaviour of society, creating consumerist attitudes, which are precisely what keeps the economic cycle based on building this practice into the product from the design stage, since greater consumption means greater production and, therefore, greater profits for the manufacturer. While for some this means a wide profit margin, other parts of the equation fare far worse: third-world countries are being filled with electronic waste, and, in general, the environment and the resources available to us today, which are not unlimited, are being seriously affected by this practice. To close the analysis, some of the techniques employed in several of the most widely used products today are reviewed, together with the conclusions drawn after examining all these aspects of the topic.