843 results for Artificial intelligence algorithms
Abstract:
The development of ubiquitous computing (ubicomp) environments raises several challenges in terms of their evaluation. Ubicomp virtual reality prototyping tools enable users to experience the system to be developed and are of great help in facing those challenges, as they support developers in assessing the consequences of a design decision in the early phases of development. Given the situated nature of ubicomp environments, a particular issue to consider is the level of realism provided by the prototypes. This work presents a case study where two ubicomp prototypes, featuring different levels of immersion (desktop-based versus CAVE-based), were developed and compared. The goal was to determine the cost/benefit relation of both solutions, which provided better user experience results, and whether or not simpler solutions provide the same user experience results as more elaborate ones.
Abstract:
Model finders are very popular for exploring scenarios, helping users validate specifications by navigating through conforming model instances. To be practical, the semantics of such scenario exploration operations should be formally defined and, ideally, controlled by the users, so that they are able to quickly reach interesting scenarios. This paper explores the landscape of scenario exploration operations, by formalizing them with a relational model finder. Several scenario exploration operations provided by existing tools are formalized, and new ones are proposed, namely to allow the user to easily explore very similar (or different) scenarios, by attaching preferences to model elements. As a proof-of-concept, such operations were implemented in the popular Alloy Analyzer, further increasing its usefulness for (user-guided) scenario exploration.
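As an illustration only (a real model finder such as the Alloy Analyzer performs these operations symbolically over a relational encoding, not by enumeration), the "explore very similar or very different scenarios" operations can be sketched over explicitly enumerated instances, here represented as frozensets of atomic facts (all names below are hypothetical):

```python
# Minimal sketch: preference-guided scenario exploration over a list of
# already-enumerated model instances, each a frozenset of atomic facts.

def scenario_distance(a, b):
    """Number of facts present in exactly one of the two instances."""
    return len(a ^ b)

def nearest_scenario(current, candidates):
    """Exploration step: jump to the candidate most similar to `current`."""
    return min(candidates, key=lambda c: scenario_distance(current, c))

def farthest_scenario(current, candidates):
    """Exploration step: jump to the candidate most different from `current`."""
    return max(candidates, key=lambda c: scenario_distance(current, c))
```

For example, starting from the instance {a, b}, `nearest_scenario` would prefer {a, b, c} (distance 1) over {x, y} (distance 4); attaching preferences to individual model elements amounts to weighting their contribution to the distance.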
Abstract:
Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of MTL-R, we propose a new approach for the runtime verification of hard real-time systems. The novelty of our technique is that it is based on incremental evaluation, allowing us to effectively treat duration properties (which play a crucial role in real-time systems). We describe the two levels of operation of our approach: offline simplification by quantifier removal techniques; and online evaluation of a three-valued interpretation for formulas of our fragment. Our experiments show the applicability of this mechanism as well as the validity of the provided complexity results.
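The shape of such a monitor can be sketched in miniature. The class below is not the paper's algorithm, just a hedged illustration of the two ingredients the abstract names: incremental evaluation of a duration property (here, "total time spent in a state never exceeds a bound") and a three-valued verdict, where FALSE is definitive as soon as the bound is crossed and TRUE can only be issued when the trace ends:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = 1      # property conclusively satisfied (only at end of trace)
    FALSE = 0     # property conclusively violated
    UNKNOWN = 2   # trace so far is consistent with both outcomes

class DurationMonitor:
    """Incrementally checks that the accumulated time spent in a
    designated state stays within `bound` time units."""

    def __init__(self, bound):
        self.bound = bound
        self.accumulated = 0.0

    def step(self, in_state, dt):
        """Consume one trace event: `dt` time units, state flag `in_state`."""
        if in_state:
            self.accumulated += dt
        if self.accumulated > self.bound:
            return Verdict.FALSE   # a duration violation cannot be undone
        return Verdict.UNKNOWN     # still satisfiable; keep monitoring

    def finish(self):
        """Verdict once the trace is complete."""
        return Verdict.TRUE if self.accumulated <= self.bound else Verdict.FALSE
```

Each event is processed in constant time, which is the point of incremental evaluation: the monitor never revisits the trace prefix.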
Abstract:
This paper introduces the metaphorism pattern of relational specification and addresses how specifications following this pattern can be refined into recursive programs. Metaphorisms express input-output relationships which preserve relevant information while at the same time some intended optimization takes place. Text processing, sorting, representation changers, etc., are examples of metaphorisms. The kind of metaphorism refinement proposed in this paper is a strategy known as change of virtual data structure. The paper gives sufficient conditions for such implementations to be calculated using relation algebra and illustrates the strategy with the derivation of quicksort as an example.
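The endpoint of the quicksort derivation mentioned above is the classic recursive program. As a concrete reference point only (a functional-style sketch in Python, not the paper's relational calculation), sorting is the metaphorism that preserves the bag of elements while producing an ordered output, and the virtual data structure is the tree of pivot partitions:

```python
def quicksort(xs):
    """Recursive quicksort: partition around a pivot, sort the parts.
    Preserves the multiset of elements (the metaphorism's invariant)
    while establishing order (the intended optimization)."""
    if not xs:
        return []
    pivot, *rest = xs
    smaller = [x for x in rest if x <= pivot]   # left subtree of the virtual partition tree
    larger = [x for x in rest if x > pivot]     # right subtree
    return quicksort(smaller) + [pivot] + quicksort(larger)
```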
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations), and artificial intelligence, among other areas. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers face an increasing demand for processing performance, with the direct consequence of a growing number of processors and systems, resulting in more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption, and improved air conditioning, among other problems. Since some of these problems cannot be easily solved, grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We show some preliminary results of this experience and the extent to which these objectives were achieved.
Abstract:
This is a project on the automatic indexing of television content, a task that will gain importance with the imminent changes to television as we know it. The arrival of the new digital television will bring much more fluid interaction between the viewer and the broadcaster, as well as large numbers of channels, each with programs of completely different types. All of this will make search methods based on the content of these programs absolutely indispensable. Our project therefore focuses on extracting some of the descriptors that will make the categorization of the different television programs possible.
Abstract:
This is a project on the indexing of television content, that is, the process of tagging television programs to facilitate searches by different parameters. The world of television is immersed in a process of evolution and change thanks to the arrival of digital television. This new way of understanding television will open up a wide range of possibilities and allow interaction between users and the broadcaster. The first step in content management is indexing programs by their content. That is our objective: to index television content automatically by means of artificial intelligence.
Abstract:
In this project, an autonomous robot was designed, built, and programmed, equipped with a locomotion system and sensors that allow it to navigate without collisions in a controlled environment. To achieve these objectives, a control unit was designed and programmed that manages the low-data-volume hardware under different operating modes, abstracting it into a single interface. This system was then integrated into the Pyro robotics environment, which allows already-developed artificial intelligence tools to be used and adapted as needed.
Abstract:
Study carried out during a stay at the Computer Science and Artificial Intelligence Lab of the Massachusetts Institute of Technology, between 2006 and 2008. The research developed in this project focuses on machine learning methods for the syntactic analysis of language. As a starting point, we establish that the complexity of language demands not only understanding the computational processes associated with language, but also understanding how the knowledge needed to carry out those processes can be learned automatically.
Abstract:
The paper discusses the use of new techniques to select processes for protein recovery, separation, and purification. It describes a rational approach that uses fundamental databases of protein molecules to simplify the complex problem of choosing high-resolution separation methods for multicomponent mixtures. It examines the role of modern computer techniques in helping to solve these questions.
Abstract:
This paper explains the strategies used and the results obtained in the first exhibition of the new museographic scheme of the Natural History Museum in London, conceived by Roger Miles, Head of the Department of Public Services of that prestigious institution. This initiative sought to attract a larger number of visitors through exhibitions based on models and interactive modules that relegated the objects of the collections to the background. The exhibition was entitled Human Biology and was inaugurated on 24 May 1977. Its subject was human biology, but as this paper argues, Human Biology also served as a means to legitimize the modernizing discourse of human biology as a more rigorous discipline, owing to tools and techniques more precise than those used by traditional physical anthropology. It also sought to build an audience to strengthen the interdisciplinary field of cognitive science and, in particular, artificial intelligence. The exhibition's team of scientific advisers included figures who played a leading role in the development of those disciplines and who needed to demonstrate their validity and usefulness to non-specialists and the general public.
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economical, social, and health policy issues. The type of data needed to carry out such evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically two types of data may be distinguished: (a) general demographic, administrative, or financial data which has been collected not specifically for technology assessment; (b) the data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) data bases have been identified: registries, clinical data bases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvement to the registries and clinical data banks, and development of an international clearinghouse to enhance information diffusion on both existing data bases and available reports on medical technology assessments.
Study of the social behaviors of robotic swarms with application to the cleaning of unstructured spaces
Abstract:
Swarm intelligence is a branch of artificial intelligence that has been gaining considerable momentum in recent times, especially in the field of robotics. In this project we study the social behavior that emerges from the interactions among a given number of autonomous robots in the context of cleaning large areas. Once a scenario and a robot matching the project requirements have been chosen, we run a series of simulations using different search policies, which allow us to evaluate the robots' behavior for a given initial distribution of robots and areas to clean. From the results obtained, we are able to determine which configuration produces the best results.
Abstract:
Etiologic research in psychiatry relies on an objectivist epistemology positing that human cognition is specified by the "reality" of the outer world, which consists of a totality of mind-independent objects. Truth is considered a kind of correspondence relation between words and external objects, and mind a mirror of nature. In our view, this epistemology considerably impedes etiologic research. Objectivist epistemology has recently faced growing critique from diverse scientific fields. Alternative models in neuroscience (neuronal selection), artificial intelligence (connectionism), and developmental psychology (developmental biodynamics) converge in viewing living organisms as self-organizing systems. In this perspective, the organism is not specified by the outer world but enacts its environment by selecting relevant domains of significance that constitute its world. The distinction between mind and body, or organism and environment, is a matter of observational perspective. These models from the empirical sciences are compatible with fundamental tenets of philosophical phenomenology and hermeneutics. They have consequences for research in psychopathology: symptoms cannot be viewed as disconnected manifestations of discrete, localized brain dysfunctions. Psychopathology should therefore focus on how the person's self-coherence is maintained, and on the understanding and empirical investigation of the systemic laws that govern neurodevelopment and the organization of human cognition.