849 results for Artificial intelligence
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among other areas. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers face an increasing demand for processing performance, with the direct consequence of a growing number of processors and systems, resulting in more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Some of these problems cannot be easily solved, so grid computing, understood as a technology enabling the aggregation and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We also show some preliminary results of this experience and to what extent these objectives were achieved.
Abstract:
This project deals with the automatic indexing of television content, a task that will gain importance with the imminent changes to television as we know it. The arrival of the new digital television will bring much more fluid interaction between viewer and broadcaster, as well as a large number of channels, each with programmes of completely different kinds. All of this will make content-based search methods for these programmes absolutely essential. Our project therefore focuses on extracting some of the descriptors that will make it possible to categorize the different television programmes.
Abstract:
This is a project on the indexing of television content, that is, the process of tagging television programmes to enable searches by different parameters. The world of television is immersed in a process of evolution and change thanks to the arrival of digital television. This new way of understanding television will open up a wide range of possibilities and allow interaction between users and broadcaster. The first step in content management is indexing programmes by their content. That is our objective: to index television content automatically by means of artificial intelligence.
Abstract:
In this project, an autonomous robot has been designed, built and programmed, equipped with a locomotion system and sensors that allow it to navigate a controlled environment without collisions. To achieve these objectives, a control unit was designed and programmed that manages the low-data-volume hardware in different operating modes, abstracting it behind a single interface. This system was then integrated into the Pyro robotics environment, which makes it possible to use and adapt, as needed, already developed artificial intelligence tools.
Abstract:
Study carried out during a stay at the Computer Science and Artificial Intelligence Lab of the Massachusetts Institute of Technology between 2006 and 2008. The research developed in this project focuses on machine learning methods for the syntactic analysis of language. As a starting point, we establish that the complexity of language demands not only understanding the computational processes associated with language, but also understanding how the knowledge needed to carry out these processes can be learned automatically.
Abstract:
The paper discusses the use of new techniques to select processes for protein recovery, separation and purification. It describes a rational approach that uses fundamental databases of protein molecules to simplify the complex problem of choosing high-resolution separation methods for multicomponent mixtures. It examines the role of modern computer techniques in helping to solve these questions.
Human biology: cybernetics and human biology in the new exhibition discourse of the 1970s
Abstract:
This paper explains the strategies used and the results obtained in the first exhibition of the new museographic scheme of the Natural History Museum in London, conceived by Roger Miles, Head of the Department of Public Services of that prestigious institution. This initiative sought to attract a larger number of visitors through exhibitions based on models and interactive modules that relegated the objects of the collections to a secondary role. The exhibition was entitled Human Biology and was inaugurated on 24 May 1977. Its subject was human biology, but, as this paper argues, Human Biology also served as a means of legitimizing the modernizing discourse of human biology as a more rigorous discipline, with more precise tools and techniques than those used by traditional physical anthropology. It also sought to build an audience to strengthen the interdisciplinary field of cognitive science, and in particular artificial intelligence. The exhibition's team of scientific advisers included figures who played a leading role in the development of these disciplines and who needed to demonstrate their validity and usefulness to non-specialists and the general public.
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economic, social, and health policy issues. The type of data needed to carry out such an evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically, two types of data may be distinguished: (a) general demographic, administrative, or financial data collected not specifically for technology assessment; (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) databases have been identified: registries, clinical databases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvement of registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing databases and available reports on medical technology assessments.
Abstract:
Swarm intelligence is a branch of artificial intelligence that has been gaining considerable momentum in recent years, especially in the field of robotics. In this project we study the social behaviour emerging from the interactions among a given number of autonomous robots in the task of cleaning large surfaces. Once a scenario and a robot meeting the project's requirements have been chosen, we run a series of simulations with different search policies, which allow us to evaluate the robots' behaviour for given initial conditions of robot distribution and areas to be cleaned. From the results obtained we are able to determine which configuration yields the best results.
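The kind of simulation this abstract describes can be sketched in miniature as follows. This is an illustrative toy only: the grid size, robot count and the random-walk search policy are assumptions of this sketch, not the project's actual scenario or policies.

```python
import random

# Toy swarm-cleaning simulation: several robots random-walk over a grid
# until every cell has been visited ("cleaned"). All parameters here are
# illustrative assumptions, not the project's setup.
def simulate(n_robots=3, size=10, max_steps=20000, seed=1):
    """Return the number of steps until every cell has been cleaned."""
    random.seed(seed)
    dirty = {(x, y) for x in range(size) for y in range(size)}
    robots = [(random.randrange(size), random.randrange(size))
              for _ in range(n_robots)]
    for pos in robots:
        dirty.discard(pos)          # starting cells count as cleaned
    for step in range(1, max_steps + 1):
        for i, (x, y) in enumerate(robots):
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            x = min(max(x + dx, 0), size - 1)   # stay inside the arena
            y = min(max(y + dy, 0), size - 1)
            robots[i] = (x, y)
            dirty.discard((x, y))
        if not dirty:
            return step
    return None  # not fully cleaned within the step budget

steps = simulate()
```

Comparing `steps` across different policies (random walk, spiral sweep, partitioned zones) for fixed initial conditions is the shape of the evaluation the project performs at full scale.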
Abstract:
Etiologic research in psychiatry relies on an objectivist epistemology positing that human cognition is specified by the "reality" of the outer world, which consists of a totality of mind-independent objects. Truth is considered a correspondence relation between words and external objects, and mind a mirror of nature. In our view, this epistemology considerably impedes etiologic research. Objectivist epistemology has recently faced growing criticism from diverse scientific fields. Alternative models in the neurosciences (neuronal selection), artificial intelligence (connectionism), and developmental psychology (developmental biodynamics) converge in viewing living organisms as self-organizing systems. In this perspective, the organism is not specified by the outer world but enacts its environment by selecting relevant domains of significance that constitute its world. The distinction between mind and body, or organism and environment, is a matter of observational perspective. These models from the empirical sciences are compatible with fundamental tenets of philosophical phenomenology and hermeneutics. They have consequences for research in psychopathology: symptoms cannot be viewed as disconnected manifestations of discrete localized brain dysfunctions. Psychopathology should therefore focus on how the person's self-coherence is maintained, and on the understanding and empirical investigation of the systemic laws that govern neurodevelopment and the organization of human cognition.
Abstract:
We present a method for segmenting white matter tracts from high angular resolution diffusion MR images by representing the data in a 5-dimensional space of position and orientation. Whereas crossing fiber tracts cannot be separated in 3D position space, they clearly disentangle in 5D position-orientation space. The segmentation is done using a 5D level set method applied to hypersurfaces evolving in 5D position-orientation space. In this paper we present a methodology for constructing the position-orientation space and then show how to implement the standard level set method in such a non-Euclidean high-dimensional space. Level set theory is defined for N dimensions, but there are several practical implementation details to consider, such as the mean curvature term. Finally, we show results from a synthetic model and a few preliminary results on real data of a human brain acquired by high angular resolution diffusion MRI.
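The standard level set evolution the abstract refers to can be written, in its generic N-dimensional form, as follows. This is the textbook formulation, not the authors' exact equations; the speed weights α and β are illustrative:

```latex
\frac{\partial \phi}{\partial t} = F\,\lvert \nabla \phi \rvert,
\qquad
F = \alpha + \beta\,\kappa,
\qquad
\kappa = \nabla \cdot \frac{\nabla \phi}{\lvert \nabla \phi \rvert},
```

where φ is the implicit function whose zero level set is the evolving hypersurface and κ is the mean curvature, the implementation detail the abstract singles out. In the paper's non-Euclidean 5D position-orientation space, the gradient and divergence operators must additionally account for the space's metric.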
Abstract:
Study and implementation of an intelligent multi-agent system and its application to fuzzy systems, using the JADE and JFuzzyLogic libraries.
Abstract:
The purpose of this paper is to propose a Neural-Q_learning approach designed for the online learning of simple, reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network, allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior that maps sensory states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning-based behaviors, showing their performance with an underwater robot in a target-following task. Real experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
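The core mechanism the abstract describes, a Q_function fitted by a trainable approximator and updated from a database of recent samples, can be sketched as follows. For brevity this sketch uses a linear approximator in place of the paper's multi-layer network, and every constant, feature size and reward here is an illustrative assumption:

```python
import numpy as np

# Hedged sketch of Q-learning with function approximation and a replay
# database of recent samples. A linear approximator (one weight row per
# action) stands in for the paper's multi-layer neural network.
rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 4, 3        # illustrative sizes
ALPHA, GAMMA = 0.1, 0.9             # learning rate, discount factor
W = np.zeros((N_ACTIONS, N_FEATURES))

def q_values(state):
    """Approximate Q(s, a) for every action a."""
    return W @ state

def update(state, action, reward, next_state):
    """One TD(0) update of the approximator from a stored sample."""
    target = reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    W[action] += ALPHA * td_error * state   # gradient step on the weights

# Database of the most recent learning samples (s, a, r, s'),
# replayed in small batches to speed up and stabilize learning.
replay = []
for _ in range(200):
    s = rng.random(N_FEATURES)
    a = int(rng.integers(N_ACTIONS))
    r = float(s.sum())                      # toy reward signal
    s2 = rng.random(N_FEATURES)
    replay.append((s, a, r, s2))
    if len(replay) > 50:                    # keep only the most recent
        replay.pop(0)
    for idx in rng.choice(len(replay), size=4):
        update(*replay[idx])
```

Replacing `q_values` and `update` with a neural network and its backpropagation step, and the toy transitions with sonar/state readings and thruster actions, recovers the structure of one Neural-Q_learning behavior in the paper.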
Abstract:
This paper presents a hybrid behavior-based scheme using reinforcement learning for the high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi-online neural-Q_learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous version of the Q_learning algorithm based on a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.