19 results for Autonomic
at Universidad Politécnica de Madrid
Abstract:
The increasing complexity of current software systems is encouraging the development of self-managed software architectures, i.e. systems capable of reconfiguring their structure at runtime to fulfil a set of goals. Several approaches have covered different aspects of their development, but some issues remain open, such as the maintainability or the scalability of self-management subsystems. Centralized approaches, like self-adaptive architectures, offer good maintenance properties but do not scale well for large systems. On the contrary, decentralized approaches, like self-organising architectures, offer good scalability but are not maintainable: reconfiguration specifications are spread across the system and often tangled with functional specifications. To address these issues, this paper presents an aspect-oriented autonomic reconfiguration approach where: (1) each subsystem is provided with self-management properties, so it can evolve itself and the components it is composed of; (2) self-management concerns are isolated and encapsulated into aspects, thus improving their reuse and maintenance. Summary: An approach based on self-reconfiguration of the software architecture is presented.
Abstract:
Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make the management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and specific features, which usually require specialized administrator skills. Autonomic computing can help manage this complexity. The present paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently alleviating the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system are not usually required immediately. Performance improvements are therefore related not only to the current I/O access but also to future ones, since the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
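As an illustration of the long-term prediction idea, the sketch below applies simple exponential smoothing to a history of observed I/O request rates and projects the smoothed level over a horizon. This is a minimal stand-in for the forecasting component the abstract describes; the function name, the smoothing method and the parameter values are assumptions, not taken from the paper.

```python
def forecast_workload(history, alpha=0.3, horizon=4):
    """Forecast future I/O workload from past request rates.

    Uses simple exponential smoothing: each observation pulls the
    running level toward it with weight `alpha`. The final level is
    then projected flat over the requested horizon.
    """
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

# A flat history yields a flat forecast; a rising history yields a
# level between the oldest and newest observations.
print(forecast_workload([10, 10, 10]))        # steady workload
print(forecast_workload([10, 20, 40], horizon=2))
```

An autonomic subsystem could feed such per-component forecasts into a placement policy, moving data toward components expected to be lightly loaded when the access actually happens.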
Abstract:
Changes in blood pressure after a beta-blocker.
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement public offerings or to fully substitute them. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from maturing adequately, and the myriad of available options has induced in customers the fear of technology lock-in. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on the study of idealized scenarios dissimilar from real-world situations, and the latter developing solutions without considering how they fit with common standards, or without disseminating their results.
To solve this problem, I propose a modular management system for private cloud infrastructures that is focused on applications instead of just hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views, which separate the concerns of each stakeholder while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In this model, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's characteristics. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. I also describe a management engine that, starting from the state of the environment and using the aforementioned set of actions, is responsible for resource placement. It is divided into two tiers: the Application Managers layer, which deals only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure.
The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a purpose-built heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is coupled with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
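The online phase described above can be illustrated with a toy first-fit-decreasing heuristic that places virtual machines on hosts with a single capacity dimension. This is a hypothetical sketch, not the thesis's actual heuristic (which is not detailed in the abstract); in the consolidation phase the same inputs would instead be handed to an integer programming solver for a globally optimized packing.

```python
def online_place(vms, hosts):
    """Greedy first-fit-decreasing placement.

    vms:   {vm_name: capacity demand}
    hosts: {host_name: total capacity}
    Returns a {vm_name: host_name} mapping, assigning each VM
    (largest first) to the first host with enough free capacity.
    """
    free = dict(hosts)  # host -> remaining capacity
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, cap in free.items():
            if cap >= demand:
                placement[vm] = host
                free[host] -= demand
                break
        else:
            raise RuntimeError(f"no host can fit {vm}")
    return placement

print(online_place({"vm1": 6, "vm2": 5, "vm3": 3}, {"h1": 8, "h2": 8}))
```

A fast heuristic like this keeps the online phase cheap enough to run on every request, while the slower exact solver is reserved for periodic consolidation.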
Abstract:
Models are an effective tool for systems and software design. They allow software architects to abstract away irrelevant details. Those qualities are also useful for the technical management of networks, systems and software, such as those that compose service oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
Abstract:
The magazine of the Spanish Nuclear Society (SNE), "Nuclear España", is a scientific-technical publication with almost thirty years of uninterrupted publication and more than 300 issues. Its pages address technical subjects related to nuclear energy, as well as the activities carried out by the SNE, especially at national and international meetings. The main part of the magazine consists of articles written by well-known specialists from the energy industry. One of the magazine's top goals is to help transfer knowledge from the generation that built the nuclear power plants in Spain to the new generation of professionals who have started their nuclear careers in recent years. Each issue is monographic, trying to cover as many aspects of a topic as possible, with contributions from companies, research centers and universities that provide complementary points of view. The articles also allow readers to go deeper into the issue's topic, broadening their view of the nuclear field and helping to share knowledge across the industry. The news section covers current events in the sector as a whole. The editorial section reflects the opinion of the SNE Governing Board and the Magazine Committee on subjects of interest in this field, while the monthly interview presents the opinions of outstanding professionals. Of the eleven issues published per year, three have a markedly international character: the one dedicated to operating experience at Spanish and European nuclear power plants, the monographic issue devoted to the Annual Meeting of the SNE, and the international issue, which covers the latest activities of the Spanish industry in international projects. The first two are bilingual (Spanish-English), whereas the international edition is published entirely in English.
Besides its distribution to all members of the SNE, the magazine is distributed, at the national level, to companies and organizations related to nuclear power, universities, research centers, representatives of the Central, Autonomic and Local Administrations, and mass media and communication professionals. It is also sent to utilities and research centers in Europe, the United States, South America and Asia.
Abstract:
Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four levels: the parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent, in practical, economical and evolutionary terms, is to reduce the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, more precisely, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we show a general model of how biological emotional systems operate following this theoretical analysis, and how this model is also applicable to a wide spectrum of artificial systems.
Abstract:
Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: the parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents, in practical, economical and evolutionary terms, is to reduce the dimensionality of this configuration space. Emotions play a critical role in this reduction, which is achieved by functionalization, interface minimization and patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we show a general model of how emotion supports functional adaptation and how biological emotional systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems may be only a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately.
In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
Technical systems are becoming more complex: they incorporate more advanced functionality, are more integrated with other systems, and are deployed in less controlled environments. All this implies a more demanding and uncertain scenario for control systems, which are also required to be more autonomous and dependable. Autonomous adaptivity is a current challenge for existing control technologies. The ASys research project proposes to address it by moving the responsibility for adaptivity from the engineers at design time to the system itself at run-time. This thesis advances the formulation and technical realization of the ASys principles of model-based self-cognition and run-time self-management for robust autonomy. The work has focused on the biologically inspired capability of self-awareness, exploring the possibility of embedding it into the very architecture of control systems. Besides self-awareness, other themes relevant to the envisioned solution have been explored: functional modeling, software modeling, pattern technology, component technology and fault tolerance. The state of the art in the fields relevant to self-awareness and adaptivity in technical systems has been analysed: cognitive architectures, fault-tolerant control, dynamic software architectures and autonomic computing. The existing ASys Theoretical Framework for cognitive autonomous systems has been adapted to provide a basis for this analysis of self-awareness and adaptation, and to conceptually support the subsequent development of the solution.
The thesis proposes a general design solution for building self-aware autonomous systems. Its central idea is the integration of a metacontroller into the control architecture of the autonomous system, capable of perceiving the functional state of the control system and, if necessary, reconfiguring it at run-time. This metacontrol solution has been formalised into four design patterns: i) the Metacontrol Pattern, which defines the integration of a metacontrol subsystem that controls the domain control system through the interface provided by its component platform; ii) the Epistemic Control Loop pattern, which defines a model-based cognitive control loop that can be applied to the design of such a metacontroller; iii) the Deep Model Reflection pattern, which proposes a solution for producing the executable model used by the metacontroller through a model-to-model transformation from the system's engineering model; and, finally, iv) the Functional Metacontrol pattern, which structures the metacontroller in two loops: one controlling the configuration of the components of the control system, and another on top of it controlling the functions realised by that configuration; in this way the functional and structural concerns become decoupled.
The OM Architecture and the TOMASys metamodel are the core pieces of the architectural framework developed to realize this patterned solution. The TOMASys metamodel has been developed to represent the structure of any autonomous system and its relation to the system's functional requirements. The OM Architecture is a reference pattern for building a metacontroller according to the proposed design patterns; this metacontroller can be integrated on top of any component-based control architecture. At the core of its operation lies a TOMASys model of the control system, which the metacontroller uses to monitor the system and to compute the reconfiguration actions needed to adapt it to the circumstances at each moment. An engineering process, complemented with other assets, has been elaborated to guide the application of the OM architectural framework. The OM Engineering Process defines the methodology to follow to build the metacontrol subsystem for an autonomous system from its functional model. The OMJava library provides an implementation of the OM metacontroller that can be integrated into the control of any autonomous system, independently of the application domain or its implementation technology. Finally, the complete solution has been validated through the development of an autonomous mobile robot that incorporates an OM metacontroller. The self-awareness and adaptivity properties provided by the metacontroller have been validated in different operation scenarios, in which the robot was able to overcome failures in the control system through reconfigurations orchestrated by the metacontroller.
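A drastically simplified sketch of the two-loop metacontrol idea follows, with made-up names and structures (the abstract does not give implementation details): a structural loop plans repairs of failed components in a function's active configuration, while a functional loop plans a switch to an alternative configuration when a fully healthy one exists.

```python
def metacontrol_step(functions, component_ok, configurations):
    """One metacontrol cycle over a toy component-based controller.

    functions:      {function: active configuration name}
    component_ok:   {component: bool} observed health
    configurations: {configuration: [components it needs]}
    Returns a list of planned reconfiguration actions.
    """
    actions = []
    for fn, cfg in functions.items():
        failed = [c for c in configurations[cfg] if not component_ok[c]]
        if not failed:
            continue  # function's current configuration is healthy
        # structural loop: try to repair the current configuration
        actions += [("restart", c) for c in failed]
        # functional loop: also plan a switch if some alternative
        # configuration is fully healthy right now
        for alt, comps in configurations.items():
            if alt != cfg and all(component_ok[c] for c in comps):
                actions.append(("switch", fn, alt))
                break
    return actions

print(metacontrol_step(
    {"navigation": "cfgA"},
    {"laser": False, "motor": True, "camera": True},
    {"cfgA": ["laser", "motor"], "cfgB": ["camera", "motor"]}))
```

In the thesis's terms, the health data would come from a TOMASys model of the control system rather than a flat dictionary, and the actions would be executed through the component platform's interface.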
Abstract:
A lo largo de las últimas décadas el desarrollo de la tecnología en muy distintas áreas ha sido vertiginoso. Su propagación a todos los aspectos de nuestro día a día parece casi inevitable y la electrónica de consumo ha invadido nuestros hogares. No obstante, parece que la domótica no ha alcanzado el grado de integración que cabía esperar hace apenas una década. Es cierto que los dispositivos autónomos y con un cierto grado de inteligencia están abriéndose paso de manera independiente, pero el hogar digital, como sistema capaz de abarcar y automatizar grandes conjuntos de elementos de una vivienda (gestión energética, seguridad, bienestar, etc.) no ha conseguido extenderse al hogar medio. Esta falta de integración no se debe a la ausencia de tecnología, ni mucho menos, y numerosos son los estudios y proyectos surgidos en esta dirección. Sin embargo, no ha sido hasta hace unos pocos años que las instituciones y grandes compañías han comenzado a prestar verdadero interés en este campo. Parece que estamos a punto de experimentar un nuevo cambio en nuestra forma de vida, concretamente en la manera en la que interactuamos con nuestro hogar y las comodidades e información que este nos puede proporcionar. En esa corriente se desarrolla este Proyecto Fin de Grado, con el objetivo de aportar un nuevo enfoque a la manera de integrar los diferentes dispositivos del hogar digital con la inteligencia artificial y, lo que es más importante, al modo en el que el usuario interactúa con su vivienda. Más concretamente, se pretende desarrollar un sistema capaz de tomar decisiones acordes al contexto y a las preferencias del usuario. 
A través de la utilización de diferentes tecnologías se dotará al hogar digital de cierta autonomía a la hora de decidir qué acciones debe llevar a cabo sobre los dispositivos que contiene, todo ello mediante la interpretación de órdenes procedentes del usuario (expresadas de manera coloquial) y el estudio del contexto que envuelve al instante de ejecución. Para la interacción entre el usuario y el hogar digital se desarrollará una aplicación móvil mediante la cual podrá expresar (de manera conversacional) las órdenes que quiera dar al sistema, el cual intervendrá en la conversación y llevará a cabo las acciones oportunas. Para todo ello, el sistema hará principalmente uso de ontologías, análisis semántico, redes bayesianas, UPnP y Android. Se combinará información procedente del usuario, de los sensores y de fuentes externas para determinar, a través de las citadas tecnologías, cuál es la operación que debe realizarse para satisfacer las necesidades del usuario. En definitiva, el objetivo final de este proyecto es diseñar e implementar un sistema innovador que se salga de la corriente actual de interacción mediante botones, menús y formularios a los que estamos tan acostumbrados, y que permita al usuario, en cierto modo, hablar con su vivienda y expresarle sus necesidades, haciendo a la tecnología un poco más transparente y cercana y aproximándonos un poco más a ese concepto de hogar inteligente que imaginábamos a finales del siglo XX. ABSTRACT. Over the last decades the development of technology in very different areas has happened incredibly fast. Its propagation to all aspects of our daily activities seems to be inevitable and the electronic devices have invaded our homes. Nevertheless, home automation has not reached the integration point that it was supposed to just a few decades ago. 
It is true that some autonomic and relatively intelligent devices are emerging, but the digital home as a system able to control a large set of elements from a house (energy management, security, welfare, etc.) is not present yet in the average home. That lack of integration is not due to the absence of technology and, in fact, there are a lot of investigations and projects focused on this field. However, the institutions and big companies have not shown enough interest in home automation until just a few years ago. It seems that, finally, we are about to experiment another change in our lifestyle and how we interact with our home and the information and facilities it can provide. This Final Degree Project is developed as part of this trend, with the goal of providing a new approach to the way the system could integrate the home devices with the artificial intelligence and, mainly, to the way the user interacts with his house. More specifically, this project aims to develop a system able to make decisions, taking into account the context and the user preferences. Through the use of several technologies and approaches, the system will be able to decide which actions it should perform based on the order interpretation (expressed colloquially) and the context analysis. A mobile application will be developed to enable the user-home interaction. The user will be able to express his orders colloquially though out a conversational mode, and the system will also participate in the conversation, performing the required actions. For providing all this features, the system will mainly use ontologies, semantic analysis, Bayesian networks, UPnP and Android. Information from the user, the sensors and external sources will be combined to determine, through the use of these technologies, which is the operation that the system should perform to meet the needs of the user. 
In short, the final goal of this project is to design and implement an innovative system that moves away from the current trend of buttons, menus and forms. In a way, the user will be able to talk to their home and express their needs, experiencing technology that feels closer to people and getting a little nearer to the concept of the digital home we imagined in the late twentieth century.
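The decision step described above, combining an interpreted colloquial order with sensed context to choose an action, could be sketched as a small discrete Bayesian update. This is purely an illustration of the kind of inference a Bayesian network enables; all variable names, actions and probabilities below are hypothetical, not taken from the project.

```python
# Illustrative sketch: choosing a home-automation action by combining a
# parsed user intent with a context sensor reading via Bayes' rule.
# All priors and likelihoods are made-up numbers for the example.

# Prior belief over candidate actions.
priors = {"turn_on_heating": 0.3, "turn_on_light": 0.4, "do_nothing": 0.3}

# P(evidence | action) for two observations: the interpreted intent
# ("I'm cold") and a context reading (room temperature is low).
likelihoods = {
    "turn_on_heating": {"intent_cold": 0.8, "temp_low": 0.9},
    "turn_on_light":   {"intent_cold": 0.1, "temp_low": 0.5},
    "do_nothing":      {"intent_cold": 0.1, "temp_low": 0.5},
}

def posterior(evidence):
    """Return normalised P(action | evidence), assuming the evidence
    items are conditionally independent given the action."""
    scores = {}
    for action, prior in priors.items():
        p = prior
        for e in evidence:
            p *= likelihoods[action][e]
        scores[action] = p
    total = sum(scores.values())
    return {a: p / total for a, p in scores.items()}

post = posterior(["intent_cold", "temp_low"])
best = max(post, key=post.get)
print(best)  # the action the system would execute
```

With these toy numbers the evidence concentrates the posterior on `turn_on_heating`; in the real system the network would be larger and its parameters learned or hand-specified.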
Abstract:
There is a proliferation of so-called Smart Products, driven by the growing adoption of this type of product both in everyday life and in industry. However, the term Smart Product is used with different meanings in different contexts or application domains, and using the term with a semantics different from the one usual in a given context can lead to serious comprehension problems. The aim of this work is to analyse the different definitions of Smart Products that appear in the literature, in order to study the nuances and scope each one offers, to assess whether a consensus definition satisfying all parties can be obtained, and to specify it. To cover related definitions, we introduce the concept of Smart Thing, which encompasses definitions that may be related to Smart Products, such as Intelligent Products, Smart Objects, Intelligent Systems and Intelligent Objects. To analyse the different definitions in the existing literature, we conducted a Systematic Literature Review. The Autonomic Computing approach has several aspects in common with Smart Products. Therefore, once the definitions in the literature had been analysed, we studied the points they have in common with Autonomic Computing, in order to assess whether Autonomic Computing is a suitable approach on which to rely to specify and design Smart Products.
Abstract:
INTRODUCTION: The risk of cardiovascular disease and the rates of childhood obesity have increased in recent years, worsening the health of the population. Barker's theory links the mother's state of health to fetal development, associating a poor physical condition and unhealthy lifestyle habits in the pregnant woman with an increased risk of heart disease in childhood and adolescence, as well as a predisposition of the newborn to overweight and/or obesity later in life. On the other hand, studies on physical exercise during pregnancy report benefits for maternal and fetal health. One of the most widely used parameters to assess fetal health is the fetal heart rate, which reflects the proper development of the autonomic nervous system. Observing this parameter in the presence of maternal exercise could reveal a chronic response of the fetal heart to maternal exercise, as a consequence of an adaptation and improvement in the functioning of the fetal autonomic nervous system. In this way, intrauterine cardiovascular health could be improved, an improvement that might persist later in life, reducing the risk of cardiovascular disease in adulthood. OBJECTIVES: To determine, by means of a specific protocol, the influence of a supervised physical exercise programme on fetal heart rate (FHR) at rest and after maternal exercise, compared with sedentary pregnant women. To determine the influence of a physical exercise programme on the development of the fetal autonomic nervous system, as reflected by FHR recovery time. MATERIAL AND METHODS: A multicentre randomised clinical trial was designed, in which 81 pregnant women participated (CG=38, EG=43). The study was approved by the ethics committees of the participating hospitals.
All participants were informed and signed a consent form for their participation in the study. EG participants received an intervention based on a physical exercise programme carried out during gestation (weeks 12-36) with a frequency of three sessions per week. All participants underwent an FHR measurement protocol between weeks 34 and 36 of gestation. The protocol consisted of two walking tests at different intensities (40% and 60% of heart rate reserve). From this protocol the main study variables were obtained: FHR at rest, FHR post-exercise at 40% and 60% intensity, and FHR recovery time for both efforts. The equipment used for the protocol was a heart rate monitor to control the mother's heart rate and a wireless fetal monitor (fetal telemetry) to record the fetal heartbeat throughout the protocol. RESULTS: No statistically significant differences were found in resting FHR between groups (EG=140.88 beats/min vs CG=141.95 beats/min; p>.05). Statistically significant differences were found in FHR recovery time between the fetuses of the two groups (EG=135.65 s vs CG=426.11 s for the 40% effort; p<.001) (EG=180.26 s vs CG=565.61 s for the 60% effort; p<.001). Statistically significant differences were found in post-exercise FHR at 40% (EG=139.93 beats/min vs CG=147.87 beats/min; p<.01). No statistically significant differences were found in post-exercise FHR at 60% (EG=143.74 beats/min vs CG=148.08 beats/min; p>.05). CONCLUSION: The physical exercise programme carried out during gestation influenced the fetal heart of the EG fetuses, as shown by FHR recovery time.
The results suggest a better-functioning autonomic nervous system in fetuses of women who were active during pregnancy. ABSTRACT INTRODUCTION: The risk of cardiovascular disease and the rates of childhood obesity have grown in recent years, worsening the health of the population. Barker's theory relates maternal health to fetal development, establishing an association between a poor physical state and an unhealthy lifestyle in the pregnant woman and the risk of heart disease during childhood and adolescence; childhood overweight and/or obesity is likewise related to maternal lifestyle. On the other hand, research on physical exercise and pregnancy shows benefits for maternal and fetal health. One of the most studied parameters to check fetal health is the fetal heart rate, which also indicates the correct development and functioning of the fetal autonomic nervous system. Observing this parameter during maternal exercise, a chronic response of the fetal heart could be found, due to an adaptation and improvement in the functioning of the autonomic nervous system. Fetal cardiovascular health could thus be enhanced during intrauterine life, and this benefit might be maintained later in life, reducing the risk of cardiovascular disease in adulthood. OBJECTIVES: To determine the influence of a supervised physical activity programme on fetal heart rate (FHR) at rest and after maternal exercise, relative to sedentary pregnant women, by means of an FHR assessment protocol. To determine the influence of a physical activity programme on the development of the fetal autonomic nervous system, as reflected by FHR recovery time. MATERIAL AND METHODS: A multicentre randomised clinical trial was designed, in which 81 pregnant women participated (CG=38, EG=43). The study was approved by the ethics committees of all the participating hospitals. All participants signed an informed consent form.
EG participants received an intervention based on a physical activity programme carried out during gestation (weeks 12-36) with a frequency of three sessions per week. All participants were tested between weeks 34 and 36 of gestation by means of a specific FHR assessment protocol. The protocol consisted of two walking tests at two different intensities (40% and 60% of heart rate reserve). From this protocol we obtained the main research variables: FHR at rest, FHR post-exercise at 40% and 60% intensity, and FHR recovery time in both walking tests. The equipment used to perform the protocol consisted of a heart rate monitor to check the maternal heart rate and a wireless fetal monitor (telemetry) to register fetal beats during the whole protocol. RESULTS: There were no statistically significant differences in resting FHR between groups (EG=140.88 beats/min vs CG=141.95 beats/min; p>.05). There were statistically significant differences in FHR recovery time in both walking tests between groups (EG=135.65 s vs CG=426.11 s in the 40% intensity test; p<.001) (EG=180.26 s vs CG=565.61 s in the 60% intensity test; p<.001). Statistically significant differences were found in post-exercise FHR at 40% intensity between groups (EG=139.93 beats/min vs CG=147.87 beats/min; p<.01). No statistically significant differences were found in post-exercise FHR at 60% intensity between groups (EG=143.74 beats/min vs CG=148.08 beats/min; p>.05). CONCLUSIONS: The physical activity programme performed during gestation influenced the fetal heart of EG fetuses, as reflected by FHR recovery time. These results point to a possible enhancement of autonomic nervous system function in fetuses of mothers who were active during gestation.
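The key outcome variable above, FHR recovery time, can be pictured as the time elapsed until the post-exercise fetal heart rate first returns to within a tolerance of its resting baseline. The following sketch is not the study's actual software; the sampling interval, tolerance and data series are hypothetical, chosen only to illustrate the computation.

```python
# Illustrative sketch (not the study's software): FHR recovery time as
# the time until the post-exercise fetal heart rate returns to within a
# tolerance of the resting baseline. The series below is synthetic.

def recovery_time(baseline, samples, interval_s=5.0, tolerance=5.0):
    """Seconds until FHR first falls within `tolerance` beats/min of baseline.

    `samples` are FHR readings taken every `interval_s` seconds after
    exercise ends. Returns None if the series never recovers.
    """
    for i, fhr in enumerate(samples):
        if abs(fhr - baseline) <= tolerance:
            return i * interval_s
    return None

# Synthetic post-exercise series: elevated FHR decaying toward baseline,
# one reading every 5 seconds.
baseline_fhr = 141.0
post_exercise = [158, 154, 151, 149, 147, 145, 143]

print(recovery_time(baseline_fhr, post_exercise))  # 25.0
```

Under this toy definition, a fetus whose heart rate decays faster yields a shorter recovery time, which is the group difference the trial reports.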
Abstract:
In recent years, the presence on the Internet of people from the world of politics has increased due to the proliferation of social networks, with Twitter having the greatest media impact in this field. Studying the behaviour of politicians on Twitter, and their reception among citizens, provides very valuable information when analysing electoral campaigns. In this way, the real impact of their messages on election results can be studied, as well as which behaviours are best received by the public. Thanks to advances in the field of text mining, the tools are now available to analyse large volumes of text and extract useful information from them. The aim of this project is to collect a significant sample of Twitter messages from the candidates of the main political parties running in the 2015 regional elections in Madrid. These messages, together with the replies from other users, have been analysed using machine learning algorithms and the most appropriate text mining techniques. The results obtained for each politician have been examined in depth and presented in tables and charts to facilitate understanding. ---ABSTRACT--- During the past few years, the presence on the Internet of people involved in politics has increased due to the proliferation of social networks. Among all existing social networks, Twitter is the one with the greatest media impact in this field. Therefore, analysing the behaviour of politicians on this social network, together with the response from citizens, provides very valuable information when analysing electoral campaigns. In this way, the impact of their messages on election results can be determined.
Moreover, it can be inferred which behaviours are better accepted by the citizenship. Thanks to advances in the text mining field, its tools can be used to analyse a great amount of text and extract useful information from it. The present project aims to collect a significant sample of Twitter messages from the candidates of the main political parties in the 2015 regional elections in Madrid. These messages, as well as the replies from other users, have been analysed using machine learning algorithms and the most suitable text mining techniques. The results obtained for each politician have been examined in depth and presented using tables and graphs to make them easier to understand.
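The abstract does not specify which machine learning algorithms were applied to the tweets, but a common text-mining baseline for this kind of task is a bag-of-words Naive Bayes classifier. The sketch below shows that technique on toy data; the training sentences, labels and vocabulary are invented for the example and are not the project's corpus.

```python
# Illustrative sketch of a text-mining step of the kind applied to tweets:
# a minimal bag-of-words Naive Bayes classifier with Laplace smoothing.
# Training data and labels are toy examples, not the project's corpus.
from collections import Counter, defaultdict
import math

train = [
    ("great proposal for education", "positive"),
    ("we will improve public health", "positive"),
    ("terrible management and corruption", "negative"),
    ("cuts and unemployment are a disaster", "negative"),
]

# Count word frequencies per class and document frequencies per class.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def classify(text):
    """Return the most probable class: argmax of
    log P(class) + sum over words of log P(word | class)."""
    scores = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in text.split():
            # Laplace (add-one) smoothing over the full vocabulary.
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("improve education"))  # → positive
```

A real pipeline would add tokenisation suited to tweets (hashtags, mentions), stop-word removal and a held-out evaluation, but the probabilistic core stays the same.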
Abstract:
Emotion is generally argued to influence the behaviour of living systems, largely with regard to flexibility and adaptivity. The way living systems act in response to particular situations in the environment has revealed the decisive importance of this feature in the success of behaviours, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have evolved to the point that more of them are integrated into our daily life every day. They have grown in complexity, and the consequence is an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others. All of these are questions that raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project, a.k.a. the ASys-Project. Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis sits within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning constitutes the general objective. Both the starting question and the general objective have underpinned the study of emotion: its influence on system behaviour, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be accounted for in artificial systems.
By assuming essential differences in structure, purpose and operation between living and artificial systems, the essential motivation has been to explore what emotion solves in nature in order to then analyse analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. The aim is a reference model under which to design solutions for emotional operation that answer the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules addressing: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and of technical systems, in order to obtain the support required by the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, which is responsible for performing the emotional operation inside AGSys. The solution consists of multiple computational modules working in a federated way, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as an ordinary system within the external environment, ESys is designed to operate within a conceptualized inner environment.
This inner environment is built from the relevances that might arise inside AGSys in its interaction with the external environment. This allows clean, separate reasoning about the mission goals defined in AGSys and the emotional goals defined in ESys, and thereby provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described in terms of a recursive process labelled R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge about the relation between a perceived object and the effect this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning. The extent to which this reasoning might be taken further is only delineated here and left as future work. This thesis draws on a wide range of fields of knowledge, which can be structured around two main objectives: (a) psychology, cognitive science, neurology and the biological sciences, to obtain an understanding of the problem of emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC) and the models@runtime paradigm, among others, to obtain knowledge about tools for designing each part of the solution.
The final approach has been built mainly on the basis of all the acquired knowledge, and is described in terms of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalizations to provide precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning about (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning yields a framework of distinguishable states of AGSys derived from its own circumstances, which can be regarded as artificial emotion.
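The mutual feedback loop between AGSys and ESys described above might be caricatured in code. Only the two agent names come from the thesis; the signals, thresholds and update rule below are a hypothetical simplification, meant solely to show the shape of the loop: external perception perturbs an inner state, ESys turns that state into an emotion-like relevance, and the relevance feeds back into AGSys's mission reasoning.

```python
# Hypothetical, highly simplified sketch of the AGSys/ESys mutual
# feedback loop. Thresholds and update rules are invented for the example.

class ESys:
    """Inner agent: maps AGSys's inner state to an emotion-like relevance."""
    def assess(self, inner_state):
        # Relevance grows as the inner state departs from its nominal value (0.5).
        return min(1.0, abs(inner_state - 0.5) * 2)

class AGSys:
    """Outer agent: pursues mission goals, modulated by ESys's assessment."""
    def __init__(self):
        self.esys = ESys()
        self.inner_state = 0.5      # nominal inner (conceptualized) environment
        self.mission_priority = 1.0

    def step(self, perturbation):
        # External perception perturbs the inner environment.
        self.inner_state += perturbation
        relevance = self.esys.assess(self.inner_state)
        # Feedback: high emotional relevance shifts effort away from the
        # mission goal toward restoring the inner state (goal congruence).
        if relevance > 0.5:
            self.mission_priority *= 0.5
            self.inner_state = 0.5  # corrective action restores nominal state
        return relevance

agent = AGSys()
calm = agent.step(0.1)   # small perturbation: low relevance, mission unaffected
alarm = agent.step(0.4)  # large perturbation: high relevance, priority drops
print(calm, alarm, agent.mission_priority)
```

The point of the sketch is the separation the thesis argues for: ESys reasons only over the inner environment, while AGSys consumes the resulting relevance as non-explicit knowledge when weighing its mission goals.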