680 results for Workload.
Abstract:
Pervasive computing is extending its application from field-specific environments towards everyday use; the Internet of Things (IoT) is the most prominent example of its application and of the intrinsic complexity it carries compared with classical application development. The main characteristic that differentiates pervasive computing from other kinds lies in how contextual information is used. Classical applications either do not use contextual information at all or use only a small part of it, integrating it in an ad hoc fashion with an application-specific implementation. The reason for this one-off treatment is the difficulty of sharing context with other applications. In fact, what counts as contextual information depends on the type of application: for example, for an image editor the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for the file system the image together with the camera settings is the information, and the context is represented by the metadata external to the file, such as the modification date or the last-access timestamp. This means that contextual information is hard to share, and the presence of a communication middleware that supports context explicitly simplifies the development of pervasive computing applications. At the same time, the use of context must not be mandatory, since otherwise compatibility with applications that do not use it would be lost, turning that middleware into a mere context middleware. SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two elements: the Context and the Context Function. The context represents the contextual information proper of the message to be sent, or that required by the subscriber in order to receive notifications, while the context function is evaluated using the publisher's and the subscriber's contexts. This decouples the context-management logic from the context-function logic, thereby increasing the flexibility of communication across applications. Indeed, by defaulting to an empty context, classical and context-aware applications can use the same SilboPS, resolving the incompatibility between the two categories. In any case the possible semantic mismatch remains, since it depends on how each application interprets the data and cannot be resolved by an agnostic third party. The IoT environment poses challenges not only of context but also of scalability. The number of sensors, the volume of data they produce and the number of applications that may be interested in processing those data keep growing. Today's answer to that need is cloud computing, but it requires applications to be able not merely to scale, but to do so elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not directly offer elastic monitoring.
This implies a two-sided problem: how an application can scale elastically, and how to monitor that application in order to know when to scale it horizontally. E-SilboPS is the elastic version of SilboPS and fits naturally as a solution to the monitoring problem, thanks to the content-based publish/subscribe paradigm, and, unlike other solutions [5], it scales efficiently to meet the workload without overprovisioning or underprovisioning resources. It is also based on a newly designed algorithm that shows how to add elasticity to an application under different constraints on its state: stateless, isolated state with external coordination, and shared state with general coordination. Its evaluation shows that remarkable speedups can be achieved, the network level being the main limiting factor: indeed, the calculated efficiency (see Figure 5.8) shows how each configuration behaves compared with the adjacent ones. This reveals the current trend of the whole system, in order to know whether the next configuration will offset its cost with the gain it brings in notification throughput. Special attention has to be paid to the evaluation of same-cost deployments, to see which is the best solution for a given workload. As a final analysis, the overhead introduced by the different configurations was estimated in order to identify the main factor limiting throughput. This helps to determine the sequential part and the base overhead [26] of an optimal deployment compared with a suboptimal one. Indeed, depending on the type of workload, the estimate can be as low as 10% for a local optimum or as high as 60%, which happens when a configuration oversized for the workload is deployed. This estimate of the Karp-Flatt metric is important for the management system because it tells it in which direction (scale out or in) the deployment needs to be changed to improve its performance, instead of simply applying a scale-out policy.
ABSTRACT
The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever. Others, on the other hand, use only part of the contextual information, which is integrated in an ad hoc fashion using an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what the contextual information is. For instance, for an image editor, the image is the information and its metadata, like the time of the shot or camera settings, are the context, whereas, for a file-system application, the image, including its camera settings, is the information and the metadata external to the file, like the modification date or the last-access timestamp, constitute the context. This means that contextual information is hard to share. A communication middleware that supports context decidedly eases application development in pervasive computing.
However, the use of context should not be mandatory; otherwise, the communication middleware would be reduced to a context middleware and would no longer be compatible with non-context-aware applications. SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. Context represents the actual contextual information specific to the message to be sent or that needs to be notified to the subscriber, whereas the context function is evaluated using the publisher's context and the subscriber's context to decide whether the current message and context are useful for the subscriber. In this manner, context-management logic is decoupled from context-function logic, increasing the flexibility of communication and usage across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. In any case, the possible semantic mismatch is still present because it depends on how each application interprets the data, and it cannot be resolved by an agnostic third party. The IoT environment introduces not only context but scaling challenges too. The number of sensors, the volume of the data that they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today's response to the above need is cloud computing. However, cloud computing applications need to be able to scale elastically [22]. Unfortunately, there are no distributed-system slicing primitives that support internal state partitioning [33] together with hot swapping, and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. It is the solution for the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application with different state constraints: stateless, isolated stateful with external coordination and shared stateful with general coordination. Its evaluation shows that it is able to achieve remarkable speedups, with the network layer being the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the actual trend of the whole system in order to predict whether the next configuration would offset its cost against the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments in order to find out which one is the best for the given workload demand. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment.
Depending on the type of workload, this can be as low as 10% in a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload demand. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed in order to improve its performance instead of simply using a scale-out policy.
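Two of the ideas above lend themselves to compact illustrations. First, a minimal sketch of how a context function can gate content-based matching; this is an illustration under assumed names and signatures, not SilboPS's actual API:

```python
from typing import Any, Callable, Dict

Context = Dict[str, Any]  # contextual attributes of a message or subscription
ContextFunction = Callable[[Context, Context], bool]

def within_radius(pub_ctx: Context, sub_ctx: Context) -> bool:
    """Hypothetical context function: accept only publishers located inside
    the radius declared by the subscriber. Empty contexts always pass, which
    is what keeps context-unaware clients compatible."""
    if not pub_ctx or not sub_ctx:
        return True
    px, py = pub_ctx["pos"]
    sx, sy = sub_ctx["pos"]
    return (px - sx) ** 2 + (py - sy) ** 2 <= sub_ctx["radius"] ** 2

def matches(notification: Dict[str, Any],
            content_filter: Callable[[Dict[str, Any]], bool],
            pub_ctx: Context,
            sub_ctx: Context,
            fn: ContextFunction = lambda p, s: True) -> bool:
    # Classic content-based matching first, then the context function.
    # The default function ignores context, so classical subscriptions
    # behave exactly as in plain content-based publish/subscribe.
    return content_filter(notification) and fn(pub_ctx, sub_ctx)
```

Second, the Karp-Flatt metric mentioned above is the experimentally determined serial fraction: for a measured speedup ψ on p workers, e = (1/ψ - 1/p) / (1 - 1/p). An e that grows with p signals that the deployment is oversized for the workload and should scale in rather than out, which is exactly the management cue described in the abstract.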
Abstract:
By collective intelligence we understand a form of intelligence that emerges from the collaboration and participation of several individuals or, more strictly, several entities. From this simple definition we can see that the concept is a field of study for the most diverse disciplines, such as sociology, information technology or biology, each of them dealing with a different kind of entity: human beings, computing elements or animals. As a common element, collective intelligence has aimed at fostering a group intelligence that surpasses the individual intelligence of the entities that form the group, through mechanisms of coordination, cooperation, competition, integration, differentiation, and so on. However, although collective intelligence has historically developed in a parallel and independent way in the different disciplines that deal with it, advances in information technology mean that this is no longer enough. Nowadays human beings and machines, through all kinds of communication networks and interfaces, coexist in an environment in which collective intelligence has taken on a new dimension: it no longer merely seeks behavior superior to that of its constituent entities; now, in addition, these individual intelligences are completely different from one another, and so a double challenge appears: managing this great heterogeneity while obtaining even more intelligent behaviors thanks to the synergies that the different kinds of intelligence can generate. Within the working areas of collective intelligence there are several open fields in which the goal is always performance superior to that of the individuals, for example collective consciousness, collective memory or collective wisdom. Among all these fields we will focus on one that is present in practically every possible intelligent behavior: decision making. The field of decision making is really broad, and its evolution has run completely parallel to that of collective intelligence mentioned above. It first focused on the individual as the decision-making entity, and later developed from a social, institutional, etc. point of view. The first phase in the study of decision making was based on very simple paradigms: analysis of pros and cons, prioritization based on maximizing some parameter of the outcome, the ability of the alternatives to minimally satisfy the requirements, consultation of experts or authorized entities, or even chance. However, just as the step from studying the individual to studying the group adds a new dimension to collective intelligence, collective decision making poses a new challenge for all the related disciplines. Moreover, within collective decision making two new fronts appear: centralized and decentralized decision systems. In this thesis project we will focus on the latter, which is the more attractive one both for the possibilities of generating new knowledge and working on currently open problems, and for the applicability of the results that may be obtained.
Finally, within the field of decentralized decision systems there are several fundamental mechanisms that give rise to different approaches to the problems of this field, for example leadership, imitation, prescription or fear. We will focus on one of the most multidisciplinary mechanisms, with the greatest applicability across all kinds of disciplines, which has historically shown that it can yield performance far superior to other kinds of decentralized decision mechanisms: trust and reputation. Briefly, trust is the belief held by one entity that another will carry out a certain activity in a particular way. It is in principle subjective, since the trust of two different entities in a third need not be the same. Reputation, on the other hand, is the collective idea (or social evaluation) that the different entities of a system have about another entity of that system with respect to a certain criterion. It is therefore collective information, but unique within a system: it is not associated with each individual entity but belongs equally to all of them. The vast majority of collective systems are based on these two simple definitions. In fact, many dissertations state that no kind of organization could be viable were it not for the existence and use of the concepts of trust and reputation. From now on we will call any system that uses these concepts in one way or another a trust and reputation system (TRS). However, although TRSs are among the most everyday aspects of our lives and have one of the widest fields of application, current knowledge about them could not be more scattered. There are a great number of scientific works in all kinds of knowledge areas: philosophy, psychology, sociology, economics, politics, information technology, etc. But the main problem is that there is no complete vision of trust and reputation in their broadest sense. Each discipline focuses its studies on certain aspects of TRSs, but none of them tries to exploit the knowledge generated in the rest to improve its performance in its own field of application. Aspects treated in great detail in some knowledge areas are completely ignored by others; and even aspects addressed by several disciplines, studied from different points of view, yield complementary results that are nevertheless not exploited outside those knowledge areas. This leads to a very high dispersion of knowledge and to a lack of reuse of methodologies, policies and techniques from one discipline to another. Given its vital importance, this high dispersion of knowledge is one of the main problems this thesis aims to solve. On the other hand, when working with TRSs all security-related aspects are very much present, since security is a vital issue in the field of decision making. It is also common for TRSs to be used to carry out responsibilities that provide some kind of security-related functionality.
Finally, we must not forget that the act of trusting is inevitably tied to delegating a certain responsibility, and that when dealing with these concepts the idea of risk always appears: the risk that the expectations created by the act of delegation will not be met, or will be met in a different way. We can therefore see that any system that uses trust to improve or enable its operation is, by its very nature, especially vulnerable if the premises on which it rests are attacked. In this respect we can see (as we will analyze in more detail throughout this document) that the approaches taken by the different disciplines to the violation of trust systems are extremely varied. Only within the information technology area has there been any attempt to use approaches from other disciplines to tackle problems related to TRS security. However, these attempts are incomplete and usually made to meet the requirements of specific applications, not with the idea of consolidating a more general knowledge base reusable in other environments. With all this in mind, the contributions of this thesis can be summarized as follows.
• A complete analysis of the state of the art in the world of trust and reputation, allowing us to compare the advantages and disadvantages of the different approaches to these concepts in different knowledge areas.
• The definition of a reference architecture for TRSs that covers all the entities and processes involved in this kind of system.
• The definition of a reference framework for analyzing the security of TRSs. This involves both identifying the main security assets of a TRS and creating a typology of possible attacks and countermeasures based on those assets.
• The proposal of a methodology for the analysis, design, securing and deployment of a TRS in real environments. In addition, the main types of applications that can be built on TRSs are presented, together with the means to maximize their performance in each of them.
• The creation of software that can simulate any kind of TRS on the basis of the previously proposed architecture. This makes it possible to evaluate the performance of a TRS under a given configuration in a controlled environment prior to its deployment in a real one, and is equally useful for evaluating its resistance to different kinds of attacks or system malfunctions.
Besides the contributions made directly to the field of TRSs, we have made original contributions to several knowledge areas through the application of the analysis and design methodologies mentioned above.
• Detection of thermal anomalies in Data Centers. We successfully implemented a thermal-anomaly detection system based on a TRS. We compared the detection performance of Self-Organizing Maps (SOM) and Growing Neural Gas (GNG) algorithms, showing that SOM offers better results for anomalies in the room's cooling systems, while GNG, given its detection and isolation rates, is the more suitable option for anomalies caused by an excessive workload.
• Improving the harvesting performance of a system based on swarm computing and social odometry.
Thanks to the implementation of a TRS we were able to improve the coordination capabilities of a network of distributed autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved through the appropriate use of the information existing in the system that is relevant from a TRS point of view, and through the implementation of trust-computation algorithms based on that information.
• Improving the security of Wireless Mesh Networks against attacks on the integrity, confidentiality or availability of the data and/or communications supported by those networks.
• Improving the security of Wireless Sensor Networks against advanced attacks, such as insider attacks, unknown attacks, etc. Thanks to the methodologies presented, we implemented countermeasures against this kind of attack in complex environments. On the basis of the experiments carried out, we have shown that our approach is able to detect and confine several types of attack affecting the essential protocols of the network. The proposal offers very high detection speeds and shows that the inclusion of these early-reaction mechanisms significantly increases the effort an attacker has to invest in order to compromise the network.
Finally, we can conclude that this thesis generates knowledge that is useful and applicable in real environments, allowing us to maximize the performance obtained from the use of TRSs in any kind of application field. In this way we address the main gap currently existing in this field: the lack of a common, aggregated knowledge base and the absence of a methodology for the development of TRSs that would allow us to analyze, design, secure and deploy them systematically, rather than in the artisanal, ad hoc way it is done today.
ABSTRACT
By collective intelligence we understand a form of intelligence that emerges from the collaboration and competition of many individuals or, strictly speaking, many entities. Based on this simple definition, we can see how this concept is the field of study of a wide range of disciplines, such as sociology, information science or biology, each of them focused on different kinds of entities: human beings, computational resources, or animals. As a common factor, we can point out that collective intelligence has always had the goal of promoting a group intelligence that overcomes the individual intelligence of the basic entities that constitute it. This can be accomplished through different mechanisms such as coordination, cooperation, competition, integration, differentiation, etc. Collective intelligence has historically been developed in a parallel and independent way among the different disciplines that deal with it. However, due to the advances in information technologies, this is not enough anymore. Nowadays, human beings and machines coexist, through all kinds of communication networks and interfaces, in environments where collective intelligence has taken on a new dimension: the goal is still to achieve collective behavior that outperforms that of the individuals, but now we also have to deal with completely different kinds of individual intelligences. Therefore, we have a double goal: being able to deal with this heterogeneity and being able to obtain even more intelligent behaviors thanks to the synergies that the different kinds of intelligence can generate.
Within the areas of collective intelligence there are several open topics where the aim is always to obtain better performance from groups than from the individuals. For example: collective consciousness, collective memory, or collective wisdom. Among all these topics we will focus on collective decision making, which plays a part in most collective intelligent behaviors. The field of study of decision making is really wide, and its evolution has been completely parallel to that of the aforementioned collective intelligence. Firstly, it was focused on the individual as the main decision-making entity, but later it became involved in studying social and institutional groups as basic decision-making entities. The first studies within the decision-making discipline were based on simple paradigms, such as pros and cons analysis, criteria prioritization, fulfillment, following orders, or even chance. However, in the same way that studying the community instead of the individual meant a paradigm shift within collective intelligence, collective decision making means a new challenge for all the related disciplines. Besides, two new main topics come up when dealing with collective decision making: centralized and decentralized decision-making systems. In this thesis project we focus on the second one, because it is the most interesting in terms of the opportunities to generate new knowledge and deal with open issues in this area, and because its results can be put into practice in a wider set of real-life environments. Finally, within the decentralized collective decision-making systems discipline, there are several basic mechanisms that lead to different approaches to the specific problems of this field, for example: leadership, imitation, prescription, or fear. We will focus on trust and reputation. They are among the most multidisciplinary concepts, with great potential for application in every kind of environment. Besides, they have historically shown that they can yield better performance than other decentralized decision-making mechanisms. In short, trust is the belief of one entity that the outcome of another entity's actions is going to be of a specific kind. It is a subjective concept because the trust of two different entities in another one does not have to be the same. Reputation is the collective idea (or social evaluation) that a group of entities within a system has about another entity based on a specific criterion. Thus, it is a collective concept in its origin. It is important to say that the behavior of most collective systems is based on these two simple definitions. In fact, a lot of articles and essays describe how no organization would be viable if the ideas of trust and reputation did not exist. From now on, we refer to any kind of system that uses these concepts as a Trust and Reputation System (TRS). Even though TRSs are one of the most common everyday aspects of our lives, the existing knowledge about them could not be more dispersed. There are thousands of scientific works in every field of study related to trust and reputation: philosophy, psychology, sociology, economics, politics, information sciences, etc. But the main issue is that a comprehensive vision of trust and reputation across all these disciplines does not exist. Every discipline focuses its studies on a specific set of topics but none of them tries to take advantage of the knowledge generated in the other disciplines to improve its behavior or performance.
Topics treated in detail in some fields are completely overlooked in others, and even though the study of some topics within several disciplines produces complementary results, these results are not used outside the discipline where they were generated. This leads us to a very high knowledge dispersion and to a lack of reuse of methodologies, policies and techniques among disciplines. Due to its great importance, this high dispersion of trust and reputation knowledge is one of the main problems this thesis contributes to solving. When we work with TRSs, all the aspects related to security are a constant concern, since security is a vital aspect of decision-making systems. Besides, TRSs are often used to perform responsibilities related to security. Finally, we cannot forget that the act of trusting is invariably attached to the act of delegating a specific responsibility and, when we deal with these concepts, the idea of risk is always present. This refers to the risk of the generated expectations not being fulfilled, or being fulfilled in a different way than we anticipated. Thus, we can see that any system using trust to improve or enable its behavior is, because of its own nature, especially vulnerable if the premises it is based on are attacked. Related to this topic, we can see that the approaches of the different disciplines that study attacks on trust and reputation are very diverse. Some attempts to use approaches from other disciplines have been made within the information science area, but these approaches are usually incomplete, not systematic, and oriented towards achieving the specific requirements of specific applications. They never try to consolidate a common base of knowledge that could be reusable in other contexts. Based on all these ideas, this work makes the following direct contributions to the field of TRS:
• The compilation of the most relevant existing knowledge related to trust and reputation management systems, focusing on their advantages and disadvantages.
• We define a generic architecture for TRS, identifying the main entities and processes involved.
• We define a generic security framework for TRS. We identify the main security assets and propose a complete taxonomy of attacks for TRS.
• We propose and validate a methodology to analyze, design, secure and deploy TRS in real-life environments. Additionally, we identify the principal kinds of applications that can be implemented with a TRS and how a TRS can provide a specific functionality.
• We develop a software component to validate and optimize the behavior of a TRS in order to achieve a specific functionality or performance.
In addition to the contributions made directly to the field of the TRS, we have made original contributions to different areas of knowledge thanks to the application of the analysis, design and security methodologies previously presented:
• Detection of thermal anomalies in Data Centers. Thanks to the application of the TRS analysis and design methodologies, we successfully implemented a thermal anomaly detection system based on a TRS. We compare the detection performance of Self-Organizing Maps (SOM) and Growing Neural Gas (GNG) algorithms. We show how SOM provides better results for Computer Room Air Conditioning anomaly detection, yielding detection rates of 100% in training data with malfunctioning sensors. We also show that GNG yields better detection and isolation rates for workload anomaly detection, reducing the false positive rate when compared to SOM.
• Improving the performance of a harvesting system based on swarm computing and social odometry. Through the implementation of a TRS, we managed to improve the coordination of a distributed network of autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved with the proper use of the information existing in the system that is relevant to the TRS, and with the implementation of appropriate trust algorithms based on such information.
• Improving the security of Wireless Mesh Networks against attacks on the integrity, confidentiality or availability of data and communications supported by these networks. Thanks to the implementation of a TRS we improved the detection time against these kinds of attacks and we limited their potential impact on the system.
• We improved the security of Wireless Sensor Networks against advanced attacks, such as insider attacks, unknown attacks, etc. Thanks to the TRS analysis and design methodologies previously described, we implemented countermeasures against such attacks in a complex environment. In our experiments we have demonstrated that our system is capable of detecting and confining various attacks that affect the core network protocols. We have also demonstrated that our approach is capable of rapid attack detection. Also, it has been proven that the inclusion of the proposed detection mechanisms significantly increases the effort the attacker has to invest in order to compromise the network.
Finally, we can conclude that, to all intents and purposes, this thesis offers useful knowledge applicable in real-life environments, which allows us to maximize the performance of any system based on a TRS. Thus, we deal with the main deficiency of this discipline: the lack of a common and complete base of knowledge and the lack of a methodology for the development of TRSs that would allow us to analyze, design, secure and deploy TRSs in a systematic way.
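As a minimal illustration of the trust and reputation definitions used above (a toy sketch, not the thesis's reference architecture; all names are hypothetical): trust is kept per rater and is therefore subjective, while reputation is a single system-wide aggregate per entity and criterion.

```python
from collections import defaultdict
from statistics import mean

class SimpleTRS:
    """Toy Trust and Reputation System: trust is one entity's subjective
    belief about another; reputation is the collective evaluation that the
    whole system holds about an entity for one criterion."""

    def __init__(self, neutral: float = 0.5):
        self._scores = defaultdict(list)  # (rater, ratee) -> scores in [0, 1]
        self._neutral = neutral           # prior used when there is no history

    def rate(self, rater: str, ratee: str, score: float) -> None:
        self._scores[(rater, ratee)].append(score)

    def trust(self, rater: str, ratee: str) -> float:
        # Subjective: two raters may hold different trust in the same ratee.
        scores = self._scores.get((rater, ratee))
        return mean(scores) if scores else self._neutral

    def reputation(self, ratee: str) -> float:
        # Collective: one value shared equally by all entities in the system.
        scores = [s for (_, r), vals in self._scores.items()
                  if r == ratee for s in vals]
        return mean(scores) if scores else self._neutral
```

The attack taxonomy and the richer entities of the proposed architecture would sit on top of exactly these two primitives.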
Abstract:
Multiple-robot, single-operator scenarios pose a challenge in terms of human factors. Two relevant issues are maintaining the situational awareness and managing the workload of operators. In order to address these problems, this work analyses the management of information and commands in multi-robot missions. Regarding the information, this paper proposes a selection based on mission and operator states. Regarding the commands, this work reflects on the levels of automation and the methods of commanding.
Abstract:
The interest in missions with multiple Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. These missions take advantage of the use of fleets instead of single UAVs to ensure success, reduce the duration or extend the goals of the mission. In addition, they allow performing tasks that require multiple agents and a degree of coordination (e.g. surveillance of large areas or transport of heavy loads). Nevertheless, these missions pose a challenge in terms of control and monitoring. In fact, the workload of the operators rises with the use of multiple UAVs and payloads, since they have to analyze more information, make more decisions and generate more commands during the mission. This work addresses the operator workload problem in multi-UAV missions by reducing and selecting the information. Two approaches are considered: a first one that selects the information according to the mission state, and a second one that selects it according to the operator's preferences. The result is an interface that is able to control the amount of information and show what is relevant to the mission and the operator at any given time.
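A sketch of how such an information selection could look, combining the two approaches (mission-state relevance and operator preferences); the data model and ranking are assumptions, since the abstract does not specify the implementation:

```python
from dataclasses import dataclass, field

@dataclass
class InfoItem:
    name: str
    relevant_states: set = field(default_factory=set)  # mission states where the item matters
    priority: int = 1                                  # 1 = highest static priority

def select_information(items, mission_state, operator_prefs, limit=5):
    """Keep only items relevant to the current mission state, then rank by
    the operator's declared preference and the static priority, and bound
    the amount shown so the display does not overload the operator."""
    candidates = [i for i in items if mission_state in i.relevant_states]
    candidates.sort(key=lambda i: (operator_prefs.get(i.name, 0), -i.priority),
                    reverse=True)
    return candidates[:limit]

# Example: during "search", camera feeds outrank battery telemetry.
items = [InfoItem("camera", {"search"}, 1), InfoItem("battery", {"search", "cruise"}, 3)]
print([i.name for i in select_information(items, "search", {"camera": 2})])
```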
Abstract:
A “Digital Divide” in information and technological literacy exists in Utah between small hospitals and clinics in rural areas and the larger health care institutions in the major urban area of the state. The goals of the outreach program of the Spencer S. Eccles Health Sciences Library at the University of Utah address solutions to this disparity in partnership with the National Network of Libraries of Medicine—Midcontinental Region, the Utah Department of Health, and the Utah Area Health Education Centers. In a circuit-rider approach, an outreach librarian offers classes and demonstrations throughout the state that teach information-access skills to health professionals. Provision of traditional library services to unaffiliated health professionals is integrated into the library's daily workload as a component of the outreach program. The paper describes the history, methodology, administration, funding, impact, and results of the program.
Abstract:
Cardiac myocytes express both constitutive and cytokine-inducible nitric oxide synthases (NOS). NO and its congeners have been implicated in the regulation of cardiac contractile function. To determine whether NO could affect myocardial energetics, 31P NMR spectroscopy was used to evaluate high-energy phosphate metabolism in isolated rat hearts perfused with the NO donor S-nitrosoacetylcysteine (SNAC). All hearts were exposed to an initial high-Ca2+ (3.5 mM) challenge followed by a recovery period, and then, either in the presence or absence of SNAC, to a second high-Ca2+ challenge. This protocol allowed us to monitor simultaneously the effect of SNAC infusion on both contractile reserve (i.e., baseline versus high-workload contractile function) and high-energy phosphate metabolism. The initial high-Ca2+ challenge caused the rate-pressure product to increase by 74 +/- 5% in all hearts. As expected, ATP was maintained as phosphocreatine (PCr) content briefly dropped and then returned to baseline during the subsequent recovery period. Control hearts responded similarly to the second high-Ca2+ challenge, but SNAC-treated hearts did not demonstrate the expected increase in rate-pressure product. In these hearts, ATP declined significantly during the second high-Ca2+ challenge, whereas PCr did not differ from controls, suggesting that phosphoryl transfer by creatine kinase (CK) was inhibited. CK activity, measured biochemically, was decreased by 61 +/- 13% in SNAC-treated hearts compared to controls. Purified CK in solution was also inhibited by SNAC, and reversal could be accomplished with DTT, a sulfhydryl reducing agent. Thus, NO can regulate contractile reserve, possibly by reversible nitrosothiol modification of CK.
Abstract:
Introduction: The training of health professionals is fundamental to transforming care practices and consolidating the principles and guidelines of the Brazilian Unified Health System (SUS). Besides being a challenge for the SUS, this issue is also present in the field of mental health and is necessary for consolidating the Psychiatric Reform and for building and strengthening the Psychosocial Care Network. Purpose: To investigate and reflect on the experiences of students who undertook internships at the Centro de Atenção Psicossocial (CAPS) III Itaim Bibi between 2009 and 2014, with regard to professional training in mental health from the perspective of the Psychiatric Reform. Materials and Methods: A qualitative study, with data built from the reading of student reports and from questionnaires about the internship experience, presented to the participants following the Delphi method. The questions addressed: motives; expectations; form and quality of participation in the activities; topics and studies; teamwork; situations experienced; influence on professional practice; presentation of the internship; and suggested changes. The information was processed through thematic content analysis. Results: Of the 52 invited, 28 took part in the first round (53.85%): 14 occupational therapists, 9 nurses, 3 psychologists and 2 social work students. The second questionnaire consisted of statements taken from the answers received, which the participants rated by degree of agreement on a Likert scale. In this phase 26 responses were received. Conclusions: Despite the difficulties experienced, most of the internship experiences were assessed as positive and enabled significant learning about the psychosocial care model, the functioning and dynamics of the institution, interdisciplinary teamwork and the practices of conviviality, especially for those whose internships involved more hours. Experiences of individual and group follow-up of service users, the construction of singular therapeutic projects and of networks, and territorial and intersectoral work were identified as important learning. Participation in meetings, in clinical-institutional and multiprofessional supervision, and in reflection workshops with university lecturers was considered important for training. Learning about the handling of crisis and conflict situations and about restraint techniques was considered superficial. The management model and the teamwork were found to influence the development of the interns' autonomy and protagonism. Strengthening teaching-service-community integration is necessary, and more flexible institutional arrangements could facilitate the joint construction of internship plans. As products of this research, proposals were drawn up for changes to better organize internships at the CAPS and for teaching-service integration, as well as a supervised internship plan in occupational therapy for extracurricular internships. An integrative review of Brazilian scientific publications on the training of undergraduate students in mental health from the perspective of the Psychiatric Reform was also carried out. Finally, it was understood that the experiences resonate positively in the professional practice of the graduates. Participants who do not work in this field said they carry with them the experience of teamwork and of ethical and humanized forms of care.
Abstract:
One of the main features of virtualization technology is live migration, which allows virtual machines to be moved between physical machines without interrupting execution. This feature enables the implementation of more sophisticated policies within a cloud computing environment, such as the optimization of power and computational resource usage. However, live migration can impose severe performance degradation on the virtual machines' applications and cause several impacts on the service providers' infrastructure, such as network congestion and interference with virtual machines co-located on the physical machines. Unlike many studies, this study considers the virtual machine's workload an important factor and argues that choosing the right moment to migrate the virtual machine can reduce the penalties imposed by live migration. This work introduces Application-aware Live Migration (ALMA), which intercepts live migration requests and, based on the application's workload, defers the migration to a more favorable moment. The experiments conducted in this work showed that the architecture reduced migration time by up to 74% in the benchmark experiments and by up to 67% in the experiments with real workloads. The data transfer caused by live migration was reduced by up to 62%. In addition, this work introduces a model that predicts the cost of live migration for a given workload, as well as a migration algorithm that is not sensitive to the virtual machine's memory usage.
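A sketch of the deferral idea (ALMA's actual implementation is not shown in the abstract; the libvirt calls below are one plausible substrate, and the workload test is a caller-supplied predicate):

```python
import time
import libvirt  # Python bindings for the libvirt virtualization API

def deferred_migrate(dom, dest_uri, is_busy, poll_s=30, max_wait_s=3600):
    """Intercept a live-migration request and postpone it until the VM's
    workload is favorable (e.g. a phase with little memory dirtying),
    instead of migrating immediately; stop deferring after max_wait_s."""
    waited = 0
    while is_busy(dom) and waited < max_wait_s:
        time.sleep(poll_s)   # wait for a cheaper migration window
        waited += poll_s
    dest = libvirt.open(dest_uri)                      # destination host
    dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```

The intuition behind deferral is that pre-copy live migration retransmits every page the application dirties while the copy is in flight, so migrating during a quiet phase moves less data and finishes sooner.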
Abstract:
Objective: To determine whether the 2006 ENS (Spanish National Health Survey) and EPA (Labour Force Survey) produce the same information on housework and double workload in the population aged 25 to 64, in both sexes. Methods: Comparison between the ENS and the EPA regarding how information on double workload is collected. Source: ENS questions: economic activity (C.1.2: categories 1, 2, 6), dedication to housework (A.11: categories 1, 2, 3). EPA: economic activity (H.1: categories 1, 5). Description by sex in Spain and in the Autonomous Communities (CC.AA). Results: According to the EPA, 43.4% of women have a double workload, but only 0.7% according to the ENS; among men, 31.5% (EPA) and 0.02% (ENS). Alternatively, cross-tabulating those who report working (C.1.2: categories 1, 2) with those who do housework (A.11: categories 1, 2, 3), the double-workload figures of the two surveys converge (men: ENS 31.7%, EPA 31.5%; women: ENS 46.3%, EPA 43.4%). Both surveys rank the Autonomous Communities similarly by double workload (ρ women: 0.770, p = 0.001; ρ men: 0.647, p = 0.003). Conclusion: The economic-activity question of the ENS underestimates the frequency of double workload. The figures are similar in both surveys if the ENS data on those who report working are crossed with those who do housework, in which case both surveys rank the Autonomous Communities in the same way. Removing the adverb "principalmente" ("mainly") from the housework-dedication category of the 2011 ENS will bring the economic-activity question into line with those used in international health surveys and in those of the Autonomous Communities.
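The cross-tabulation that reconciles the two surveys amounts to an AND between two recoded variables. A hypothetical sketch on toy microdata (the real analysis uses the ENS/EPA question categories cited above):

```python
import pandas as pd

# Toy microdata: one row per respondent aged 25-64.
# 'works' stands for the economic-activity item, 'housework' for the
# household-chores item, both recoded to booleans.
df = pd.DataFrame({
    "sex":       ["F", "F", "F", "M", "M", "M"],
    "works":     [True, True, False, True, True, False],
    "housework": [True, False, True, True, False, True],
})

# Double workload = reports paid work AND does household chores.
df["double_workload"] = df["works"] & df["housework"]

# Prevalence by sex: the figure compared between ENS and EPA.
print(df.groupby("sex")["double_workload"].mean())
```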
Abstract:
The adaptation of the Spanish University to the European Higher Education Area (EEES in Spanish) demands the integration of new tools and skills that make the teaching-learning process easier. This adaptation involves a change in the evaluation methods, moving from a system where the student was evaluated with a single final exam to a new system that includes continuous assessment, in which the final exam may represent at most 50% of the mark in the vast majority of universities. Devising a new and fair continuous evaluation system is not an easy task: it means teachers following up each student's learning process, and as a consequence an additional workload on existing staff resources. Traditionally, continuous assessment is associated with the daily work of the student and a collection of different marks partly or entirely based on the work done during the academic year. Small groups of students and attendance control are important aspects to take into account in order to assess students adequately. However, most university degrees have groups with more than 70 students, and attendance control is a complicated task to perform, mostly because it consumes significant amounts of staff time. Another problem is that attendance control would encourage uninterested students to be present in class, which might cause trouble for their classmates. After a two-year experience in the development of continuous assessment in Statistics subjects in Social Science degrees, we think that individual and periodic tasks are the best way to assess results. These tasks or examinations must be done in the classroom during regular lessons, so we need an efficient system to put together different, personalized questions in order to prevent students from cheating. In this paper we provide an efficient and effective way to produce randomized examination papers by using Sweave, a tool that generates data, graphics and statistical computations from the R software and presents the results in PDF documents created with LaTeX. In this way, we can design an exam template that can be compiled to generate as many PDF documents as required and, at the same time, the solutions needed to correct them easily.
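A minimal Sweave sketch of the idea (the file layout and the question are illustrative assumptions): R chunks draw personalized data, \Sexpr{} inlines both the data and the solutions, and recompiling the same template produces as many distinct exam papers as needed.

```latex
% exam.Rnw -- compile with: R CMD Sweave exam.Rnw && pdflatex exam.tex
\documentclass{article}
\begin{document}
\section*{Statistics test -- personalized variant}
<<echo=FALSE>>=
# A fresh R session seeds its RNG from the clock, so every compilation
# of this template generates a different data set.
x <- round(rnorm(10, mean = 50, sd = 8), 1)
@
Given the sample \Sexpr{paste(x, collapse = ", ")}, compute the sample
mean and the sample standard deviation.

% Keep the next line on the instructor's copy only:
\emph{Solution:} mean $= \Sexpr{round(mean(x), 2)}$, sd $= \Sexpr{round(sd(x), 2)}$.
\end{document}
```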
Abstract:
The development of applications and services for mobile systems faces a varied range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to respond to this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, scheduling imprecise computations according to the workload of the embedded system is the means to move computation to the cloud, according to the priority and response time of the tasks to be executed, and thereby meet the productivity and quality of the desired services. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example is described in which this technique is tested in running contexts with heterogeneous workloads in order to check the validity of the proposed model.
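A toy version of the decision rule implied above (the names and the timing model are assumptions; the paper's scheduling method is more elaborate): offload when the estimated network delay still leaves room for a full-precision remote result, otherwise degrade precision locally to keep the response time.

```python
def place_task(local_s, cloud_s, est_net_delay_s, deadline_s):
    """Decide where (and how precisely) to run a task.
    local_s / cloud_s: estimated compute times locally and in the cloud;
    est_net_delay_s: estimated round-trip network delay to the cloud."""
    cloud_total = cloud_s + est_net_delay_s
    if cloud_total <= deadline_s and cloud_total < local_s:
        return "cloud"            # full-precision result, computed remotely
    if local_s <= deadline_s:
        return "local-precise"    # the device can meet the deadline itself
    return "local-imprecise"      # imprecise computation: trade quality for time

# Example: slow device, fast cloud, but a congested network.
print(place_task(local_s=4.0, cloud_s=0.5, est_net_delay_s=5.0, deadline_s=3.0))
# -> 'local-imprecise'
```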
Abstract:
Directed research project submitted for the master's degree in criminology, internal security option.
Abstract:
This study aimed to analyze the role of social work practitioners in the field of elderly care. As a case study, it is restricted to the municipality of Covilhã. A study was therefore carried out along four strands: the social workers, the Private Institutions of Social Solidarity (IPSS) and the municipality of Covilhã. To make the research more consistent, an interview guide was developed and applied to several professionals working in IPSS with three types of service (ERPI, SAD and CD) in the municipality. The results were then analyzed and compared with data collected from the Censuses, the National Institute of Statistics and Pordata. The topic is highly topical and relevant given the national situation: on the one hand, rising unemployment and emigration rates; on the other, an ageing population and an increase in social cases. It was concluded that, for the interviewees, the greatest difficulties in performing their duties are the excessive working hours and the daily problems associated with the elderly (physical and mental dependence, grief, inhumane conditions); nevertheless, they reported finding their first job easily and quickly.
Abstract:
BACKGROUND Overtreatment of asymptomatic bacteriuria (ASB) is widespread and may result in antibiotic side-effects, excess costs to the healthcare system, and potentially trigger antimicrobial resistance. According to international management guidelines, ASB is not an indication for antibiotic treatment (with few exceptions). AIM To determine reasons for using antibiotics to treat ASB in the absence of a treatment indication. METHODS A qualitative study was conducted at a tertiary care hospital in Switzerland during 2011. We interviewed 21 internal medicine residents and attending physicians selected by purposive sampling, using a semi-structured questionnaire. Responses were analysed in an inductive thematic content approach using dedicated software (MAXQDA(®)). FINDINGS In the 21 interviews, the following thematic rationales for antibiotic overtreatment of ASB were reported (in order of reporting frequency): (i) treating laboratory findings without taking the clinical picture into account (N = 17); (ii) psychological factors such as anxiety, overcautiousness, or anticipated positive impact on patient outcomes (N = 13); (iii) external pressures such as institutional culture, peer pressure, patient expectation, and excessive workload that interferes with proper decision-making (N = 9); (iv) difficulty with interpreting clinical signs and symptoms (N = 8). CONCLUSION In this qualitative study we identified both physician-centred factors (e.g. overcautiousness) and external pressures (e.g. excessive workload) as motivators for prescribing unnecessary antibiotics. Also, we interpreted the frequently cited practice of treating asymptomatic patients based on laboratory findings alone as a lack of awareness of evidence-based best practices.
Abstract:
Directed research project submitted for the master's degree in criminology, internal security option.