953 results for pacs: distributed system software


Relevance:

100.00%

Publisher:

Abstract:

Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete the implementation and use it to build two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using the latter application, we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to demonstrate the versatility of our service.
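To make the coordination model concrete, here is a minimal in-memory sketch of the classic tuple-space primitives (out/rd/in) that this line of work builds on; it is a toy illustration and assumes nothing about the actual API of the Globus Toolkit service described above.

```python
# Minimal tuple-space sketch (illustrative only; not the Globus service).
# Tuples are plain Python tuples; templates may use None as a wildcard.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Insert a tuple and wake up blocked readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Block until a matching tuple exists; non-destructive read."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def inp(self, template):
        """Block until a matching tuple exists; remove and return it."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out(("task", 42))
print(ts.inp(("task", None)))   # -> ("task", 42)
```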

Relevance:

100.00%

Publisher:

Abstract:

Through the Cloud Foundry "stack" concept, a new level of isolation is provided to applications running on the PaaS. This also introduces a deployment feature that can easily scale on distributed systems, across both public and private clouds.

Relevance:

100.00%

Publisher:

Abstract:

Since the beginning of the new millennium, developments in mobile computing, the Internet, the Internet of Things and cloud computing have made it possible to innovate methods of work and collaboration. The more recent evolution of mobile computing and augmented reality potentially opens new horizons in the development of collaborative distributed systems. Several frameworks supporting augmented reality exist today, such as Wikitude, Metaio and Layar, but the primary focus of these libraries is to provide a set of core APIs for rendering 3D images on devices, for analyzing the space in which to place those images, and for marker recognition. Such functionality has been a great step forward for computer graphics and, clearly, for augmented reality, but it paves the way for an even more augmented Augmented Reality (AR). This thesis presents the conception, analysis, design and prototyping of a situated distributed system supporting collaboration based on augmented reality. The study of this application aims to highlight many innovative aspects that to date have not been explored in depth, let alone developed as APIs or provided by augmented-reality libraries.

Relevance:

100.00%

Publisher:

Abstract:

Localization is of fundamental importance for carrying out various tasks in mobile robotics. The exact degree of precision required in the localization depends on the nature of the task. GPS provides global position estimation but is restricted to outdoor environments and has an inherent imprecision of a few meters. In indoor spaces, other sensors such as lasers and cameras are commonly used for position estimation, but these require landmarks (or maps) in the environment and a fair amount of computation to process complex algorithms. These sensors also have a limited field of view. Wireless Networks (WN) are now widely available in indoor environments and can enable efficient global localization with relatively low computing requirements. However, the inherent instability of the wireless signal prevents its use for very accurate position estimation. The growing number of Access Points (AP) increases the areas where signals overlap, and this could be a useful means of improving the precision of the localization. In this paper we evaluate the impact of the number of Access Points on mobile node localization using Artificial Neural Networks (ANN). We use three to eight APs as signal sources and show how the ANNs learn and generalize the data. In addition, we evaluate the robustness of the ANNs and a heuristic that attempts to decrease the localization error. To validate our approach, several ANN topologies have been evaluated in experimental tests conducted with a mobile node in an indoor space.
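The following is a hedged sketch of fingerprint-style localization with an ANN in the spirit of the abstract, using synthetic data generated from a toy path-loss model; the paper's real topologies, AP counts and measurements are not reproduced here.

```python
# Toy WiFi-fingerprint localization: learn (RSSI vector) -> (x, y).
# Synthetic data and a generic MLP; illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_aps = 6                              # the paper varies this from 3 to 8
aps = rng.uniform(0, 20, (n_aps, 2))   # AP positions in a 20x20 m room

def rssi(pos):
    """Toy log-distance path-loss model with Gaussian noise (dBm)."""
    d = np.linalg.norm(aps - pos, axis=1) + 0.1
    return -40 - 20 * np.log10(d) + rng.normal(0, 2, n_aps)

positions = rng.uniform(0, 20, (2000, 2))         # survey points
signals = np.array([rssi(p) for p in positions])  # RSSI fingerprints

X_tr, X_te, y_tr, y_te = train_test_split(signals, positions,
                                          random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
err = np.linalg.norm(net.predict(X_te) - y_te, axis=1)
print(f"mean localization error: {err.mean():.2f} m")
```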

Relevance:

100.00%

Publisher:

Abstract:

Self-stabilization is a property of a distributed system whereby, regardless of the legitimacy of its current state, the system eventually reaches a legitimate state and remains legitimate thereafter. The elegance of self-stabilization stems from the fact that it endows distributed systems with a strong fault-tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most existing techniques for the verification and design of self-stabilization are either brute force or manual approaches not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics exploit both the computational power of a single workstation and the available parallelism on computer clusters. We obtain existing and new stabilizing solutions for classical protocols like maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., we study the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions, verifiable in the local state space of every process, for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply to rings of arbitrary size without worrying about state explosion.
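For a concrete feel of what stabilization on a ring means, below is a short simulation of Dijkstra's classic K-state self-stabilizing token ring, one of the mutual-exclusion ring protocols this line of work addresses; this is the textbook algorithm, not the dissertation's synthesis tool.

```python
# Dijkstra's K-state self-stabilizing token ring (mutual exclusion).
# From an arbitrary initial state, the ring converges to exactly one
# privilege (token) circulating among the N processes.
import random

N, K = 5, 6                  # K > N guarantees convergence
state = [random.randrange(K) for _ in range(N)]  # arbitrary start

def privileged(i):
    # Process 0 is privileged when it EQUALS its predecessor;
    # every other process is privileged when it DIFFERS from it.
    # (state[i - 1] with i == 0 wraps to the last process, closing the ring.)
    return state[i] == state[i - 1] if i == 0 else state[i] != state[i - 1]

for step in range(100):
    tokens = [i for i in range(N) if privileged(i)]
    if len(tokens) == 1:
        print(f"stabilized after {step} steps: single token at {tokens[0]}")
        break
    i = random.choice(tokens)        # nondeterministic scheduler
    if i == 0:
        state[0] = (state[0] + 1) % K
    else:
        state[i] = state[i - 1]
```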

Relevance:

100.00%

Publisher:

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamical systems that dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints for observing flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When projecting 3D streamlines onto 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnections in the transformed space. Most viewpoint selection methods only consider external viewpoints outside of the flow field, which fails to convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints, providing users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work has also been extended to deal with unsteady flow fields. Besides flow field visualization, other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system together with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we developed a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
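As a rough illustration of the information-theoretic flavor of viewpoint selection mentioned above, the sketch below scores candidate viewpoints by the entropy of the visible-area distribution over streamlines; it is a toy under assumed visibility numbers, not the dissertation's dual-channel method.

```python
# Viewpoint scoring by entropy of the visible-area distribution.
# visibility[v][s] is assumed to be the projected visible area of
# streamline s from viewpoint v, produced by some hypothetical renderer.
import numpy as np

def viewpoint_entropy(visible_area):
    """Shannon entropy of the normalized visible-area distribution."""
    p = visible_area / visible_area.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

visibility = np.array([     # 3 viewpoints x 4 streamlines (toy numbers)
    [9.0, 0.5, 0.3, 0.2],   # one streamline dominates -> low entropy
    [3.0, 2.5, 2.2, 2.3],   # balanced view -> high entropy
    [5.0, 4.0, 0.0, 1.0],
])
scores = [viewpoint_entropy(v) for v in visibility]
best = int(np.argmax(scores))
print(f"best viewpoint: {best}, entropies: {np.round(scores, 3)}")
```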

Relevance:

100.00%

Publisher:

Abstract:

Web 2.0 opens up new ways for scientists to handle knowledge and information: researching information and sources, exchanging knowledge with others, managing resources and creating one's own content on the Web have become simple and inexpensive. This article addresses the significance of Web 2.0 for handling knowledge and information and shows how the cooperation of many individuals makes the creation of new knowledge and of innovations possible. The influence of Web 2.0 on science and the possible advantages and disadvantages of its use are discussed. In addition, a brief overview is given of studies that examine the use of Web 2.0 in the general population. The empirical part of the article presents the method and results of the survey study "Wissenschaftliches Arbeiten im Web 2.0" (Scholarly Work in Web 2.0). Early-career researchers in Germany were surveyed about their use of Web 2.0 for their own scholarly work. The results show that Wikipedia in particular is used intensively to very intensively by a large share of respondents as a starting point for research. Active use of Web 2.0, e.g. writing one's own blog or contributing to the online encyclopedia Wikipedia, is still limited. Many services are unknown or viewed rather skeptically, and the local desktop computer has not yet been replaced by the Web as the central storage location.

Relevance:

100.00%

Publisher:

Abstract:

A new research platform offers, for the first time, a combination of private data and literature management with social tools such as wikis, project management and networking. "scholarz.net" transfers the success principles of Web 2.0 to research and makes the new technical possibilities fruitful for science.

Relevance:

100.00%

Publisher:

Abstract:

Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and still others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of the system behavior is needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems may be merely a matter of perspective: it could be possible to understand Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.

Relevance:

100.00%

Publisher:

Abstract:

The emergence of cloud datacenters enhances the capability of online data storage. Since massive data is stored in datacenters, it is necessary to effectively locate and access data of interest in such a distributed system. However, traditional search techniques only allow users to search for images via exact-match keywords through a centralized index, which cannot satisfy the requirements of content-based image retrieval (CBIR). In this paper, we propose a scalable image retrieval framework which can efficiently support content similarity search and semantic search in the distributed environment. Its key idea is to integrate image feature vectors into distributed hash tables (DHTs) by exploiting the property of locality-sensitive hashing (LSH). Thus, images with similar content are most likely gathered on the same node without any knowledge of global information. For searching semantically close images, relevance feedback is adopted in our system to bridge the gap between low-level and high-level features. We show that our approach yields a high recall rate with good load balance and requires only a small number of hops.
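A minimal sketch of this key idea follows, assuming a random-hyperplane LSH family and a hash-based placement of buckets onto nodes; the paper's actual hash family, feature extractor and DHT substrate are not specified here.

```python
# Random-hyperplane LSH mapping image feature vectors to DHT keys,
# so similar vectors tend to land on the same node (toy illustration).
import hashlib
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 128, 16
planes = rng.normal(size=(BITS, DIM))   # hash family shared by all nodes

def lsh_key(feature):
    """Sign pattern against random hyperplanes -> bucket id string."""
    bits = (planes @ feature > 0).astype(int)
    return "".join(map(str, bits))

def dht_node(key, n_nodes=32):
    """Hash-based placement of a bucket onto one of n_nodes."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return int(digest, 16) % n_nodes

a = rng.normal(size=DIM)
b = a + rng.normal(scale=0.05, size=DIM)   # near-duplicate of image a
c = rng.normal(size=DIM)                   # unrelated image
print(lsh_key(a) == lsh_key(b))            # likely True: same bucket/node
print(dht_node(lsh_key(a)), dht_node(lsh_key(c)))
```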

Relevance:

100.00%

Publisher:

Abstract:

The impedance-based stability-assessment method has turned out to be a very effective tool, and its usage is growing rapidly in applications ranging from conventional interconnected dc/dc systems to grid-connected renewable energy systems. The results are sometimes given as a forbidden region in the complex plane, outside of which the impedance ratio, known as the minor-loop gain, shall stay to ensure robust stability. This letter discusses the circle-like forbidden region occupying minimum area in the complex plane, defined by applying the maximum peak criterion, a well-known concept in control engineering. The investigation shows that the circle-like forbidden region ensures robust stability only if the impedance-based minor-loop gain is determined at the very input or output of each subsystem within the interconnected system. Experimental evidence is provided based on a small-scale dc/dc distributed system.
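For reference, the standard maximum-peak-criterion form of such a minimum-area circular region (textbook control theory, consistent with but not copied from the letter's derivation): bounding the sensitivity peak of the minor-loop gain L(jω) by M_s is equivalent to keeping L(jω) outside a disk of radius 1/M_s around the critical point -1.

```latex
% Maximum peak criterion applied to the minor-loop gain L(j\omega):
\[
  |S(j\omega)| \;=\; \frac{1}{\,|1 + L(j\omega)|\,} \;\le\; M_s
  \quad\Longleftrightarrow\quad
  |L(j\omega) - (-1)| \;\ge\; \frac{1}{M_s} \quad \forall\,\omega ,
\]
% i.e.\ $L(j\omega)$ must avoid the disk of radius $1/M_s$ centered at
% $-1 + j0$: the minimum-area circle-like forbidden region.
```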

Relevance:

100.00%

Publisher:

Abstract:

The aim of this project is the study of scalability and high-availability solutions in distributed systems, and their implementation in whichever of the systems analyzed by Telefónica Digital, PopBox and Rush, is deemed more suitable. Nowadays, many services and applications are hosted directly on the Web, reducing the cost of using certain services and improving the productivity and competitiveness of the companies that use them. This growth of cloud technologies in recent years has created the need for systems that are scalable, reliable and available as much of the time as possible. A failure in the service does not affect a single company but every company using that service. Throughout this project, the high-availability and scalability solutions implemented in several distributed systems are studied and critically evaluated. The suitability of these solutions for the systems to which they will later be applied, PopBox and Rush, is also analyzed. Different solutions have been designed for the platforms involved, following several approaches and performing an exhaustive analysis of each one, taking into account the performance and reliability of each approach. Once the most suitable strategy was determined, a reliable implementation of the system was carried out. For each module implemented, unit and integration testing was performed to ensure the correct behavior of the system and its integrity when changes are made. Specifically, the objectives to be achieved are the following: 1. Exhaustive analysis of the scalability and high-availability systems that currently exist. 2. Design of a general highly available (HA) and scalable solution based on the previous point. 3. Analysis of the suitability of the PopBox and Rush systems for the design of a scalable distributed environment. 4. Design and implementation of an ad-hoc solution in the chosen system.

Relevance:

100.00%

Publisher:

Abstract:

The complex event processing (CEP) paradigm addresses the challenge of analyzing large volumes of data in real time, for example monitoring stock prices or road-traffic status. In this paradigm, incoming events must be processed on the fly, without being stored, because the data volume is too high and low latency is required. Highly scalable distributed systems with high throughput and low latency are used for this purpose. Such systems are usually complex and take considerable time to learn and use, and many of them lack a declarative query language in which to express the computation to be performed over incoming events. In this work, an SQL-like declarative query language has been developed, together with a compiler that translates it into the native language of a massive event-processing system. Since the language is similar to SQL, with which a great number of developers are already familiar, learning it requires little effort. Its use thus reduces execution failures of queries deployed on the distributed system, while abstracting the programmer from the details of that system.
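To ground the idea, here is a toy, single-process illustration of the kind of continuous windowed query such an SQL-like language would express and a compiler would map onto the distributed engine; the query syntax shown in the comment is hypothetical, not the work's actual grammar.

```python
# Toy evaluator for a continuous windowed aggregation, e.g. the
# hypothetical query:
#   SELECT symbol, AVG(price) FROM quotes WINDOW LAST 3 GROUP BY symbol
# Events are processed on the fly, never stored beyond the window.
from collections import defaultdict, deque

WINDOW = 3
windows = defaultdict(lambda: deque(maxlen=WINDOW))  # per-symbol window

def on_event(symbol, price):
    """Consume one event; emit the current windowed average."""
    w = windows[symbol]
    w.append(price)
    return symbol, sum(w) / len(w)

stream = [("ACME", 10.0), ("ACME", 12.0), ("INIT", 5.0),
          ("ACME", 14.0), ("ACME", 20.0)]
for sym, price in stream:
    print(on_event(sym, price))   # last ACME output: avg of (12, 14, 20)
```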

Relevance:

100.00%

Publisher:

Abstract:

The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the shiniest example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other forms of computing lies in the use of contextual information. Classical applications either do not use contextual information at all or use only a small part of it, integrated in an ad hoc fashion through an application-specific implementation. This one-off treatment is motivated by the difficulty of sharing context across applications. In fact, what counts as contextual information depends on the application type: for an image editor, the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for a file system the image together with the camera settings is the information, and the context is the metadata external to the file, such as the modification date or the last-access timestamp. This means that contextual information is hard to share, and a communication middleware that supports context explicitly eases application development in pervasive computing. At the same time, the use of context should not be mandatory; otherwise compatibility with non-context-aware applications would be lost, reducing the communication middleware to a context middleware. SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two new elements: the context and the context function. The context represents the contextual information proper, attached to the message to be sent or required by the subscriber to receive notifications, whereas the context function is evaluated using the publisher's context and the subscriber's context to decide whether the current message and context are useful to the subscriber. This decouples context management from the logic of the context function, increasing the flexibility of communication across different applications. Indeed, since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. In any case, a possible semantic mismatch remains, because it depends on how each application interprets the data and cannot be resolved by an agnostic third party.
The IoT environment introduces not only context but also scaling challenges. The number of sensors, the volume of data they produce and the number of applications that may be interested in harvesting such data keep growing. Today's response to this need is cloud computing, but it requires applications that scale not merely at all, but elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: (1) how to scale an application elastically and (2) how to monitor the application to know when it should scale in or out. E-SilboPS is the elastic version of SilboPS and, thanks to its content-based publish/subscribe nature, fits perfectly as a solution to the monitoring problem; unlike other solutions [5], it scales efficiently to meet workload demand without overprovisioning or underprovisioning resources. Moreover, it is based on a newly designed algorithm that shows how to add elasticity to an application under different state constraints: stateless, isolated stateful with external coordination, and shared stateful with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer as the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs relative to adjacent ones. This provides insight into the current trend of the whole system, in order to predict whether the next configuration would offset its cost with the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments, to find out which one is best for a given workload. Finally, the overhead introduced by the different configurations has been estimated to identify the primary factor limiting throughput. This helps to determine the intrinsic sequential part and the base overhead [26] of an optimal versus a suboptimal deployment. Depending on the type of workload, this estimate can be as low as 10% at a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload. This Karp-Flatt metric estimation is important for the management system because it indicates in which direction (scale in or out) the deployment has to be changed to improve its performance, instead of simply applying a scale-out policy.
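A minimal sketch of the context-function extension described above: content-based matching augmented with a predicate evaluated over the publisher's and subscriber's contexts, with an empty default so classical (non-context-aware) subscribers keep working. The API names here are illustrative, not SilboPS's actual interface.

```python
# Content-based pub/sub with a context function (illustrative only).
def default_ctx_fn(pub_ctx, sub_ctx):
    """Default context function: always deliver (classical mode)."""
    return True

class Broker:
    def __init__(self):
        self.subs = []   # (content_filter, sub_ctx, ctx_fn, callback)

    def subscribe(self, content_filter, callback,
                  sub_ctx=None, ctx_fn=default_ctx_fn):
        self.subs.append((content_filter, sub_ctx or {}, ctx_fn, callback))

    def publish(self, event, pub_ctx=None):
        pub_ctx = pub_ctx or {}
        for content_filter, sub_ctx, ctx_fn, cb in self.subs:
            # deliver only if content matches AND the context function holds
            if content_filter(event) and ctx_fn(pub_ctx, sub_ctx):
                cb(event)

broker = Broker()
# Context-aware subscriber: only events published in its own room.
broker.subscribe(
    content_filter=lambda e: e.get("type") == "temperature",
    callback=lambda e: print("got", e),
    sub_ctx={"room": "lab"},
    ctx_fn=lambda p, s: p.get("room") == s.get("room"))
broker.publish({"type": "temperature", "value": 21}, pub_ctx={"room": "lab"})
broker.publish({"type": "temperature", "value": 19}, pub_ctx={"room": "hall"})
```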

Relevance:

100.00%

Publisher:

Abstract:

Regional cerebral blood flow was measured with positron-emission tomography during two encoding and two retrieval tasks that were designed to compare memory for object features with memory for object locations. Bilateral increases in regional cerebral blood flow were observed in both anterior and posterior regions of inferior temporal cortex and in ventral regions of prestriate cortex, when the condition that required retrieval of object locations was subtracted from the condition that required retrieval of object features. During encoding, these changes were less pronounced and were restricted to the left inferior temporal cortex and right ventral prestriate cortex. In contrast, both encoding and retrieval of object location were associated with bilateral changes in dorsal prestriate and posterior parietal cortex. Finally, the two encoding conditions activated left frontal lobe regions preferentially, whereas the two retrieval conditions activated right frontal lobe regions. These findings confirm that, in human subjects, memory for object features is mediated by a distributed system that includes ventral prestriate cortex and both anterior and posterior regions of the inferior temporal gyrus. In contrast, memory for the locations of objects appears to be mediated by an anatomically distinct system that includes more dorsal regions of prestriate cortex and posterior regions of the parietal lobe.