916 results for Distributed virtual environments (DVE)


Relevance:

30.00%

Abstract:

Realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many prevalent virtualization technologies such as Xen, VMware, and KVM use host-specific CPU usage monitoring, which is coarse-grained. In this paper, we present a performance monitoring tool for KVM-based virtualized machines that measures the CPU overhead incurred by the hypervisor on behalf of a virtual machine, along with the CPU usage of the virtual machine itself. The fine-grained resource usage information provided by this tool can be used in diverse situations such as resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of the tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
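The abstract does not include the tool itself; as a rough sketch of the underlying idea, and assuming the modern QEMU convention of naming vCPU threads like "CPU 0/KVM" (which may differ across versions), the snippet below splits a KVM guest's CPU time into vCPU-thread time (the guest's own usage) and remaining QEMU-thread time (overhead incurred on the guest's behalf) by reading /proc on Linux.

```python
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def thread_cpu_seconds(pid, tid):
    # utime and stime are the 14th and 15th fields of .../stat overall;
    # everything up to the closing ")" of the command name is skipped.
    with open(f"/proc/{pid}/task/{tid}/stat") as f:
        fields = f.read().rpartition(")")[2].split()
    return (int(fields[11]) + int(fields[12])) / CLK_TCK

def vm_cpu_breakdown(qemu_pid):
    """Split a QEMU/KVM process's CPU time into guest vCPU time and the
    remaining (emulation/hypervisor) time spent on the guest's behalf."""
    vcpu = other = 0.0
    for tid in os.listdir(f"/proc/{qemu_pid}/task"):
        with open(f"/proc/{qemu_pid}/task/{tid}/comm") as f:
            name = f.read().strip()
        seconds = thread_cpu_seconds(qemu_pid, tid)
        if "KVM" in name:      # assumption: vCPU threads named "CPU n/KVM"
            vcpu += seconds
        else:                  # main loop, I/O threads, etc.
            other += seconds
    return vcpu, other
```

A usage line like `print(vm_cpu_breakdown(1234))` would report both components for the QEMU process with PID 1234.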

Relevance:

30.00%

Abstract:

The dissertation studies the general area of complex networked systems, which consist of interconnected, active, heterogeneous components and usually operate in uncertain environments with incomplete information. The problems associated with these systems are typically large-scale and computationally intractable, yet the systems are also well structured, with features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit these structures, leading to computationally efficient and distributed solutions, and to apply them to improve system operation and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, yet there are daunting technical challenges in managing them and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is the development of distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to a game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it decomposes the design problem hierarchically into the design of the game (encoding the systemic objective) and the choice of local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
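As a minimal sketch of this recipe (a toy example, not the thesis's own code), the snippet below gives each agent its marginal contribution to a small coverage objective, a utility design known to induce a potential game, and runs round-robin best responses until no agent can improve.

```python
# Toy coverage game: each agent picks one resource; the system objective
# W is the total value of the distinct resources covered.
VALUES = {"a": 3.0, "b": 2.0, "c": 1.0}
N = 3

def welfare(choices):
    return sum(VALUES[r] for r in set(choices))

def utility(i, choices):
    # Marginal-contribution utility: W with agent i minus W without it.
    others = {r for j, r in enumerate(choices) if j != i}
    return welfare(choices) - sum(VALUES[r] for r in others)

choices = ["a", "a", "a"]
improved = True
while improved:                          # round-robin best responses
    improved = False
    for i in range(N):
        trial = lambda r: choices[:i] + [r] + choices[i + 1:]
        best = max(VALUES, key=lambda r: utility(i, trial(r)))
        if utility(i, trial(best)) > utility(i, choices):
            choices[i] = best
            improved = True

# W acts as an exact potential: every improving move raises it, so the
# dynamics stop at an equilibrium aligned with the system objective.
print(choices, welfare(choices))         # all three resources covered, W = 6.0
```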

Relevance:

30.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. Despite its simple formulation, this optimization problem is challenging because of its combinatorial nature. We study several variations of the problem, assuming different allocation and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest a simple heuristic: reliable storage is achieved by spreading the budget maximally over all nodes when the budget is large, and minimally over a few nodes when it is small. Coding is therefore beneficial in the former case, while uncoded replication suffices in the latter.
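For the symmetric case, the success probability has a simple closed form; the sketch below computes it for a toy instance, assuming an ideal (MDS-like) code so that recovery succeeds whenever the accessed amounts sum to at least one unit of data.

```python
from math import ceil, comb

def recovery_prob(n, r, T, m):
    """Budget T spread evenly over m of n nodes (T/m each); a collector
    reads a uniform random r-subset and succeeds iff it sees >= 1 unit."""
    need = ceil(m / T)  # nonempty nodes needed: k * (T/m) >= 1
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(need, min(m, r) + 1)) / comb(n, r)

# With a large budget (T = 2 copies' worth), maximal spreading wins:
for m in (2, 4, 8):
    print(m, round(recovery_prob(n=8, r=4, T=2.0, m=m), 3))
```

Here spreading over all 8 nodes guarantees recovery, matching the heuristic above; with a small budget (e.g., T = 1.2) minimal spreading wins instead.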

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system in which messages created at regular intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must decode each message within a given delay from its creation time. For erasure models with a limited number of erasures per coding window or per sliding window, and for bursty models whose maximum burst length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We further study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; here the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability of any time-invariant code and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
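As a toy illustration of the i.i.d. model only (an MDS-style abstraction, not the intrasession constructions analyzed in the thesis), the Monte Carlo below estimates the probability that a message spread over T packets decodes when each packet is erased independently with probability p.

```python
import random

def decoding_prob(k, T, p, trials=100_000, seed=1):
    """Toy model: a message decodes iff at least k of its T packets
    survive a channel erasing each packet i.i.d. with probability p."""
    rng = random.Random(seed)
    ok = sum(sum(rng.random() >= p for _ in range(T)) >= k
             for _ in range(trials))
    return ok / trials

print(decoding_prob(k=4, T=6, p=0.1))  # ~ P(Binomial(6, 0.9) >= 4) ~ 0.984
```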

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
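The LRU half of that baseline is standard; a minimal sketch follows (the proposed VIP-based policy itself is not reproduced here).

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement, the baseline policy above."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> data, oldest first

    def get(self, name):
        if name not in self.store:
            return None              # cache miss: forward the interest
        self.store.move_to_end(name) # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```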

Relevance:

30.00%

Abstract:

Effects on fish reproduction can result from a variety of toxicity mechanisms that first operate at the molecular level. Notably, the presence in the environment of compounds termed endocrine disrupting chemicals (EDCs) can cause adverse effects on reproduction by interfering with the endocrine system. In some cases, exposure to EDCs leads to feminization of the animal, and male fish may develop oocytes in the testis (the intersex condition). Mugilid fish are well-suited sentinel organisms for studying the effects of reproductive EDCs in the monitoring of estuarine and marine environments. Up-regulation of aromatases and vitellogenins in males and juveniles, and the presence of intersex individuals, have been described in a wide array of mullet species worldwide. There is a need to develop new molecular markers that identify early feminization responses and the intersex condition in fish populations, by studying the mechanisms that regulate gonad differentiation under exposure to xenoestrogens. Interestingly, electrophoresis of gonad RNA shows strong expression of 5S rRNA in oocytes, indicating the potential of 5S rRNA and its regulating proteins to become useful molecular markers of oocyte presence in testis. The use of these oocyte markers to sex fish and to identify intersex mullets could therefore provide powerful molecular biomarkers for assessing xenoestrogenicity in field conditions.

Relevance:

30.00%

Abstract:

This project aims to broaden knowledge of reconfigurable robotic mechanisms and to put that knowledge into practice. Such mechanisms can achieve rapid transitions and are able to adapt themselves to many different environments, leading to reductions in cost and space requirements. To this end, the state of the art is reviewed in order to gather information on the main applications and opportunities this field offers in different areas. Next, a kinematic analysis of a specific robot is carried out and, together with trajectory-planning methods, implemented in graphical software to simulate its motion. The software tool Matlab is used for all the programming and visualization throughout the project.
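The project's implementation is in Matlab; as an illustrative sketch of the same pipeline in Python (link lengths and targets are assumed values), the snippet below computes the forward kinematics of a planar two-link arm and a cubic-time-scaled joint trajectory for simulation.

```python
import numpy as np

L1, L2 = 1.0, 0.7  # link lengths, illustrative

def forward_kinematics(q1, q2):
    """End-effector position of a planar 2R arm."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

def joint_trajectory(q_start, q_goal, steps=50):
    """Interpolate joint angles with cubic time scaling (zero end velocities)."""
    tau = np.linspace(0.0, 1.0, steps)
    s = 3 * tau**2 - 2 * tau**3
    return [(1 - si) * np.asarray(q_start) + si * np.asarray(q_goal) for si in s]

for q in joint_trajectory((0.0, 0.0), (np.pi / 2, np.pi / 4), steps=5):
    print(forward_kinematics(*q))  # trace of the end effector
```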

Relevance:

30.00%

Abstract:

Grinding is an advanced machining process for manufacturing valuable, complex, and accurate parts for high-added-value sectors such as aerospace and wind generation. Because of the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be measured on-line easily or cheaply. In this paper, a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor exploits the relation between the variables to be measured and power consumption in the wheel spindle, which can be measured easily. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. The new sensor is validated by comparing its results with actual measurements carried out on an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. For wheel wear, the absolute error is within the range of microns (average value 32 μm). For surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can easily be generalized to other grinding operations.
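The paper's sensor is a Layer-Recurrent neural network; the sketch below swaps in a much simpler stand-in, a linear model over lagged spindle-power readings, to illustrate the same input/output relation. All data here is synthetic.

```python
import numpy as np

def lagged(signal, lags):
    """Each row holds the last `lags` readings, so the model sees recent
    history somewhat as a recurrent network's state would."""
    L = len(signal) - lags + 1
    return np.column_stack([signal[i:i + L] for i in range(lags)])

rng = np.random.default_rng(0)
t = np.arange(500)
wear = 0.1 * t + rng.normal(0, 2, t.size)           # synthetic wheel wear
power = 50 + 0.8 * wear + rng.normal(0, 1, t.size)  # synthetic spindle power

lags = 5
X = np.column_stack([lagged(power, lags), np.ones(len(t) - lags + 1)])
y = wear[lags - 1:]                                  # align target with windows
coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # least-squares calibration
print("mean abs error:", np.abs(X @ coef - y).mean())
```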

Relevance:

30.00%

Abstract:

Service provisioning in assisted living environments faces distinct challenges due to the heterogeneity of networks, access technologies, and sensing/actuation devices in such environments. Existing solutions such as SOAP-based web services can interconnect heterogeneous devices and services, and can be published, discovered, and invoked dynamically. However, SOAP is heavier than what smart-environment contexts require and hence suffers from performance degradation. Alternatively, REpresentational State Transfer (REST) has gained much attention from the community and is considered a lighter and cleaner technology than SOAP-based web services. Since a RESTful web service is simple to publish and use, more and more service providers are moving toward REST-based solutions, which promote a resource-centric rather than a service-centric conceptualization. Despite these benefits, the dynamic discovery and eventing of RESTful services remain a major hurdle to utilizing the full potential of REST-based approaches. In this paper, we address this issue by providing a RESTful discovery and eventing specification and demonstrating it in an assisted living healthcare scenario. We envisage that through this approach, service provisioning in ambient assisted living and other smart environment settings will be more efficient, timely, and less resource-intensive.
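To make the resource-centric plus eventing combination concrete, here is a minimal sketch using Flask with webhook-style subscriptions; the routes and resource names are illustrative assumptions, not the specification proposed in the paper.

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
subscribers = []              # callback URLs registered for events
state = {"heart_rate": 72}    # one sensor resource

@app.route("/sensors/heart_rate", methods=["GET"])
def read_sensor():
    return jsonify(state)     # resource-centric read

@app.route("/sensors/heart_rate/subscriptions", methods=["POST"])
def subscribe():
    subscribers.append(request.json["callback"])  # register a callback URL
    return "", 201

def publish(new_value):
    """Called by the sensor driver: push the new reading to subscribers."""
    state["heart_rate"] = new_value
    for url in subscribers:
        try:
            requests.post(url, json=state, timeout=2)
        except requests.RequestException:
            pass              # a real system would retry or prune dead URLs

if __name__ == "__main__":
    app.run(port=8080)
```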

Relevance:

30.00%

Abstract:

This work reports the strategies and activities carried out in blended (partly on-site, partly online) courses taught from 2008 to 2012 to students of Business Administration and Accounting, using the Moodle virtual learning environment. The courses were designed according to the principles of collaborative learning approaches, with the aim of examining the possibilities of using the unusual (o insólito) as a reading and writing strategy. The work also seeks to show how the didactic strategies described can improve the reading and writing skills of students entering higher education, and to provide groundwork for continuing the research. The work maintains an interdiscursive relationship with Italo Calvino's Se um Viajante numa Noite de Inverno (If on a Winter's Night a Traveler, 1982), which supplies not only the chapter titles but also the subtle construction of a presence that runs through them. From the same author it takes six proposals (lightness, quickness, exactitude, visibility, multiplicity, and consistency) capable of improving the quality of communication in computer-mediated environments. It further draws on the theoretical work of Michel Serres for a singular concept of communication, one capable of transcending substantiality and of understanding and stimulating the construction of presence through exchanges and relations in virtual learning environments. Accordingly, the research rests on the construction, reflection, and reconstruction of online workshops that use the unusual, conceived as something surprising and destabilizing, to build strategies for improving the reading and textual production of university students. It also draws on research by Mikhail Bakhtin, Ângela Kleiman, Carla Coscarelli, and Vilson Leffa, whose contributions were decisive both in designing the online workshops and in the reflection woven throughout the research. Finally, it finds support in studies on Learning Styles and in the application of the Cloze Test to refine the reflections constructed.

Relevance:

30.00%

Abstract:

Co-CreativePen Toolkit is a pen-based 3D toolkit with which children can cooperatively design virtual environments. The toolkit is used to build different applications involving distributed pen-based 3D interaction. In the toolkit, sketching is encapsulated as a set of interaction techniques: children can use the pen to construct 3D and image-based-rendering (IBR) objects, navigate the virtual world, select and manipulate virtual objects, and communicate with other children. A child can use the pen to select another child in the virtual world and write a message to that child. The distributed architecture of Co-CreativePen Toolkit is based on CORBA: a common scene graph is managed by the server, with copies of this graph maintained in every client, and every change to a client's scene graph is propagated to the server and to the other clients.
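A toy sketch of that replication pattern follows, with direct method calls standing in for the CORBA transport; class and method names are illustrative.

```python
class SceneServer:
    def __init__(self):
        self.scene = {}        # authoritative scene graph: id -> node data
        self.clients = []

    def attach(self, client):
        self.clients.append(client)
        client.scene = dict(self.scene)   # bootstrap the new replica

    def apply(self, source, node_id, data):
        self.scene[node_id] = data        # commit on the server first
        for c in self.clients:
            if c is not source:
                c.scene[node_id] = data   # fan out to the other replicas

class SceneClient:
    def __init__(self, server):
        self.scene = {}
        self.server = server
        server.attach(self)

    def edit(self, node_id, data):
        self.scene[node_id] = data        # local update from a pen stroke
        self.server.apply(self, node_id, data)

server = SceneServer()
alice, bob = SceneClient(server), SceneClient(server)
alice.edit("cube1", {"pos": (0, 1, 0)})
print(bob.scene)   # bob's replica now holds cube1, received via the server
```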

Relevance:

30.00%

Abstract:

Based on analyses of more than 600 surface sediment samples, together with a large body of previous sedimentologic and hydrologic data, the characteristics of modern sedimentary environments and dynamic depositional systems in the southern Yellow Sea (SYS) are described, and the controversial formation mechanism of its muddy sediments is discussed. The southern Yellow Sea shelf can be divided into low-energy and high-energy sedimentary environments; the low-energy environments are further divided into cyclonic and anticyclonic ones, and the high-energy environments into depositional and eroded ones. Muddy depositional systems developed in the low-energy shelf environments. In the central part of the southern Yellow Sea, cold-eddy sediments were deposited under the action of a meso-scale cyclonic eddy (cold eddy), while in the southeast an anticyclonic-eddy muddy depositional system (warm-eddy sediment) was formed. These two types of sediments show evident differences in grain size, sedimentation rate, sediment thickness, and mineralogical characteristics. The high-energy environments, found mainly in the west, south, and northeast of the southern Yellow Sea, are covered with sandy seabed sediments. In the high-energy eroded environment, large amounts of sandstone gravels are distributed on the seabed. In the high-energy depositional environment, originally deposited fine materials (including clay and fine silt) were gradually re-suspended and transported to low-energy areas, where they were deposited again. In this paper, a sedimentation model for the cyclonic and anticyclonic types of muddy sediments is established, and a systematic interpretation of the origin of muddy depositional systems in the southern Yellow Sea is given.

Relevance:

30.00%

Abstract:

With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternatively, in this paper, we advocate the use of self-adaptation in the VMs themselves based on feedback about resource usage and availability. Consequently, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources so that they are both efficiently and fairly allocated to competing FVMs. Such properties are ensured using one of many provably convergent control rules, such as Additive-Increase/Multiplicative-Decrease (AIMD). By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources nor about the requirements of the FVMs competing for them. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of fewer than 500 lines of code changes to UML. We also present an analytic, control-theoretic model of FVM adaptation, which establishes its convergence and fairness properties; these properties are backed up with experimental results using our prototype FVM implementation.
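As a minimal sketch of that adaptation rule (the constants and the binary feedback signal are illustrative assumptions, not the UML implementation):

```python
def aimd_step(demand, congested, alpha=1.0, beta=0.5, floor=1.0):
    """Additive increase while resources are available,
    multiplicative decrease on congestion feedback."""
    return max(floor, demand * beta) if congested else demand + alpha

# Two competing FVMs converge toward a fair split of 100 capacity units.
demands = [80.0, 20.0]
for _ in range(200):
    congested = sum(demands) > 100.0      # feedback about availability
    demands = [aimd_step(d, congested) for d in demands]
print([round(d) for d in demands])        # roughly equal shares
```

The convergence-to-fairness behavior is the classic AIMD property that the control-theoretic model formalizes.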

Relevance:

30.00%

Abstract:

As the commoditization of sensing, actuation, and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged for many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development, and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can subsequently be re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft real-time constraints; (2) a prototype high-level language (and corresponding compiler) that illustrates the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via the common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, along with two case studies: the use of snBench to integrate physical and wireless network security, and its use as the foundation for semester-long student projects in a graduate-level Software Engineering course.
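As a purely hypothetical miniature of the tiered idea (the opcodes and helpers below are invented for illustration and are not snBench's tasking language), a high-level rule can be lowered to a small instruction list that is checked before execution:

```python
def compile_rule(sensor, threshold, action):
    """Lower "when sensor > threshold, do action" to a common program."""
    return [("READ", sensor), ("GT", threshold), ("IFDO", action)]

def verify(program, known_sensors, known_actions):
    """Static check: well-formed shape, only known sensors and actions."""
    assert [op for op, _ in program] == ["READ", "GT", "IFDO"]
    assert program[0][1] in known_sensors and program[2][1] in known_actions

def execute(program, readings, actions):
    if readings[program[0][1]] > program[1][1]:
        actions[program[2][1]]()

prog = compile_rule("motion", 0.8, "record_video")
verify(prog, {"motion"}, {"record_video"})
execute(prog, {"motion": 0.9}, {"record_video": lambda: print("recording")})
```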

Relevance:

30.00%

Abstract:

We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace in which the infrastructure provider is interested in maximizing its own profit, and each player is interested in minimizing its own cost, a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically motivated variants of the collocation game for which we establish convergence to a Nash equilibrium and derive bounds on the convergence rate and the price of anarchy. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and illustrate how collocation games offer a feasible distributed resource-management alternative for autonomic/self-organizing systems, in which a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
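A hedged toy instance of such a game (illustrative, not the paper's exact formulation): players with CPU demands choose among identical fixed-price servers, pay a usage-proportional share of their server's price, and take better-response moves until no one can lower their own cost.

```python
PRICE, CAP = 10.0, 1.0
demands = [0.5, 0.4, 0.3, 0.3, 0.2]
placement = list(range(len(demands)))      # start: one server per player

def load(server):
    return sum(d for d, s in zip(demands, placement) if s == server)

def cost(i, server):
    """Player i's cost share if it were placed on `server`."""
    l = load(server) + (demands[i] if placement[i] != server else 0)
    if l > CAP:
        return float("inf")                # move would violate capacity
    return PRICE * demands[i] / l          # proportional share of the price

for _ in range(100):                       # safety bound on the dynamics
    moved = False
    for i in range(len(demands)):
        best = min(range(len(demands)), key=lambda s: cost(i, s))
        if cost(i, best) < cost(i, placement[i]) - 1e-9:
            placement[i] = best
            moved = True
    if not moved:
        break

print(placement)  # a stable packing: no player can lower its own cost
```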

Relevance:

30.00%

Abstract:

Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackling this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques; it does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED, a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that the NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks, much larger than any of the existing techniques or services can handle.
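The core search can be pictured as backtracking with pruning; the sketch below is only a naive illustration of the embedding problem and omits NETEMBED's compact candidate representation and ordering heuristics.

```python
def embed(query_nodes, query_links, host_caps, host_bw):
    """query_nodes: {name: cpu}; query_links: {(a, b): bw};
    host_caps: {node: cpu}; host_bw: {(u, v): bw}, keys sorted."""
    names = list(query_nodes)

    def ok(mapping, name, host):
        if query_nodes[name] > host_caps[host]:
            return False                   # prune: node capacity unmet
        for (a, b), bw in query_links.items():
            if name in (a, b):
                other = b if a == name else a
                if other in mapping:
                    edge = tuple(sorted((host, mapping[other])))
                    if host_bw.get(edge, 0) < bw:
                        return False       # prune: link requirement unmet
        return True

    def search(mapping):
        if len(mapping) == len(names):
            return dict(mapping)
        name = names[len(mapping)]
        for host in host_caps:
            if host not in mapping.values() and ok(mapping, name, host):
                mapping[name] = host
                found = search(mapping)
                if found:
                    return found
                del mapping[name]          # backtrack
        return None

    return search({})

hosts = {"h1": 4, "h2": 2, "h3": 2}
bw = {("h1", "h2"): 10, ("h1", "h3"): 1, ("h2", "h3"): 5}
print(embed({"a": 3, "b": 2}, {("a", "b"): 8}, hosts, bw))  # {'a': 'h1', 'b': 'h2'}
```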

Relevance:

30.00%

Abstract:

Open environments involve distributed entities interacting with each other in an open manner. Many distributed entities are unknown to each other but need to collaborate and share resources in a secure fashion. Usually resource owners alone decide who is trusted to access their resources. Since resource owners in open environments do not have a complete picture of all trusted entities, trust management frameworks are used to ensure that only authorized entities will access requested resources. Every trust management system has limitations, and these limitations can be exploited by malicious entities. One vulnerability is the lack of a globally unique interpretation for permission specifications. Because of it, a malicious entity that receives a permission in one domain may misuse the permission in another domain via some deceptive but apparently authorized route; this malicious behaviour is called subterfuge. This thesis develops a secure approach, Subterfuge Safe Trust Management (SSTM), that prevents subterfuge by malicious entities. SSTM employs the Subterfuge Safe Authorization Language (SSAL), which uses the idea of a local permission with a globally unique interpretation (localPermission) to resolve the misinterpretation of permissions. We model and implement SSAL with an ontology-based approach, SSALO, which provides a generic representation of the knowledge related to an SSAL-based security policy. SSALO enables the integration of heterogeneous security policies, which is useful for secure cooperation among principals in open environments where each principal may have a different security policy with a different implementation. A further advantage of the ontology-based approach is the Open World Assumption, whereby reasoning over an existing security policy is easily extended to further security policies that might be discovered in an open distributed environment. We add two extra SSAL rules to support dynamic coalition formation and secure cooperation among coalitions. Secure federation of cloud computing platforms and secure federation of XMPP servers are presented as case studies of SSTM. The results show that SSTM provides robust accountability for the use of permissions in federation, and that SSAL is a suitable policy language for expressing subterfuge-safe policy statements thanks to its well-defined semantics, ease of use, and integrability.
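A hedged illustration of the localPermission idea (the names and checks below are invented, not SSAL syntax): qualifying every permission by its issuer gives it a globally unique interpretation, so a permission granted in one domain cannot be reinterpreted in another.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalPermission:
    issuer: str   # globally unique issuer identifier, e.g. a domain URI
    name: str     # local permission name, meaningful only to that issuer

def authorize(granted: set, requested: LocalPermission) -> bool:
    """Only the exact issuer-qualified permission matches."""
    return requested in granted

alice = {LocalPermission("https://domainA.example", "read")}
print(authorize(alice, LocalPermission("https://domainA.example", "read")))  # True
# The same local name from another domain does not match: no subterfuge.
print(authorize(alice, LocalPermission("https://domainB.example", "read")))  # False
```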