838 results for network congestion control


Relevance:

80.00%

Publisher:

Abstract:

Differential geometry is used to investigate the structure of neural-network-based control systems. The key aspect is relative order—an invariant property of dynamic systems. Finite relative order allows the specification of a minimal architecture for a recurrent network. Any system with finite relative order has a left inverse. It is shown that a recurrent network with finite relative order has a local inverse that is also a recurrent network with the same weights. The results have implications for the use of recurrent networks in the inverse-model-based control of nonlinear systems.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, a new model-based proportional–integral–derivative (PID) tuning and control approach is introduced for Hammerstein systems identified from observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The control signal is composed of a PID controller together with a correction term. Both the PID parameters and the correction term are optimized by minimizing the multistep-ahead prediction errors. To update the control signal, the multistep-ahead predictions of the B-spline-based Hammerstein model and the associated Jacobian matrix are calculated using the de Boor algorithms, including both the functional and derivative recursions. Numerical examples demonstrate the efficacy of the proposed approach.
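As an illustration of the functional de Boor recursion mentioned in this abstract, the sketch below evaluates a B-spline curve of the kind used to model the Hammerstein static nonlinearity. The knot vector, control weights, and degree are made-up example values, not parameters from the paper.

    # Illustrative sketch: evaluating a B-spline via de Boor's functional recursion.

    def find_span(x, knots, degree):
        """Return index k with knots[k] <= x < knots[k+1] (clamped to the valid range)."""
        n = len(knots) - degree - 2          # index of the last control point
        for k in range(degree, n + 1):
            if knots[k] <= x < knots[k + 1]:
                return k
        return n                             # x at (or beyond) the right end

    def de_boor(x, knots, coeffs, degree):
        """Evaluate sum_i coeffs[i] * B_{i,degree}(x) at x."""
        k = find_span(x, knots, degree)
        d = [coeffs[j + k - degree] for j in range(degree + 1)]
        for r in range(1, degree + 1):
            for j in range(degree, r - 1, -1):
                denom = knots[j + 1 + k - r] - knots[j + k - degree]
                alpha = (x - knots[j + k - degree]) / denom
                d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
        return d[degree]

    # Example: cubic B-spline on [0, 1] with a clamped knot vector.
    knots  = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
    coeffs = [0.0, 0.3, 1.2, 0.8, 0.5, 0.9, 1.0]   # one weight per basis function
    print(de_boor(0.4, knots, coeffs, degree=3))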

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this degree project was to analyse the endpoint security solutions developed by Cisco, Microsoft, and a smaller third vendor, InfoExpress. The solutions examined are Cisco Network Admission Control, Microsoft Network Access Protection, and InfoExpress CyberGatekeeper. The report explains how each solution works and analyses the differences between them. This thesis also provides a tutorial for the installation of Cisco Network Admission Control to ease its implementation. The research was carried out by reviewing articles on the internet and by experimenting with the Cisco Network Admission Control solution; my background knowledge of Cisco routing and ACLs was also used. Based on the analysis done in this thesis, the conclusion is that none of the existing solutions is yet ready for large-scale use in corporate networks. Moreover, all solutions are proprietary and mutually incompatible. A future standard for endpoint solutions is likely to be driven by Cisco and Microsoft, and a fierce competition is beginning between these two giants.

Relevance:

80.00%

Publisher:

Abstract:

There is a dissonance between the dominant theory of competition among telephone networks and the empirical evidence. The theory predicts that mobile networks will set the interconnection tariff below the marginal cost of call termination, whereas diverse empirical evidence shows that mobile interconnection tariffs are higher and that regulatory agencies meet resistance from mobile operators when applying policies to reduce interconnection tariffs. This work develops a model, based on Hoernig (2010), that yields results more consistent with the evidence that there are incentives to price interconnection tariffs above marginal cost. The model proposed here innovates with respect to Hoernig (2010) by assuming that mobile networks compete with fixed-line telephony, which is subject to interconnection-tariff regulation; this is a quite plausible representation given the development of mobile telephony. The model also considers the effect of one of the mobile operators having its control shared with the fixed-line operator. Owing to the assumption that fixed and mobile telephony compete in the same market, the general result is that mobile networks will set the interconnection tariff above the marginal cost of call termination.

Relevance:

80.00%

Publisher:

Abstract:

This research aims to develop a neural predictive control structure to control a pH process, characterized as a SISO (Single Input - Single Output) system. pH control is a process of great importance in the petrochemical industry, where the goal is to keep the acidity level of a product constant or to neutralize the influent of a fluid treatment plant. The pH control process demands robustness from the control system, since the process may exhibit nonlinear static gain and nonlinear dynamics. The neural predictive controller draws on two other theories, predictive control and artificial neural networks (ANNs), and can be divided into two blocks, one responsible for identification and the other for computing the control signal. Neural identification uses a multilayer feedforward ANN trained with the Error Back Propagation method. Offline training of the network starts from the plant's input and output data; the synaptic weights are adjusted so that the network can represent the system as accurately as possible. The resulting neural model is used to predict the future outputs of the system, and the optimizer then computes a sequence of control actions by minimizing a quadratic objective function, making the process output follow a desired reference signal. Two applications were developed, both on the Builder C++ platform: the first performs neural-network identification and the second is responsible for process control. The tools implemented and applied here are generic, and both allow the control structure to be applied to any new process.
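A minimal sketch of the predictive loop described in this abstract, assuming a hypothetical one-step-ahead neural model (neural_model) standing in for the trained ANN; the horizon, model weights, and tuning constants are illustrative, not the values used in the work.

    import numpy as np
    from scipy.optimize import minimize

    def neural_model(y_prev, u):
        """Hypothetical one-step-ahead model y(k+1) = f(y(k), u(k))."""
        return 0.9 * y_prev + 0.2 * np.tanh(u)

    def predict(y0, u_seq):
        """Roll the one-step model forward over the candidate control sequence."""
        y, traj = y0, []
        for u in u_seq:
            y = neural_model(y, u)
            traj.append(y)
        return np.array(traj)

    def mpc_cost(u_seq, y0, ref, lam=0.1):
        """Quadratic objective: tracking error plus a control-effort penalty."""
        y_pred = predict(y0, u_seq)
        du = np.diff(np.concatenate(([0.0], u_seq)))
        return np.sum((ref - y_pred) ** 2) + lam * np.sum(du ** 2)

    N = 10                                    # prediction/control horizon
    y_now, setpoint = 5.0, 7.0                # current pH and desired pH
    res = minimize(mpc_cost, np.zeros(N), args=(y_now, setpoint))
    u_apply = res.x[0]                        # receding horizon: apply only the first move
    print("control action:", u_apply)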

Relevance:

80.00%

Publisher:

Abstract:

Reliable data transfer is one of the most difficult tasks to accomplish in multihop wireless networks. Traditional transport protocols like TCP face severe performance degradation over multihop networks, given the noisy nature of wireless media and the unstable connectivity conditions in place. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge faced by TCP over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which implements a retransmission mechanism at the link level in addition to short RTS/CTS control frames for avoiding collisions. These features render TCP acknowledgment (ACK) transmission quite costly: data and ACK packets cause similar medium-access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and the consequent collisions. Our approach resembles sender-side congestion control: the receiver self-adapts by delaying more ACKs when the channel is not constrained and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations exhibit significant improvement in several scenarios.
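The receiver-side idea can be sketched as follows, assuming a simple packets-per-ACK factor driven by an observed loss rate; the thresholds and the way channel state is sensed are illustrative assumptions, not the paper's parameters.

    # Sketch of an adaptive delayed-ACK receiver: fewer ACK frames compete for
    # the 802.11 medium when the channel is lightly loaded, more when losses appear.

    class AdaptiveDelayedAck:
        def __init__(self, max_delay=4):
            self.max_delay = max_delay    # upper bound on data packets per ACK
            self.delay_window = 1         # current packets-per-ACK factor
            self.unacked = 0

        def update_channel_state(self, loss_rate):
            """Grow the ACK delay on a clean channel, shrink it when losses appear."""
            if loss_rate < 0.01 and self.delay_window < self.max_delay:
                self.delay_window += 1
            elif loss_rate > 0.05:
                self.delay_window = max(1, self.delay_window // 2)

        def on_data_packet(self):
            """Return True when a cumulative ACK should be sent now."""
            self.unacked += 1
            if self.unacked >= self.delay_window:
                self.unacked = 0
                return True
            return False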

Relevance:

80.00%

Publisher:

Abstract:

The Mobile Mesh Network-based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network to forward tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, the energy-efficient communication of the MMN-ITV system must be investigated thoroughly. First of all, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is identified that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For the MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best trade-off between them. Secondly, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate without causing saturation), instead of the delay itself. Based on the above model, it can be determined that adopting a uniform transmission range for every vehicle may hinder delay-performance improvement, since it does not allow the coexistence of short path lengths and low interference. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity. In addition, it provides an improved trade-off between the end-to-end delay and the network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
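A rough sketch of how an energy-threshold next-hop decision of the kind described above might look: behave like shortest-path routing while the best candidate has ample residual energy, and fall back to a load-balancing choice otherwise. The neighbour tuple format and the 0.3 threshold are assumptions made for illustration only.

    def pick_next_hop(neighbors, energy_threshold=0.3):
        """neighbors: list of (node_id, hops_to_sink, residual_energy in [0, 1])."""
        best = min(neighbors, key=lambda n: n[1])          # shortest-path choice
        if best[2] >= energy_threshold:
            return best[0]
        # Load-balanced fallback: among neighbours at most one hop farther,
        # pick the one with the most remaining energy.
        candidates = [n for n in neighbors if n[1] <= best[1] + 1]
        return max(candidates, key=lambda n: n[2])[0]

    print(pick_next_hop([("A", 2, 0.1), ("B", 3, 0.8), ("C", 2, 0.4)]))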

Relevance:

80.00%

Publisher:

Abstract:

A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes and to recover the quality level of multiple delivered video flows under dynamic network topologies. The user experience when watching live videos must also remain satisfactory even in scenarios with network congestion, buffer overflow, and high packet loss, as experienced in many FANET multimedia applications. In this paper, we perform a comparative simulation study to assess the robustness, reliability, and quality level of videos transmitted via well-known beaconless opportunistic routing protocols. Simulation results show that our developed protocol, XLinGO, achieves multimedia dissemination with Quality of Experience (QoE) support and robustness in multi-hop, multi-flow, and mobile networks, as required in many multimedia FANET scenarios.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents an ontology-based multi-technology platform as part of an open energy management system that also comprises a wireless transducer network for control and monitoring. The platform allows the integration of several building automation protocols, eases the development and implementation of different kinds of services, and allows a building's data to be shared. The system has been implemented and tested in the Energy Efficiency Research Facility at CeDInt-UPM.

Relevance:

80.00%

Publisher:

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled their application in a variety of domains, including forest ecosystem monitoring, disaster and emergency management, home automation, industrial automation, smart city services, building energy efficiency, intrusion detection, and body signal monitoring, among many others. The drawback of an IoT network is that, once deployed, it is left unattended: it is subject, among other things, to changing weather conditions and is exposed to natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes that make up an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as network "resilience". Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant and to be self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise during operation. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion, the destruction of a node, and so on. It therefore becomes necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis centres on the study of the IoT connectivity problem, more specifically on the development of models for the analysis and management of resilience, put into practice through WSNs, in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles: on the one hand, unlike other kinds of conventional device networks, the nodes in an IoT network are prone to losing connectivity because they are deployed in isolated environments or under extreme conditions; on the other hand, nodes are usually resources with low capabilities in terms of processing, storage, and battery, among others, which requires the design of their resilience management to be lightweight, distributed, and energy-efficient. In this sense, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, fuzzy-logic techniques and proportional-integral-derivative (PID) control techniques are used to improve network connectivity, bearing in mind that energy consumption must be preserved as much as possible.
Likewise, it has been taken into account that the control algorithm must be distributed because, in general, centralized approaches are not feasible for large-scale deployments. This thesis work involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, a proposal for a self-adaptive control system responding to node failures, the optimization of the control system parameters, validation through an implementation following a software engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in the neighbourhood of the node in question) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of network nodes in response to failures, taking energy consumption into account as part of the control objectives. This work also carries out an evaluation and comparison against other representative algorithms, showing that the developed approach tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has allowed the control parameters of large dynamic networks to be optimized. With respect to the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to the theoretical research and practical application of resilient topology control in large distributed networks. The presented designs and algorithms can be seen as novel trials of some techniques for the coming era of the IoT. The main contributions of this thesis are summarized below: (1) Properties related to network connectivity have been analyzed mathematically. It is studied, for example, how the probability of network connectivity varies when the communication range of the nodes is modified, and what the minimum number of nodes is that must be added to a disconnected system to reconnect it. (2) Fuzzy-logic control systems have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulations, and the results have been compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated in more depth, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy (CE) method, Particle Swarm Optimization (PSO), and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated through simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks. So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing IoT, which can further be broken down into "self-*" properties, such as self-adaptive, self-healing, self-configuring, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for analysis and management of resilience, implemented through the Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of the resilience management for IoT has to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. The fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, meanwhile taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because the centralized approaches are usually not feasible in large scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control system parameter optimization, implementation using the software engineering approach, and evaluation in a real application. This thesis also justifies the relations between the "node degree" (the number of neighbor(s) of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of the IoT nodes in response to node failures.
The controllers also take into consideration the energy consumption as part of the control goals. The evaluation is performed and comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of the bio-inspired algorithms allows optimizing the parameters of the controller. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, especially reconfiguring the software components at runtime when failures occur. The outcomes of this thesis contribute to theoretic research and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of the control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what is the minimum number of neighbors to be added for network re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when it is subject to node failures. There are different types of fuzzy-logic controllers evaluated by simulations, and the results demonstrate the improvement of fault-tolerant capability as compared to some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system is further investigated, and its control parameters are optimized by using some heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software technique and the control algorithms which are presented in this thesis.
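A minimal sketch of the degree-based topology control described above, assuming a per-node PID loop that steers transmission power toward a target node degree; the gains, power limits, and sampling period are illustrative, not the optimized values from the thesis.

    class DegreePID:
        def __init__(self, kp=0.5, ki=0.1, kd=0.05, dt=1.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_degree, measured_degree, tx_power,
                 p_min=-10.0, p_max=20.0):
            """Return the new transmission power in dBm (clamped to radio limits)."""
            error = target_degree - measured_degree
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            tx_power += self.kp * error + self.ki * self.integral + self.kd * derivative
            return min(p_max, max(p_min, tx_power))

    # Each node runs its own controller (distributed), e.g. once per beacon period:
    ctrl = DegreePID()
    power = 0.0
    for measured in [2, 3, 3, 4, 5]:          # neighbour counts observed over time
        power = ctrl.step(target_degree=5, measured_degree=measured, tx_power=power)
    print("tx power (dBm):", round(power, 2))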

Relevance:

80.00%

Publisher:

Abstract:

The performance and features of today's mobile devices place them at a level similar to traditional desktop computers in terms of functionality and possible uses, adding, moreover, mobility and the sense of ownership that derives from it. These qualities turn mobile computing platforms into true personal computers, and their use in areas other than leisure and communications proper grows more popular every day, turning them into productivity tools in professional and corporate environments as well. Using mobile devices as part of a telecommunications infrastructure gives rise to new expressions of classic management and security problems. To tackle them with the necessary flexibility and scalability, novel alternatives are proposed that start from original approaches to these problems, such as the ideas and concepts encompassed by the philosophy of Network Access Control (NAC). On the security side, most NAC approaches are based on checking certain characteristics of the mobile device to determine to what extent it may pose a threat to network resources or to other users of the network. Obtaining this information reliably is extremely difficult if the device is characterized using a white-box model, which is highly appropriate given the openness of today's mobile operating systems, very different from those of the past, and the absence of an effective security framework in them. This work explores the state of the art in this field of research and presents different proposals aimed at covering the deficiencies of the solutions proposed so far and at satisfying the strict security requirements that derive from the application of the white-box model, ultimately materializing in the definition of a mechanism for evaluating arbitrary characteristics of a given mobile device, based on Trusted Execution Environments (TEEs), with high security guarantees and compatible with current NAC approaches. ABSTRACT The performance and features of today's mobile devices make them able to compete with traditional desktop computers in terms of functionality and possible usages. In addition to this, they sport mobility and the stronger sense of ownership that derives from it. These attributes change mobile computation platforms into truly personal computers, allowing them to be used not only for leisure or as mere communications devices, but also as supports of productivity in professional and corporate environments. The utilization of mobile devices as part of a telecommunications infrastructure brings new expressions of classic management and security problems with it. In order to tackle them with appropriate flexibility and scalability, new alternatives are proposed based on original approaches to these problems, such as the concepts and ideas behind the philosophy of Network Access Control (NAC). The vast majority of NAC proposals are based, security-wise, on checking certain properties of the mobile device in order to evaluate how probable it is for it to become a threat for network resources or even other users of the infrastructure.
Obtaining this information in a reliable and trustworthy way is extremely difficult if the device is characterized using a white-box model, which is the most appropriate given the openness of today's mobile operating systems, very different from former ones, and the absence of an effective security framework in them. This work explores the state of the art in the aforementioned field of research and presents different proposals targeted at overcoming the deficiencies of current solutions and satisfying the strict security requirements derived from the application of the white-box model. These proposals are ultimately materialized in the definition of a high-security evaluation procedure for arbitrary properties of a given mobile device, based on Trusted Execution Environments (TEEs) and compatible with modern NAC approaches.

Relevance:

80.00%

Publisher:

Abstract:

One of the main features of virtualization technology is Live Migration, which allows virtual machines to be moved between physical machines without interrupting their execution. This capability enables more sophisticated policies within a cloud computing environment, such as the optimization of power and computational resource usage. However, Live Migration can impose severe performance degradation on the applications running in the virtual machines and cause several impacts on the service providers' infrastructure, such as network congestion and interference with virtual machines co-located on the physical hosts. Unlike many other studies, this study considers the virtual machine's workload an important factor and argues that choosing the right moment to migrate the virtual machine can reduce the penalties imposed by Live Migration. This work introduces Application-aware Live Migration (ALMA), which intercepts Live Migration requests and, based on the application's workload, defers the migration to a more favourable moment. The experiments conducted in this work showed that the architecture reduced migration time by up to 74% in the benchmark experiments and by up to 67% in the experiments with real workloads. The data transfer caused by Live Migration was reduced by up to 62%. In addition, this work introduces a model that predicts the cost of Live Migration for a given workload, as well as a migration algorithm that is not sensitive to the virtual machine's memory usage.
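A sketch of the deferral idea behind ALMA, assuming a hypothetical hypervisor handle that exposes a memory-dirtying-rate counter; the interface, thresholds, and timing below are illustrative only, not the architecture's actual components.

    import random
    import time

    class _DemoVM:
        """Stand-in for a real hypervisor handle (hypothetical interface)."""
        def dirty_pages_per_sec(self):
            return random.randint(0, 10000)   # fake workload sample
        def start_live_migration(self):
            print("live migration triggered")

    def migrate_when_favourable(vm, dirty_rate_threshold=5000,
                                check_interval=1, max_wait=10):
        """Defer migration until the dirtying rate drops or the deadline expires."""
        waited = 0
        while waited < max_wait and vm.dirty_pages_per_sec() > dirty_rate_threshold:
            time.sleep(check_interval)
            waited += check_interval
        vm.start_live_migration()

    migrate_when_favourable(_DemoVM())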

Relevance:

80.00%

Publisher:

Abstract:

Given the popularity of TCP as still the dominant protocol of choice for transporting data reliably across the heterogeneous Internet, this thesis explores end-to-end performance issues and behaviours of TCP senders when transferring data to wireless end-users. The theme throughout is on end-users located specifically within 802.11 WLANs at the edges of the Internet, a largely untapped area of work. To serve researchers wanting to study the performance of TCP accurately over heterogeneous conditions, this thesis proposes a flexible wired-to-wireless experimental testbed that better reflects conditions in the real world. To examine the interaction between TCP in the wired domain and the IEEE 802.11 WLAN protocols, this thesis proposes a more accurate methodology for gauging the transmission and error characteristics of real-world 802.11 WLANs, and aims to correlate any findings with the functionality of fixed TCP senders. Given the popularity of Linux as the operating system of many of the Internet's data servers, this thesis studies and evaluates various sender-side TCP congestion control implementations within the recent Linux v2.6. A selection of the implementations is put under systematic testing using real-world wired-to-wireless conditions in order to screen and present viable candidates for further development and usage in the modern-day heterogeneous Internet. Overall, this thesis comprises a set of systematic evaluations of TCP senders over 802.11 WLANs, incorporating measurements in the form of simulations, emulations, and the use of a real-world-like experimental testbed. The goal of the work is to ensure that all aspects concerned are comprehensively investigated in order to establish rules that can help to decide under which circumstances the deployment of TCP is optimal, i.e. a set of paradigms for advancing the state of the art in data transport across the Internet.
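As a companion illustration (not part of the thesis), on a Linux sender the per-socket congestion control module can be selected with the TCP_CONGESTION socket option, which is one way different sender-side implementations can be compared over the same wired-to-wireless path. This requires Linux and Python 3.6 or later, and the chosen algorithm must be available on the host.

    import socket

    # Which congestion control modules the kernel currently offers:
    with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
        print("available:", f.read().strip())

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Pick a sender-side algorithm for this socket only:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    name = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("in use:", name.split(b"\x00")[0].decode())
    sock.close()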

Relevance:

80.00%

Publisher:

Abstract:

A major drawback of artificial neural networks is their black-box character; rule extraction algorithms are therefore becoming more and more important for explaining the rules extracted from neural networks. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained on the desired function. The method is based on the weights of the trained neural network and allows knowledge extraction from neural networks with continuous inputs and outputs, as well as rule extraction. An example application is shown, based on the extraction of the average load demand of a power plant.