Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are one of the key foundations of smart grids. The Distributed Network Protocol version 3 (DNP3) is a standard SCADA protocol designed to facilitate communications in substations and smart grid nodes. The protocol is embedded with a security mechanism called Secure Authentication (DNP3-SA), which ensures that end-to-end communication security is provided in substations. This paper presents a formal model for the behavioural analysis of DNP3-SA using Coloured Petri Nets (CPN). Our DNP3-SA CPN model is capable of testing and verifying various attack scenarios (modification, replay, spoofing and a combined complex attack) as well as mitigation strategies. Using the model has revealed a previously unidentified flaw in the DNP3-SA protocol that can be exploited by an attacker with access to the network interconnecting DNP3 devices. Such an attacker can launch a successful attack on an outstation without possessing the pre-shared keys, by replaying a previously authenticated command with arbitrary parameters. We propose an update to the DNP3-SA protocol that removes the flaw and prevents such attacks. The update is validated and verified using our CPN model, demonstrating the effectiveness of the model and the importance of formal protocol analysis.
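To make the reported flaw concrete, the following minimal Python sketch illustrates why a MAC that does not bind the command parameters admits replays with arbitrary parameters, and how binding them (in the spirit of the proposed update) closes the gap. The message names, MAC truncation and API are illustrative assumptions, not the DNP3-SA wire format:

```python
import hmac, hashlib, os

PSK = os.urandom(16)  # stands in for the pre-shared key held by master and outstation

def mac(key: bytes, data: bytes) -> bytes:
    # Truncated HMAC, purely illustrative of a challenge-response MAC.
    return hmac.new(key, data, hashlib.sha256).digest()[:10]

# Simplified challenge-response: the outstation challenges, the master
# answers with a MAC over the challenge only.
challenge = os.urandom(4)
auth_reply = mac(PSK, challenge)  # observed and recorded by the attacker

def outstation_accepts(challenge: bytes, reply: bytes, command: bytes) -> bool:
    # Flawed check: the command parameters are never bound to the MAC.
    return hmac.compare_digest(reply, mac(PSK, challenge))

# Replaying the recorded reply with arbitrary parameters succeeds:
assert outstation_accepts(challenge, auth_reply, b"OPERATE relay 7")
assert outstation_accepts(challenge, auth_reply, b"OPERATE relay 9")

# Binding the command to the MAC rejects the tampered replay:
def outstation_accepts_fixed(challenge: bytes, reply: bytes, command: bytes) -> bool:
    return hmac.compare_digest(reply, mac(PSK, challenge + command))

assert not outstation_accepts_fixed(challenge, auth_reply, b"OPERATE relay 9")
```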
A LIN-inspired optical bus for signal isolation in multilevel or modular power electronic converters
Abstract:
Proposed in this paper is a low-cost, half-duplex optical communication bus for control signal isolation in modular or multilevel power electronic converters. The concept is inspired by the Local Interconnect Network (LIN) serial network protocol as used in the automotive industry. The proposed communication bus utilises readily available optical transceivers and is suitable for use with low-cost microcontrollers for distributed control of multilevel converters. As a signal isolation concept, the proposed optical bus enables very high cell count modular multilevel cascaded converters (MMCCs) for high-bandwidth, high-voltage and high-power applications. Prototype hardware is developed and the optical bus concept is validated experimentally in a 33-level MMCC operating at 120 Vrms and 60 Hz.
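The sketch below illustrates the LIN-style framing and classic LIN checksum that such a bus could borrow from the automotive protocol; it is a simplified illustration (break field and identifier parity bits omitted), not the authors' implementation:

```python
def lin_checksum(data: bytes) -> int:
    """Classic LIN checksum: 8-bit sum with carry wrap-around, then inverted."""
    total = 0
    for byte in data:
        total += byte
        if total > 0xFF:
            total -= 0xFF  # fold the carry back in
    return (~total) & 0xFF

def make_frame(ident: int, payload: bytes) -> bytes:
    # A LIN frame begins with a break (omitted here) and the 0x55 sync byte,
    # followed by a 6-bit identifier, the data bytes, and the checksum.
    return bytes([0x55, ident & 0x3F]) + payload + bytes([lin_checksum(payload)])

frame = make_frame(0x11, b"\x01\x02\x03")  # example cell-control message
```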
Abstract:
As an emerging technical discipline, Web GIS builds on the Internet environment and practical Internet technologies, and has become a growth point for theoretical research and technological innovation in geographic information systems. The continuing development and expansion of network technology and its application market will promote the formation of an emerging GIS information industry. Drawing on practical experience, this paper discusses the technical background and development of Web GIS, as well as fundamental topics such as system functionality, with emphasis on the architecture, implementation strategies and technical methods of Web GIS.
Abstract:
An internal communication bus system for an AUV based on the CAN protocol is designed. Through the modular, configurable design of its protocol converters, the system meets the AUV's requirement for an open internal communication bus. The fault-tolerant processing capability inside the protocol converter, together with the design of an emergency-event handling node, strengthens the reliability and fault tolerance of the AUV system and provides strong safeguards against losing the AUV in deep-sea operating environments.
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
Abstract:
In the modern society, communications and digital transactions are becoming the norm rather than the exception. As we allow networked computing devices into our every-day actions, we build a digital lifestyle where networks and devices enrich our interactions. However, as we move our information towards a connected digital environment, privacy becomes extremely important as most of our personal information can be found in the network. This is especially relevant as we design and adopt next generation networks that provide ubiquitous access to services and content, increasing the impact and pervasiveness of existing networks. The environments that provide widespread connectivity and services usually rely on network protocols that have few privacy considerations, compromising user privacy. The presented work focuses on the network aspects of privacy, considering how network protocols threaten user privacy, especially on next generation networks scenarios. We target the identifiers that are present in each network protocol and support its designed function. By studying how the network identifiers can compromise user privacy, we explore how these threats can stem from the identifier itself and from relationships established between several protocol identifiers. Following the study focused on identifiers, we show that privacy in the network can be explored along two dimensions: a vertical dimension that establishes privacy relationships across several layers and protocols, reaching the user, and a horizontal dimension that highlights the threats exposed by individual protocols, usually confined to a single layer. With these concepts, we outline an integrated perspective on privacy in the network, embracing both vertical and horizontal interactions of privacy. This approach enables the discussion of several mechanisms to address privacy threats on individual layers, leading to architectural instantiations focused on user privacy. We also show how the different dimensions of privacy can provide insight into the relationships that exist in a layered network stack, providing a potential path towards designing and implementing future privacy-aware network architectures.
Abstract:
A new technological area is undergoing rapid development. This area, called the Internet of Things, arises from the need to interconnect many objects in order to improve services or meet user needs. This dissertation focuses on one specific area of Internet of Things technology: sensing. The sensing network in question is deployed by the European project Future Cities [1], which builds an infrastructure for research and validation of smart projects and services in the city of Porto. The work carried out in this dissertation belongs to one of the platforms of that sensing network: the environmental sensor platform called UrbanSense. These environmental sensors, embedded in Data Collect Units (DCUs), also called nodes, measure environmental variables such as temperature, humidity, ozone and carbon monoxide. The nodes, however, have limited resources in terms of energy, processing and memory. Despite great advances in storage and processing, energy technology, namely batteries, has not evolved as markedly, which limits node operation [2]. This thesis focuses essentially on improving the energy performance of the UrbanSense sensor network. The main contribution is an adaptation of the Ad Hoc network protocol OLSR (Optimized Link State Routing Protocol) for use by nodes powered by renewable energy, so as to extend the useful life of the sensing network nodes. With this contribution it is possible to gather more data over longer periods, approximately 10 hours compared with the previous 7 hours, with data collected and sent at a higher rate, around 500 KB/s, thereby giving an analytical approximation of the various parameters present in the sensing network. However, extending the lifetime of the sensor nodes through renewable energy, namely solar energy, increases their weight and size, which limits their mobility and therefore demands prior planning of their placement. In a first phase of the work, the power consumption of the DCUs was analysed, since they form the basis of the infrastructure and communicate with one another over WiFi or 3G. After an analysis of routing protocols with support for energy parametrisation, the choice fell on OLSR because of its maturity and compatibility with the current DCU system; although other protocols exist, their implementations are not available as open-source software. To validate the work in this dissertation, a preliminary trial without renewable energy was carried out to characterise the system's limitations. This trial made it possible to verify the compatibility of the various components and to adjust strategies. In a second validation test, a real trial of the system was conducted with 4 communicating nodes using the energy-efficient protocol. The protocol is evaluated in terms of the increase in node lifetime and in transfer rate. The analysis and adaptation of the Ad Hoc network protocol provides greater longevity in terms of node lifetime, compared with the previous behaviour during data transmission.
Although longevity is lower when the energy parameter is left at its default value of 3, adapting the system to the available energy offers a higher transfer rate over a longer period. This is a favourable factor for opening up new services that send data in real time or transfer larger files.
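As an illustration of the kind of energy adaptation described above, the sketch below maps a node's energy state to the OLSR willingness parameter (0 to 7, with 3 as the default, per RFC 3626); the thresholds and the solar-charging input are hypothetical, not taken from the dissertation:

```python
# OLSR advertises each node's "willingness" to relay traffic (RFC 3626):
# 0 = WILL_NEVER ... 3 = WILL_DEFAULT ... 7 = WILL_ALWAYS.
WILL_NEVER, WILL_LOW, WILL_DEFAULT, WILL_HIGH, WILL_ALWAYS = 0, 1, 3, 6, 7

def willingness_from_energy(battery_pct: float, solar_charging: bool) -> int:
    """Map the node's energy state to an advertised willingness (thresholds hypothetical)."""
    if solar_charging and battery_pct > 80:
        return WILL_ALWAYS   # harvesting and nearly full: relay freely
    if battery_pct > 60:
        return WILL_HIGH
    if battery_pct > 30:
        return WILL_DEFAULT  # the default factor of 3 mentioned above
    if battery_pct > 10:
        return WILL_LOW
    return WILL_NEVER        # preserve the node: stop relaying for others
```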
Abstract:
TCP flows from applications such as the web or ftp are well supported by a Guaranteed Minimum Throughput Service (GMTS), which provides a minimum network throughput to the flow and, if possible, an extra throughput. We propose a scheme for a GMTS using Admission Control (AC) that is able to provide different minimum throughputs to different users and that is suitable for "standard" TCP flows. Moreover, we consider a multidomain scenario where the scheme is used in one of the domains, and we propose some mechanisms for the interconnection with neighbor domains. The whole scheme uses a small set of packet classes in a core-stateless network where each class has a different discarding priority in the queues assigned to it. The AC method involves only edge nodes and uses a special probing packet flow (marked as the highest discarding priority class) that is sent continuously from ingress to egress through a path. The available throughput in the path is obtained at the egress using measurements of flow aggregates, and then it is sent back to the ingress. At the ingress each flow is detected implicitly and then subjected to admission control. If it is accepted, it receives the GMTS and its packets are marked as the lowest discarding priority classes; otherwise, it receives a best-effort service. The scheme is evaluated through simulation in a simple "bottleneck" topology using different traffic loads consisting of "standard" TCP flows that carry files of varying sizes.
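A minimal sketch of the ingress-side admission test follows, assuming the egress feeds back the throughput measured on the probe aggregate; the class names, units and API are illustrative, not the paper's specification:

```python
class IngressAdmissionControl:
    """Edge-only admission test; names and units are illustrative."""

    def __init__(self) -> None:
        self.available_bps = 0.0  # latest estimate fed back from the egress
        self.reserved_bps = 0.0   # minimum throughput already guaranteed

    def on_egress_feedback(self, measured_probe_bps: float) -> None:
        # Probes travel at the highest discarding priority, so the rate that
        # survives to the egress approximates the spare capacity on the path.
        self.available_bps = measured_probe_bps

    def admit(self, min_throughput_bps: float) -> bool:
        # Accepted flows get the GMTS and lowest-discarding-priority marking;
        # rejected flows fall back to the best-effort service.
        if min_throughput_bps <= self.available_bps - self.reserved_bps:
            self.reserved_bps += min_throughput_bps
            return True
        return False
```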
Abstract:
Internet applications such as media streaming, collaborative computing and massive multiplayer games are on the rise. This leads to the need for multicast communication, but unfortunately group communication support based on IP multicast has not been widely adopted due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account in the assembly process of the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared and bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol has been evaluated in two ways. First, through a simulator developed in this work, where we evaluated the quality of the distribution tree using metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where we evaluated the network protocol performance using metrics such as stress, stretch, time to first packet and group reconfiguration time.
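The abstract does not detail the IPXY metric, so the sketch below only illustrates the general idea of proximity-aware, degree-bounded parent selection during membership, with a placeholder proximity function standing in for IPXY:

```python
from dataclasses import dataclass

@dataclass
class Member:
    addr: str
    children: int = 0  # current out-degree in the shared tree

def choose_parent(candidates: list[Member], proximity, max_outdegree: int = 4):
    """Join under the nearest member that still has spare out-degree."""
    eligible = [m for m in candidates if m.children < max_outdegree]
    return min(eligible, key=proximity, default=None)

# proximity would be derived from IPXY's locally processed information;
# here a constant stub stands in for the real metric.
members = [Member("10.0.0.1", children=2), Member("10.0.1.5", children=4)]
parent = choose_parent(members, proximity=lambda m: 0)
```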
Abstract:
Graduate program in Electrical Engineering - FEIS
Abstract:
Graduate program in Electrical Engineering - FEB
Abstract:
The limitations of current networking technologies, identified by the Defense Advanced Research Projects Agency (DARPA) in 1995, have led to a recent proposal of a new network model called Active Networks. In this model, the nodes provide an execution environment over which the code used to process each packet is executed. The objective is a network technology that allows the fast design and deployment of new network services without requiring the modification of the network nodes. One network service that could benefit from this technology is the transmission of multicast data with different types of loss tolerance. The current proposals for reliable multicast services provide specific solutions for each application class, and existing end-to-end protocols suffer from technical drawbacks related to limited reliability and the lack of an effective congestion control mechanism. This thesis contains original proposals that aim to solve part of the current drawbacks in the scope of Active Networks and reliable multicast with congestion control. Firstly, a generic reliable multicast network service will be specified.
This service will be designed from the requirements of a relevant set of applications, and will provide different session classes and different types of reliability. Then, a network protocol based on Active Network technology will be designed such that it provides the specified network service. This protocol will incorporate a congestion control mechanism capable of performing an automatic adjustment of the traffic injected by the source to the available network capacity. This thesis will also contribute to a deeper study and analysis of Active Network technology, by experimenting with the technology in order to provide feedback to its designers. This experimentation will address three different scopes: the services and protocols that Active Networks can support, the Active Network model and architecture, and the currently available Active Network execution environments. As an additional contribution of this work, the previous objectives will be validated through a prototype implementation of the protocol entities and the service interface based on one of the current execution environments.
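The abstract does not specify the congestion control algorithm; the step below is a generic additive-increase/multiplicative-decrease (AIMD) adjustment, shown only to illustrate how a source can adapt its injected traffic to the available capacity, with arbitrary constants:

```python
def adjust_rate(rate_bps: float, congestion_signal: bool,
                increase_bps: float = 50_000.0,
                decrease_factor: float = 0.5,
                floor_bps: float = 10_000.0) -> float:
    """One control step: back off multiplicatively on congestion, otherwise probe additively."""
    if congestion_signal:
        return max(floor_bps, rate_bps * decrease_factor)
    return rate_bps + increase_bps
```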
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of these are high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
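As an illustration of the property that predictable serialization targets, the sketch below serializes a record whose layout is fixed ahead of time, so the buffer size is known statically and no allocation grows at run time; the record fields are hypothetical, and the thesis' compiler-generated approach for Java is not reproduced here:

```python
import struct

# A record layout fixed ahead of time: the worst-case size is known
# statically, so the buffer is allocated once and reused per invocation.
RECORD = struct.Struct("!IddH")   # id, latitude, longitude, altitude (hypothetical)
BUFFER = bytearray(RECORD.size)   # preallocated, never resized at run time

def serialize(record_id: int, lat: float, lon: float, alt_m: int) -> bytes:
    # Packing into the fixed buffer keeps the operation's time and memory
    # usage bounded and analysable at compilation time.
    RECORD.pack_into(BUFFER, 0, record_id, lat, lon, alt_m)
    return bytes(BUFFER)
```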
Abstract:
Human communication capability has grown thanks to the evolution of mobile devices that are ever smaller, more manageable, more powerful, more autonomous and more affordable. This trend suggests that in the near future each person will carry at least one high-performance device. These devices incorporate several forms of communication: telephony networks, wireless networks and Bluetooth, among others, which also allows them to be used to configure mobile Ad Hoc networks.
Mobile Ad Hoc networks are temporary, self-configuring networks that do not need an access point for their nodes to exchange information. Each node performs routing tasks as required. Nodes can move, changing location at will. The autonomy of these devices depends on how their resources are used, so protocols, algorithms and models must be designed efficiently, without degrading device performance, seeking a balance between overhead and usability. It is important to define appropriate management of these networks, especially when they are used in critical scenarios such as emergencies, natural disasters or armed conflicts. The present research presents an efficient solution for managing mobile Ad Hoc networks. The solution comprises two main components: the definition of a management model for highly available mobile networks and the creation of a hierarchical routing protocol associated with the model. The proposed management model, called High Availability Management Ad Hoc Network (HAMAN), is defined in a four-level structure: access, distribution, intelligence and infrastructure. The components of each level (node types, protocols, node structure) are shown in detail. It also explores the communication interfaces between each component and their relationship with the defined levels. The proposed Ad Hoc routing protocol, called Backup Cluster Head Protocol (BCHP), uses clusters and hierarchies as its routing strategy. Each cluster has a cluster head that concentrates routing and management information and forwards it towards the destination when it lies outside the cluster's coverage area. To improve network availability, the protocol uses a Backup Cluster Head that assumes the functions of the cluster's primary node when that node fails. The HAMAN model is validated through simulation of its BCHP routing protocol. The BCHP protocol has been implemented in the Network Simulator 2 (NS2) tool and simulated, compared and contrasted with the hierarchical Cluster Based Routing Protocol (CBRP) and the reactive Ad Hoc routing protocol Ad Hoc On Demand Distance Vector Routing (AODV).
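A minimal sketch of the backup cluster head takeover idea follows, assuming the primary cluster head emits periodic beacons; the timeout value and API are hypothetical, not taken from the thesis:

```python
import time

class BackupClusterHead:
    """Sketch of BCHP-style failover; the 3-second timeout is hypothetical."""

    def __init__(self, timeout_s: float = 3.0):
        self.timeout_s = timeout_s
        self.last_beacon = time.monotonic()
        self.active = False

    def on_beacon_from_primary(self) -> None:
        # The primary cluster head is alive; stand down if we had taken over.
        self.last_beacon = time.monotonic()
        self.active = False

    def tick(self) -> None:
        # If the primary falls silent, assume its routing and management
        # duties so the cluster remains available.
        if time.monotonic() - self.last_beacon > self.timeout_s:
            self.active = True
```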