960 results for Network architecture


Relevance: 60.00%

Publisher:

Abstract:

Artificial neural networks have proven to be a powerful technique for solving a wide variety of optimization problems. This dissertation develops a new recurrent-type neural network, without self-feedback loops and without hidden neurons, for processing the seismic signal and providing the temporal position, the polarity, and the estimated amplitudes of the seismic reflectors, represented by their reflection coefficients. The main characteristic of this new neural network is the type of activation function used, which allows three possible states for the neuron. The goal is to estimate the position of the seismic reflectors and to reproduce the true polarities of those reflectors. The basic idea of this new type of network, here called a discrete neural network (DNN), is to relate an objective function, which describes the geophysical problem, to the Lyapunov function, which describes the dynamics of the neural network. In this way, the network dynamics leads to a local minimization of its Lyapunov function and consequently to a minimization of the objective function. Thus, with a convenient encoding of the network's output signal, a solution of the geophysical problem is obtained. The operational evaluation of this artificial neural network architecture is carried out on synthetic data generated with the simple convolutional model and with ray theory, in order to characterize the behavior of the network with noise-contaminated data and with minimum-, maximum-, and mixed-phase source pulses.
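The energy-minimization idea behind such a three-state network can be sketched with a small Hopfield-style model. This is an illustrative reconstruction, not the dissertation's exact formulation: the threshold theta, the penalty term in the energy, and all matrix values are assumptions.

```python
import numpy as np

def activation(u, theta=0.5):
    # Three-state activation: the neuron output is -1, 0 or +1
    # (the threshold theta is an assumed parameter).
    return 1.0 if u > theta else (-1.0 if u < -theta else 0.0)

def energy(W, b, s, theta=0.5):
    # Lyapunov-style energy; with symmetric W, zero diagonal and the
    # theta*sum|s| penalty, asynchronous updates never increase it.
    return -0.5 * s @ W @ s - b @ s + theta * np.abs(s).sum()

def run_network(W, b, s, theta=0.5, sweeps=100):
    # Asynchronous (one neuron at a time) updates until a fixed point.
    s = s.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            new = activation(W[i] @ s + b[i], theta)
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            break
    return s

rng = np.random.default_rng(0)
n = 12
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                  # symmetric coupling
np.fill_diagonal(W, 0.0)           # no self-feedback loops
b = rng.normal(size=n)
s0 = rng.integers(-1, 2, size=n).astype(float)
s_final = run_network(W, b, s0)
e_drop = energy(W, b, s0) - energy(W, b, s_final)
```

Because each accepted state change lowers the energy, the dynamics settles in a local minimum, which is where the encoded geophysical solution is read off.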


Despite the technological advances in seismic prospecting, with routine 2D and 3D surveys and a significant increase in data volume, the identification of the arrival times of the direct seismic wave (first break), which propagates directly from the shot point to the geophone arrays, still depends on the visual evaluation of the seismic interpreter. The objective of this dissertation lies within seismic processing: to find an efficient method that allows the computational simulation of the seismic interpreter's visual behavior, by automating the decision-making processes involved in identifying the first breaks in a seismic trace. The ultimate aim is to reserve the interpreter's intuitive knowledge for the complex cases, in which it is effectively put to best use. Recent advances in neurocomputing have produced techniques that make it possible to simulate the qualitative aspects involved in the visual processes of seismic identification or interpretation, with acceptable quality of results. Artificial neural networks are an implementation of neurocomputing and were initially developed by neurobiologists as computational models of the human nervous system. They differ from conventional computational techniques in their ability to adapt or learn through repeated exposure to examples, their tolerance to missing data components, and their robustness in handling noise-contaminated data. The method presented here is based on applying artificial neural networks to the identification of first breaks in seismic traces, by establishing a convenient architecture for a feedforward neural network trained with the error backpropagation algorithm.
The artificial neural network is understood here as a computational simulation of the intuitive decision-making process performed by the seismic interpreter when identifying first breaks in seismic traces. The applicability, efficiency, and limitations of this approach are evaluated on synthetic data obtained from ray theory.
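A minimal version of this scheme, a small feedforward network trained with error backpropagation to flag trace windows that contain a signal onset, can be sketched as follows. The window size, wavelet shape, and network dimensions are illustrative assumptions, not the architecture developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_windows(n, size=8):
    # Synthetic trace windows: pure noise (label 0, before the first
    # break) or noise plus a decaying wavelet onset (label 1).
    X, y = [], []
    t = np.arange(size)
    wavelet = np.sin(2 * np.pi * t / size) * np.exp(-0.2 * t)
    for _ in range(n):
        noise = rng.normal(0.0, 0.1, size)
        if rng.random() < 0.5:
            X.append(noise); y.append(0.0)
        else:
            X.append(noise + wavelet); y.append(1.0)
    return np.array(X), np.array(y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain error backpropagation.
X, y = make_windows(400)
W1 = rng.normal(0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, 6);      b2 = 0.0
lr = 0.5
for epoch in range(300):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                          # output error (backward pass)
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean()
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

X_test, y_test = make_windows(200)
pred = (sigmoid(sigmoid(X_test @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = float((pred == y_test).mean())
```

In a picking application, the first window along a trace classified as an onset would mark the first break.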


This paper presents the new active absorption wave basin, named Hydrodynamic Calibrator (HC), constructed at the University of São Paulo (USP), in the laboratory facilities of the Numerical Offshore Tank (TPN). The square (14 m × 14 m) tank is able to generate and absorb waves from 0.5 Hz to 2.0 Hz by means of 148 active hinged-flap wave makers. An independent mechanical system drives each flap by means of a 1 HP servo motor and a ball-screw transmission system. A customized ultrasonic wave probe installed in each flap measures the wave elevation at the flap. A complex automation architecture was implemented, with three Programmable Logic Controllers (PLCs), and low-level software is responsible for all the interlocks and maintenance functions of the tank. Furthermore, all the control algorithms for generation and absorption are implemented in higher-level software (MATLAB/Simulink block diagrams). These algorithms calculate the motions of the wave makers both to generate and to absorb the required wave field, taking into account the layout of the flaps and the limits of wave generation. The experimental transfer function that relates the flap amplitude to the wave elevation amplitude is used to calculate the motion of each flap. This paper describes the main features of the tank, followed by a detailed presentation of the whole automation system, including the measuring devices, signal conditioning, PLC and network architecture, real-time and synchronizing software, and the motor control loop. Finally, the whole automation system is validated by means of the experimental analysis of the transfer function of the generated waves and the calculation of all the delays introduced by the automation system.
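The experimental transfer function mentioned above, the complex ratio between wave elevation and flap motion at each frequency, can be estimated from the ratio of Fourier components. The sketch below uses a synthetic stand-in for one flap; the gain and delay are made-up numbers, not HC calibration data.

```python
import numpy as np

# Synthetic stand-in for one flap: the wave maker is driven sinusoidally
# and the probe measures the elevation with a known gain and delay.
fs = 100.0                         # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
f0 = 1.0                           # drive frequency within the 0.5-2.0 Hz band
true_gain, true_delay = 0.8, 0.05  # assumed values for the demonstration
flap = 0.1 * np.sin(2 * np.pi * f0 * t)
elevation = true_gain * 0.1 * np.sin(2 * np.pi * f0 * (t - true_delay))

# Estimate the transfer function at f0 from the ratio of Fourier
# components of measured elevation and commanded flap motion.
F = np.fft.rfft(flap)
E = np.fft.rfft(elevation)
k = int(round(f0 * len(t) / fs))   # FFT bin of the drive frequency
H = E[k] / F[k]
gain_est = abs(H)
delay_est = -np.angle(H) / (2 * np.pi * f0)
```

Repeating this over the 0.5–2.0 Hz band yields the amplitude and delay curves used to command each flap, and the estimated delay is exactly the kind of automation-system latency the paper quantifies.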


This thesis describes the formation of functional neuronal networks on an underlying micropattern. Small circuits of interconnected neurons defined by the geometry of the patterned substrate could be observed and were utilised as a model system of reduced complexity for studying neuronal network formation and activity. The first set of experiments investigated aspects of substrate preparation. Micropatterned substrates were created by microcontact printing of physiological proteins onto polystyrene culture dishes. The substrates displayed a high contrast between the repellent background and the cell-attracting pattern, such that neurons seeded onto these surfaces aligned with the stamped structure. Both the patterning process and the cell culture were optimised, yielding highly compliant low-density networks of living neuronal cells. In the second step, the physiology of cells grown on these substrates was investigated by patch-clamp measurements and compared to cells cultivated under control conditions. It could be shown that growth on a patterned substrate neither impaired cellular integrity nor affected synapse formation or synaptic efficacy. Owing to the extremely low-density cell culture, cellular connectivity through chemical synapses could be observed at the single-cell level. Having established that single cells were not negatively affected by growth on patterned substrates, aspects of network formation were investigated. The formation of physical contact between two cells was analysed through microinjection studies and related to the rate at which functional synaptic contacts formed between two neighbouring cells. Surprisingly, the rate of synapse formation between physically contacting cells was shown to be unaltered in spite of the drastic reduction of potential interaction partners on the micropattern.
Additional features of network formation were investigated and found consistent with results reported by other groups: A different rate of synapse formation by excitatory and inhibitory neurons could be reproduced as well as a different rate of frequency-dependent depression at excitatory and inhibitory synapses. Furthermore, regarding simple feedback loops, a significant enrichment of reciprocal connectivity between mixed pairs of excitatory and inhibitory neurons relative to uniform pairs could be demonstrated. This phenomenon has also been described by others in unpatterned cultures [Muller, 1997] and may therefore be a feature underlying neuronal network formation in general. Based on these findings, it can be assumed that inherent features of neuronal behaviour and cellular recognition mechanisms were found in the cultured networks and appear to be undisturbed by patterned growth. At the same time, it was possible to reduce the complexity of the forming networks dramatically in a cell culture on a patterned surface. Thus, features of network architecture and synaptic connectivity could be investigated on the single cell level under highly defined conditions.


The aim of tissue engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore-network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with more elaborate pore-network architectures and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape-design approach to the production of open porous materials (titanium foams), which uses spatial periodicity as a simple way to generate the models. From among the various pore architectures that have been studied, this work models the pore structure with triply periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold design. A set of tissue scaffolds using TPMS-based unit-cell libraries was designed. The TPMS-based titanium foams were intended to be three-dimensionally printed with the predicted geometry and microstructure, and consequently the predicted mechanical properties. Through finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analysed in terms of their porosity and assemblies of unit cells. The purpose of this work was to investigate the mechanical performance of the TPMS models, seeking the best compromise between the mechanical and geometrical requirements of the scaffolds.
The intention was to predict the structural modulus in open porous materials via structural design of interconnected three-dimensional lattices, hence optimising geometrical properties. With the aid of FEA results, it is expected that the effective mechanical properties for the TPMS-based scaffold units can be used to design optimized scaffolds for tissue engineering applications. Regardless of the influence of fabrication method, it is desirable to calculate scaffold properties so that the effect of these properties on tissue regeneration may be better understood.
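As a minimal illustration of how a TPMS level set controls scaffold porosity, the sketch below samples the gyroid (one of the classical TPMS) on a unit cell and computes the pore fraction. The solid/pore convention and the level values are assumptions for the example, not the study's design parameters.

```python
import numpy as np

# Sample the gyroid implicit function on one periodic unit cell.
n = 64
x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n, endpoint=False)] * 3,
                      indexing="ij")
gyroid = (np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z)
          + np.sin(z) * np.cos(x))

def porosity(level):
    # Solid where gyroid < level; pore space elsewhere (an assumed
    # convention). The level-set constant tunes the volume fraction.
    solid = gyroid < level
    return 1.0 - float(solid.mean())

p0 = porosity(0.0)       # the symmetric gyroid splits space in half
p_high = porosity(-0.6)  # thinner solid walls -> higher porosity
```

Sweeping the level-set constant is the simple knob by which such unit-cell libraries trade porosity against wall thickness, and hence against the stiffness computed in the FEA.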


The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma, and input units represented by the dendritic branches. Although this interpretation endows a neuron with a high computational power, it is functionally not clear why nature would have preferred the dendritic solution with a single but complex neuron, as opposed to the network solution with many but simple units. We show that the dendritic solution has a distinguished advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive an immediate feedback from the somatic output spike, while in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario we formally derive a learning rule for the synaptic contacts on the individual dendritic trees which depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model for two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback represented by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
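The learning rule described above can be sketched in a toy reward-modulated model: sigmoidal branch units stand in for local NMDA spikes, a stochastic somatic spike is compared with a target, and the reward gates an eligibility trace built from presynaptic activity, the branch signal, and the somatic outcome. All dimensions and constants are assumptions, not the paper's derived rule.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_branches, n_syn, n_patterns = 4, 5, 40
# A fixed "teacher" generates learnable binary targets for random inputs.
W_teacher = rng.normal(0, 1.0, (n_branches, n_syn))
X = rng.integers(0, 2, (n_patterns, n_branches, n_syn)).astype(float)

def forward(w, x):
    branch = sigmoid((w * x).sum(axis=1) - 1.0)       # local NMDA-like spikes
    p_spike = sigmoid(branch.sum() - n_branches / 2)  # somatic output
    return branch, p_spike

targets = np.array([float(forward(W_teacher, x)[1] > 0.5) for x in X])

def avg_reward(w):
    return float(np.mean([1.0 - abs(t - forward(w, x)[1])
                          for x, t in zip(X, targets)]))

W = rng.normal(0, 0.1, (n_branches, n_syn))
r_before = avg_reward(W)
lr = 0.2
for epoch in range(300):
    for x, tgt in zip(X, targets):
        branch, p = forward(W, x)
        spike = float(rng.random() < p)          # stochastic somatic spike
        reward = 1.0 if spike == tgt else 0.0    # delayed reinforcement
        # Eligibility combines presynaptic input, the branch (NMDA) signal
        # and the somatic outcome; reward minus a 0.5 baseline gates it.
        elig = (spike - p) * (branch * (1 - branch))[:, None] * x
        W += lr * (reward - 0.5) * elig
r_after = avg_reward(W)
```

The key point of the abstract survives even in this toy: the somatic outcome is available to every branch immediately, so no extra backpropagating connections are needed to form the eligibility trace.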



cAMP-response element binding (CREB) proteins are involved in transcriptional regulation in a number of cellular processes (e.g., neural plasticity and circadian rhythms). The CREB family contains activators and repressors that may interact through positive and negative feedback loops. These loops can be generated by auto- and cross-regulation of expression of CREB proteins, via CRE elements in or near their genes. Experiments suggest that such feedback loops may operate in several systems (e.g., Aplysia and rat). To understand the functional implications of such feedback loops, which are interlocked via cross-regulation of transcription, a minimal model with a positive and negative loop was developed and investigated using bifurcation analysis. Bifurcation analysis revealed diverse nonlinear dynamics (e.g., bistability and oscillations). The stability of steady states or oscillations could be changed by time delays in the synthesis of the activator (CREB1) or the repressor (CREB2). Investigation of stochastic fluctuations due to small numbers of molecules of CREB1 and CREB2 revealed a bimodal distribution of CREB molecules in the bistability region. The robustness of the stable HIGH and LOW states of CREB expression to stochastic noise differs, and a critical number of molecules was required to sustain the HIGH state for days or longer. Increasing positive feedback or decreasing negative feedback also increased the lifetime of the HIGH state, and persistence of this state may correlate with long-term memory formation. A critical number of molecules was also required to sustain robust oscillations of CREB expression. If a steady state was near a deterministic Hopf bifurcation point, stochastic resonance could induce oscillations. 
This comparative analysis of deterministic and stochastic dynamics not only provides insights into the possible dynamics of CREB regulatory motifs, but also demonstrates a framework for understanding other regulatory processes with similar network architecture.
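To see how such a feedback motif yields bistability, here is a minimal deterministic sketch: one activator with Hill-type positive autoregulation, integrated to steady state from two initial conditions. The negative CREB2 loop, delays, and stochastic effects are omitted, and all rate constants are illustrative, not fitted values.

```python
# Minimal positive-feedback model in the spirit of the CREB1 motif:
# CREB1 activates its own synthesis (Hill kinetics) and is degraded
# linearly; two attracting steady states coexist (bistability).
def step(c1, dt=0.01, k_syn=1.0, k_basal=0.05, k_deg=0.5, K=1.0, n=4):
    dc1 = k_basal + k_syn * c1**n / (K**n + c1**n) - k_deg * c1
    return c1 + dt * dc1

def steady_state(c1, steps=20000):
    # Simple forward-Euler integration to t = 200 (arbitrary units).
    for _ in range(steps):
        c1 = step(c1)
    return c1

low = steady_state(0.1)    # starts in the basin of the LOW state
high = steady_state(2.0)   # starts in the basin of the HIGH state
```

The two trajectories settle at well-separated concentrations, the deterministic counterpart of the bimodal CREB distribution seen in the stochastic simulations; noise can kick a small-molecule-number system across the unstable middle point between the two states.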



The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together in order to construct such functional networks. Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive co-operative ('winner-take-all', WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.
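The target circuit of such a developmental sequence, a winner-take-all network, can be sketched with rate units that excite themselves and compete through a shared inhibitory unit. The hand-picked weights below stand in for the homeostatically calibrated ones produced by the simulated development.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def wta(inputs, steps=500, dt=0.1, w_exc=1.2, w_inh=2.0):
    # Excitatory rates x compete via one shared inhibitory unit; with
    # self-excitation > 1 the differences between units are amplified
    # until only the most strongly driven unit stays active.
    x = np.zeros_like(inputs)
    inh = 0.0
    for _ in range(steps):
        drive = inputs + w_exc * x - w_inh * inh
        x = x + dt * (-x + relu(drive))
        inh = inh + dt * (-inh + x.sum())
    return x

rates = wta(np.array([1.0, 1.3, 0.9]))
winner = int(np.argmax(rates))
```

The middle unit, with the largest input, ends up as the sole active unit while the others are driven below threshold: the soft-max-like selection that makes WTA circuits a useful computational primitive for the developing cortex.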


Wireless communication is the transfer of information from one place to another without using wires. From the earliest times, humans have felt the need to develop techniques for remote communication; from this need arose smoke signals, communication by reflecting sunlight in mirrors, and so on. Today, however, telecommunication relies on electronic devices such as the telephone, television, radio, and computer. Radio and television are used for one-way communication; the telephone and computer are used for two-way communication. Wireless networks offer almost unlimited mobility: we can access the network almost anywhere, at any time. In wired networks we are restricted to using services in a fixed area. The demand for wireless is increasing very fast; everybody wants broadband services anywhere, anytime. WiMAX (Worldwide Interoperability for Microwave Access) is a broadband wireless technology, based on IEEE 802.16-2004 and IEEE 802.16e-2005, that appears to meet this demand. WiMAX is a system that allows wireless data transmission in areas with a radius of up to 48 km. It is designed as a wireless alternative to ADSL and as a way to connect nodes in wireless metropolitan area networks. Unlike wireless systems that are in most cases limited to about 100 meters, it provides greater coverage and more bandwidth. WiMAX promises to achieve high data transmission rates over large areas with a great number of users. This alternative to common broadband access networks such as DSL or Wi-Fi can quickly bring broadband access to rural and developing areas around the world. This paper is a study of WiMAX technology and its market situation. First, the paper explains the technical aspects of WiMAX, giving an overview of the WiMAX standards, the physical layer, the MAC layer, and the WiMAX network architecture.
Second, the paper addresses the market, providing an overview of the development and deployment of WiMAX and closing with its future development trend. SUMMARY: Wireless communication is understood as the transfer of information from one place to another without a physical medium such as a cable. Going back to the beginnings of human existence, we see that humans have always felt the need to develop techniques for communicating with each other at a distance. From this need arose techniques as ancient as communication by smoke signals or by reflecting sunlight in mirrors, among others. Human curiosity and the need to communicate at a distance led Alexander Graham Bell to invent the telephone in 1876. The appearance of a device that made it possible to hear, at a distance, the voice of the person one wanted to talk to was a revolution not only in the technological landscape but also in the social one, since besides enabling long-distance communication it solved the problem of communicating in "real time". Following this invention, communication technology has advanced significantly, particularly in wireless communications. The first call from a mobile terminal was made in 1973, although it was not until 1983 that such terminals were commercialized, which changed the habits and customs of society. Since the appearance of the first mobile phone, market growth has been exponential, which has translated into an unforeseen demand for new applications integrated into mobile devices to satisfy the needs society generates day by day.
After long-distance wireless calls were achieved, the next step was the creation of SMS (Short Message Service), which was another revolution and also reduced the cost of communicating for the user. But the great challenge for the mobile communications industry arose with the appearance of the Internet. Everyone felt the need to be able to connect to that great database, the Internet, anywhere and at any time. The first Internet connections from mobile devices were made through WAP (Wireless Application Protocol) technology, until the appearance of GPRS, which enabled connection via the TCP/IP protocol. From these connections other technologies have emerged, such as EDGE and HSDPA, which allowed and still allow Internet connection from mobile devices. Today the demand for wireless network services is growing rapidly and exponentially; everyone wants broadband services anywhere, at any time. This document analyzes WiMAX (Worldwide Interoperability for Microwave Access), a broadband technology based on the IEEE 802.16 standard created to serve the emerging broadband demand, from a technological point of view, giving an overview of the technical side of the technology, and from a market point of view, analyzing the deployment and development of the technology from a business perspective. WiMAX is a technology that allows wireless data transmission in areas with a radius of up to 48 km and is designed as a wireless alternative to ADSL and as a way to connect wireless network nodes in metropolitan areas.
Unlike existing wireless systems, which are mostly limited to a few hundred meters, WiMAX offers greater coverage and more bandwidth to support new applications, as well as achieving high data transmission rates over large areas with a large number of users. It is an alternative to broadband access networks such as DSL or Wi-Fi that can quickly bring broadband access to places such as rural areas and developing regions around the world. There are two WiMAX technologies: fixed WiMAX (based on the IEEE 802.16d-2004 standard) and mobile WiMAX (based on the IEEE 802.16e-2005 standard). The fixed technology is designed for point-to-multipoint communications, while the mobile one is designed for multipoint-to-multipoint communications. Mobile WiMAX is based on OFDM, which offers advantages in terms of latency, spectral efficiency, and advanced antenna support. OFDM modulation is very robust against multipath, which is very common in broadcast channels, against fading due to weather conditions, and against RF interference. Once WiMAX, with the right characteristics to meet market demand, had been created, the next step had to be taken: the telecommunications industry had to be convinced that this technology really is the solution, so that it would support its introduction into the broadband wireless market. This is where the market study carried out in this document comes into play.
WiMAX faces a demanding market in which, besides meeting the technical demand, it must offer economic returns to the mobile communications industry, and more specifically to the mobile operators, who within the telecommunications sector are the ones who ultimately have to trust the technology to serve their users, since at the end of the day all those users want is for their mobile device to satisfy their needs, regardless of the technology used to access the wireless broadband network. Perhaps the biggest problem WiMAX has faced has been the state of the world economy: WiMAX began its journey at one of the worst possible moments, but it still presents itself as a technology capable of helping the world move forward in these hard times. Finally, one of the current debates in the mobile communications sector is analyzed: WiMAX vs. LTE. As the document shows, neither technology will really emerge victorious over the other; rather, both technologies will be able to coexist and work together.


This paper presents ideas for a new neural network architecture that, when dealing with patterns, can be compared to a Taylor analysis. The architecture is based on linear activation functions in an axo-axonic arrangement. In a biological axo-axonic connection, the weight of the connection between two neurons is given by the output of a third neuron. This idea can be implemented in the so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first one outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There is a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal, passing this information to other individuals in order to reach the optimum solution. The latter are based on genetic models, in which individuals can die and new individuals are created by combining the information of living ones, or on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm algorithm.
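The core idea can be sketched in one dimension: with linear activations, a weight-generating first network w(x) = a·x + b feeding a second network y = w(x)·x + c makes the composite output quadratic in x, a Taylor-like expansion. Below, the three coefficients are fitted with a tiny global-best particle swarm; all hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Composite axo-axonic model: y = (a*x + b)*x + c = a*x^2 + b*x + c.
x = np.linspace(-1, 1, 50)
target = 1.0 + 2.0 * x + 3.0 * x**2      # toy function to approximate
features = np.stack([x**2, x, np.ones_like(x)], axis=1)  # basis for [a, b, c]

def mse(c):
    return float(((features @ c - target) ** 2).mean())

# Tiny global-best particle swarm over the three coefficients.
n_particles = 30
pos = rng.uniform(-5, 5, (n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for it in range(200):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    better = vals < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
final_mse = mse(gbest)
```

A purely linear model y = b·x + c could never fit the quadratic target; the input-dependent weight supplied by the first network is what buys the extra approximation power without nonlinear activations.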


Este proyecto está desarrollado sobre la seguridad de redes, y más concretamente en la seguridad perimetral. Para mostrar esto se hará una definición teórico-práctica de un sistema de seguridad perimetral. Para ello se ha desglosado el contenido en dos partes fundamentales, la primera incide en la base teórica relativa a la seguridad perimetral y los elementos más importantes que intervienen en ella, y la segunda parte, que es la implantación de un sistema de seguridad perimetral habitual en un entorno empresarial. En la primera parte se exponen los elementos más importantes de la seguridad perimetral, incidiendo en elementos como pueden ser cortafuegos, IDS/IPS, antivirus, proxies, radius, gestores de ancho de banda, etc. Sobre cada uno de ellos se explica su funcionamiento y posible configuración. La segunda parte y más extensa a la vez que práctica, comprende todo el diseño, implantación y gestión de un sistema de seguridad perimetral típico, es decir, el que sería de aplicación para la mayoría de las empresas actuales. En esta segunda parte se encontrarán primeramente las necesidades del cliente y situación actual en lo que a seguridad se refiere, con los cuales se diseñará la arquitectura de red. Para comenzar será necesario definir formalmente unos requisitos previos, para satisfacer estos requisitos se diseñará el mapa de red con los elementos específicos seleccionados. La elección de estos elementos se hará en base a un estudio de mercado para escoger las mejores soluciones de cada fabricante y que más se adecúen a los requisitos del cliente. Una vez ejecutada la implementación, se diseñará un plan de pruebas, realizando las pruebas de casos de uso de los diferentes elementos de seguridad para asegurar su correcto funcionamiento. 
El siguiente paso, una vez verificado que todos los elementos funcionan de forma correcta, será diseñar un plan de gestión de la plataforma, en el que se detallan las rutinas a seguir en cada elemento para conseguir que su funcionamiento sea óptimo y eficiente. A continuación se diseña una metodología de gestión, en las que se indican los procedimientos de actuación frente a determinadas incidencias de seguridad, como pueden ser fallos en elementos de red, detección de vulnerabilidades, detección de ataques, cambios en políticas de seguridad, etc. Finalmente se detallarán las conclusiones que se obtienen de la realización del presente proyecto. ABSTRACT. This project is based on network security, specifically on security perimeter. To show this, a theoretical and practical definition of a perimeter security system will be done. This content has been broken down into two main parts. The first part is about the theoretical basis on perimeter security and the most important elements that it involves, and the second part is the implementation of a common perimeter security system in a business environment. The first part presents the most important elements of perimeter security, focusing on elements such as firewalls, IDS / IPS, antivirus, proxies, radius, bandwidth managers, etc... The operation and possible configuration of each one will be explained. The second part is larger and more practical. It includes all the design, implementation and management of a typical perimeter security system which could be applied in most businesses nowadays. The current status as far as security is concerned, and the customer needs will be found in this second part. With this information the network architecture will be designed. In the first place, it would be necessary to define formally a prerequisite. To satisfy these requirements the network map will be designed with the specific elements selected. 
The selection of these elements will be based on market research, choosing from each manufacturer the solutions that perform best and are best suited to the customer's requirements. After the implementation has been carried out, a test plan will be designed, testing the use cases of the different security elements to ensure their correct operation. In the next phase, once the proper operation of all the elements has been verified, a platform management plan will be designed. It will detail the routines to follow for each element so that it works optimally and efficiently. Then a management methodology will be designed, providing the procedures for responding to certain security incidents, such as network element failures, vulnerability detection, attack detection, security policy changes, etc. Finally, the conclusions drawn from the completion of this project will be presented.
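A use-case test plan of the kind this abstract describes often reduces to checking that the deployed security elements allow exactly the traffic the policy permits and block the rest. The following is a minimal sketch of such a check; the host names, ports and expected outcomes are hypothetical illustrations, not taken from the project:

```python
import socket

# Hypothetical policy entries: (host, port, should_be_reachable).
# These illustrate the kind of use-case tests the plan would contain.
TEST_CASES = [
    ("dmz-web.example.com", 443, True),     # HTTPS to the DMZ must be allowed
    ("dmz-web.example.com", 22, False),     # SSH from outside must be blocked
    ("intranet.example.com", 3389, False),  # RDP to the LAN must be blocked
]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_tests(cases):
    """Compare observed reachability against the expected policy."""
    results = []
    for host, port, expected in cases:
        observed = port_open(host, port)
        results.append((host, port, expected, observed, expected == observed))
    return results

if __name__ == "__main__":
    for host, port, expected, observed, ok in run_tests(TEST_CASES):
        status = "PASS" if ok else "FAIL"
        print(f"{status} {host}:{port} expected "
              f"{'open' if expected else 'closed'}")
```

In practice such checks would be run from each network segment in turn (Internet, DMZ, LAN), since a perimeter policy is defined per zone, not globally.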

Relevância:

60.00%

Publicador:

Resumo:

The ability to generate entangled photon pairs over a broad wavelength range opens the door to the simultaneous distribution of entanglement to multiple users in a network by using centralized sources and flexible wavelength-division multiplexing schemes. Here, we show the design of a metropolitan optical network consisting of tree-type access networks, whereby entangled photon pairs are distributed to any pair of users, independent of their location. The network is constructed from commercial off-the-shelf components and uses the existing infrastructure, which allows for moderate deployment costs. We further develop a channel plan and a network-architecture design to provide a direct optical path between any pair of users, thus allowing classical and one-way quantum communication as well as entanglement distribution. This enables the simultaneous operation of multiple quantum information technologies. Finally, we present a more flexible backbone architecture that overcomes the load limitations of the original network design by extending its reach, number of users and capabilities.
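A channel plan like the one described typically exploits energy conservation in the pair source: when a broadband downconversion source is used, a photon detected in a channel above the source's central frequency has its entangled partner in the mirror channel below it, so each pair of users can be served by one mirrored pair of DWDM channels. The sketch below illustrates this assignment on a 100 GHz ITU grid; the central frequency, grid spacing and user names are illustrative assumptions, not values from the paper:

```python
# Sketch: pairing DWDM channels symmetrically around a central frequency.
# Energy conservation in the downconversion source means a photon at
# f0 + k*spacing has its entangled partner at f0 - k*spacing.

F0_THZ = 193.1       # assumed central (degeneracy) frequency, ITU grid anchor
SPACING_THZ = 0.1    # 100 GHz channel spacing

def channel_freq(k: int) -> float:
    """Frequency of grid channel k, counted from the central channel."""
    return F0_THZ + k * SPACING_THZ

def correlated_pairs(n_pairs: int):
    """Return (signal, idler) frequency pairs for n_pairs user pairs."""
    return [(channel_freq(k), channel_freq(-k)) for k in range(1, n_pairs + 1)]

def assign_users(users):
    """Map consecutive user pairs (A, B) to correlated channel pairs."""
    pairs = correlated_pairs(len(users) // 2)
    plan = {}
    for (a, b), (f_sig, f_idl) in zip(zip(users[0::2], users[1::2]), pairs):
        plan[(a, b)] = (f_sig, f_idl)
    return plan

if __name__ == "__main__":
    plan = assign_users(["Alice", "Bob", "Carol", "Dave"])
    for (a, b), (f_sig, f_idl) in plan.items():
        print(f"{a} <-> {b}: {f_sig:.1f} THz / {f_idl:.1f} THz")
```

Because every signal/idler pair averages to the same central frequency, passive wavelength demultiplexers at the access nodes suffice to route each user's channel without any active switching of the quantum signal.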

Relevância:

60.00%

Publicador:

Resumo:

This doctoral thesis presents a series of studies in the field of heritage based on monitoring methodologies using sensor networks and non-invasive techniques, with the aim of making new contributions to preventive conservation by tracking decay damage or preventing it. The monitoring methodologies, based on the deployment of three-dimensional data-logger networks, address short-term microclimatic, comfort and energy studies, drawing conclusions on the energy efficiency of three heating systems widely used in churches of the central region of the Iberian Peninsula and covering their impact on occupant comfort and on the decay of heritage or built elements. Several wireless sensor network platforms were also deployed, and this thesis analyses which of them yields the best results in the heritage context for long-term monitoring, considering communications, power consumption and network configuration. Once the platform with the best comparative results was identified, a methodology is presented for studying communication quality with it across multiple cultural and natural heritage scenarios, which will serve to establish a set of aspects to consider when deploying wireless sensor networks in future monitoring scenarios. As with the data-logger-based sensor networks, the monitoring carried out in this thesis with the different wireless platforms allowed the detection of numerous decay phenomena, which are described throughout the research and whose tracking constitutes a contribution to damage prevention in the various scenarios.
The thesis also contributes to preventive conservation through monitoring with various non-invasive techniques, such as infrared thermography, surface moisture measurements with a protimeter, high-resolution electrical resistivity surveying and ground-penetrating radar (georadar) surveying. Various contributions and conclusions are developed regarding the advantages and/or limitations of these techniques, analysing the suitability of applying each of them at different phases of analysis or with different capabilities for detecting or characterising damage. The combined use of these techniques was studied in a real scenario presenting severe damp damage, where it proved possible to characterise the origin of that damage. ABSTRACT This doctoral dissertation discusses field research conducted to monitor heritage assets with sensor networks and other non-invasive techniques. The aim pursued was to contribute to conservation by tracking or preventing decay-induced damage. Monitoring methodologies based on three-dimensional data-logger networks were used in short-term micro-climatic, comfort and energy studies to draw conclusions about the energy efficiency of three heating systems widely used in churches in central Iberia. The impact of these systems on occupant comfort and on the decay of heritage or built elements was also explored. Different wireless sensor platforms were deployed and analysed to determine which delivered the best results for long-term heritage monitoring from the standpoints of communications, energy demand and network architecture. A methodology was subsequently designed to study communication quality with that platform in a number of cultural and natural heritage scenarios, helping establish the considerations to be borne in mind when deploying wireless sensor networks for heritage monitoring in the future.
As in the data-logger-based sensor networks, the monitoring conducted in this research with wireless platforms identified many instances of decay, described hereunder. Tracking those situations will help prevent damage in the respective scenarios. The research also contributes to preventive conservation based on non-invasive monitoring using techniques such as infrared thermography, protimeter-based surface damp measurements, high-resolution electrical resistivity surveys and georadar analysis. The conclusions drawn address the advantages and drawbacks of each technique, its suitability for the various phases of analysis, and its capacity to detect or characterise damage. This dissertation also describes the combined use of these techniques, which led to the identification of the origin of severe damp-induced damage in a real scenario.
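Microclimate monitoring for preventive conservation, as described here, usually means screening each day's temperature and relative-humidity samples against risk criteria (sustained high humidity, large daily swings). A minimal sketch of such a screening step follows; the threshold values are illustrative assumptions loosely inspired by common conservation guidance, not figures from the thesis:

```python
# Sketch: flagging risk conditions in daily microclimate data from a
# sensor network. Thresholds are illustrative assumptions only.

RH_MAX = 75.0          # % RH above which mould growth becomes likely
RH_DAILY_SWING = 10.0  # % RH daily fluctuation considered risky
T_DAILY_SWING = 5.0    # degC daily fluctuation considered risky

def daily_risks(readings):
    """readings: list of (temp_C, rh_pct) samples for one day.
    Returns the risk labels triggered by that day's data."""
    temps = [t for t, _ in readings]
    rhs = [rh for _, rh in readings]
    risks = []
    if max(rhs) > RH_MAX:
        risks.append("high-humidity")
    if max(rhs) - min(rhs) > RH_DAILY_SWING:
        risks.append("rh-fluctuation")
    if max(temps) - min(temps) > T_DAILY_SWING:
        risks.append("temperature-fluctuation")
    return risks

if __name__ == "__main__":
    # A heating cycle in an unconditioned church: a warm, dry midday spike
    # followed by a cool, damp evening.
    day = [(8.0, 78.0), (12.0, 70.0), (18.0, 55.0), (10.0, 74.0)]
    print(daily_risks(day))  # all three risk labels are triggered
```

Running such a check per node across a three-dimensional network highlights where in the building the risky conditions occur, which is what makes the monitoring useful for locating, not just detecting, potential decay.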