790 results for computer network security


Relevance:

90.00%

Publisher:

Abstract:

The development of a distributed information measurement and control system for optical spectral research on particle beams and plasma objects, and for the execution of laboratory work at the Physics and Engineering Department of Petrozavodsk State University, is described. At the hardware level the system is a complex of automated workplaces joined into a computer network. The key element of the system is the communication server, which supports multi-user operation, distributes resources among clients, monitors the system and provides secure access. The other system components are equipment servers (CAMAC and GPIB servers, a server for access to MCS-196 microcontrollers, and others) and the client programs that carry out data acquisition, accumulation and processing, as well as management of the course of the experiment. This work discusses the network interface designed by the authors, which connects measuring and actuating devices to the distributed information measurement and control system via Ethernet. The interface allows experimental parameters to be controlled through digital devices and monitored by polling analog and digital sensors. The device firmware is written in assembly language and includes libraries for forming Ethernet, IP, TCP and UDP packets.
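The abstract gives no implementation details of the interface protocol; purely as a hedged illustration of the kind of Ethernet-based sensor polling such a system performs, the Python sketch below sends a UDP query to a hypothetical device and decodes a numeric reading. The address, port and message format are assumptions, not taken from the original work.

```python
import socket
import struct

DEVICE_ADDR = ("192.168.0.50", 5005)  # hypothetical device address and port

def poll_sensor(channel: int, timeout: float = 1.0) -> float:
    """Send a one-byte channel query over UDP and decode a float sample in reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(bytes([channel]), DEVICE_ADDR)   # assumed request format
        data, _ = sock.recvfrom(64)
        return struct.unpack(">f", data[:4])[0]      # assumed reply: big-endian IEEE-754 float

if __name__ == "__main__":
    print("channel 0 reading:", poll_sensor(0))
```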

Relevance:

90.00%

Publisher:

Abstract:

Mediation techniques provide interoperability and support integrated query processing among heterogeneous databases. While such techniques help data sharing among different sources, they increase the risk to data security, such as violating access control rules. Successful protection of information by an effective access control mechanism is a basic requirement for interoperation among heterogeneous data sources. This dissertation first identified the challenges that a mediation system must meet in order to achieve both interoperability and security in an interconnected and collaborative computing environment: (1) context-awareness, (2) semantic heterogeneity, and (3) multiple security policy specification. Few existing approaches address all three security challenges in mediation systems. This dissertation provides a modeling and architectural solution to the problem of mediation security that addresses the aforementioned challenges. A context-aware flexible authorization framework was developed to deal with the security challenges faced by mediation systems. The authorization framework consists of two major tasks: specifying security policies and enforcing security policies. First, the security policy specification provides a generic and extensible method to model security policies with respect to the challenges posed by the mediation system. The security policies in this study are specified as 5-tuples followed by a series of authorization constraints, which are identified based on the relationships of the different security components in the mediation system. Two essential features of mediation systems, i.e., the relationships among authorization components and interoperability among heterogeneous data sources, are the focus of this investigation. Second, this dissertation supports effective access control on mediation systems while providing uniform access to heterogeneous data sources. Dynamic security constraints are handled in the authorization phase instead of the authentication phase, so the maintenance cost of the security specification is reduced compared with related solutions.
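The abstract does not enumerate the components of the 5-tuple policies, so the sketch below shows only one plausible encoding (subject, object, action, decision, context) together with a list of authorization-constraint predicates; every field name here is an illustrative assumption rather than the dissertation's actual model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Policy:
    # Hypothetical 5-tuple; the actual components are not given in the abstract.
    subject: str    # who requests access
    object: str     # which mediated source or view is accessed
    action: str     # e.g. "read", "write"
    decision: str   # "permit" or "deny"
    context: str    # e.g. required location or session context
    constraints: List[Callable[[Dict], bool]] = field(default_factory=list)

def authorize(policies: List[Policy], request: Dict) -> bool:
    """Permit the request if some matching policy permits it and all its constraints hold."""
    for p in policies:
        if (p.decision == "permit"
                and p.subject == request["subject"] and p.object == request["object"]
                and p.action == request["action"] and p.context == request.get("context")):
            if all(check(request) for check in p.constraints):
                return True
    return False

policies = [Policy("analyst", "patient_view", "read", "permit", "on_site",
                   [lambda req: req.get("hour", 0) < 18])]
print(authorize(policies, {"subject": "analyst", "object": "patient_view",
                           "action": "read", "context": "on_site", "hour": 10}))
```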

Relevance:

90.00%

Publisher:

Abstract:

Computer networks produce tremendous amounts of event-based data that can be collected and managed to support an increasing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has its own characteristics that make those results inapplicable. Many of these applications are based on time-series data that take the form of time-ordered series of events. Such applications also need to handle large volumes of unexpected events, often modified on the fly, containing conflicting information, and arriving in rapidly changing contexts, while producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications, and this dissertation addresses that critical challenge. It establishes an effective scheme for complex-event semantic correlation. The scheme examines epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because of the distributed nature of event detection, time delays are considered: events are no longer instantaneous, but have a duration associated with them. Existing algorithms for synchronizing time are split into two classes, one of which is argued to provide a faster means of converging time and hence to be better suited for pervasive network management. Besides the temporal dimension, the scheme considers imprecision and uncertainty when an event is detected. A belief value is therefore associated with the semantics and the detection of composite events. This belief value is generated by a consensus among participating entities in a computer network. The scheme taps into the in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. Thus, this dissertation advances knowledge in the field of network management by facilitating the full utilization of the characteristics offered by pervasive, distributed and wireless technologies in contemporary and future computer networks.
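The abstract does not state which combination rule produces the consensus belief value; as one standard instance of belief-theoretic fusion, the hedged sketch below applies Dempster's rule to two nodes' basic belief assignments over the frame {event occurred, event did not occur}, with "Theta" standing for the undecided mass.

```python
def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two basic belief assignments with keys 'E', 'notE', 'Theta'."""
    conflict = m1["E"] * m2["notE"] + m1["notE"] * m2["E"]
    if conflict >= 1.0:
        raise ValueError("total conflict; the sources cannot be combined")
    norm = 1.0 - conflict
    fused = {
        "E":    (m1["E"] * m2["E"] + m1["E"] * m2["Theta"] + m1["Theta"] * m2["E"]) / norm,
        "notE": (m1["notE"] * m2["notE"] + m1["notE"] * m2["Theta"] + m1["Theta"] * m2["notE"]) / norm,
    }
    fused["Theta"] = 1.0 - fused["E"] - fused["notE"]   # remaining undecided mass
    return fused

# Two monitoring nodes report the same composite event with different confidence.
node_a = {"E": 0.7, "notE": 0.1, "Theta": 0.2}
node_b = {"E": 0.6, "notE": 0.2, "Theta": 0.2}
print(combine(node_a, node_b))   # fused belief leans more strongly towards the event
```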

Relevance:

90.00%

Publisher:

Abstract:

Wireless sensor networks are emerging as effective tools for the gathering and dissemination of data. They can be applied in many fields, including health, environmental monitoring, home automation and the military. As in all other computing systems, it is necessary to include security features so that security-sensitive data traversing the network is protected. However, traditional security techniques cannot be applied to wireless sensor networks, due to the constraints on battery power, memory, and the computational capacity of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. It presents a trust-based cluster head election mechanism used to elect new cluster heads. This solution prevents a major security breach against the routing protocol, namely, the election of malicious or compromised cluster heads. The dissertation also describes a location-aware, trust-based, compromised-node detection and isolation mechanism. Both of these mechanisms rely on the ability of a node to monitor its neighbors. Using neighbor monitoring techniques, nodes are able to determine their neighbors' reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks, and the feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks: a scalable lightweight enforcement algorithm using evolutionary game theory is designed, and its effectiveness is validated through mathematical analysis and simulation. This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models, enabling others to build on this work towards the creation of highly trusted wireless sensor networks and facilitating their full utilization in fields ranging from civilian to military applications.
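The probabilistic model used for neighbor reputation is not specified in the abstract; a minimal sketch of one common choice, a Beta-distribution reputation record updated through neighbor monitoring, plus a hypothetical trust threshold for cluster-head eligibility, is shown below as an illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NeighborRecord:
    """Beta reputation: alpha counts cooperative observations, beta counts misbehavior."""
    alpha: float = 1.0
    beta: float = 1.0

    def observe(self, cooperative: bool) -> None:
        if cooperative:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)   # mean of Beta(alpha, beta)

def eligible_cluster_heads(neighbors: Dict[str, NeighborRecord],
                           threshold: float = 0.8) -> List[str]:
    """Hypothetical rule: only neighbors whose trust exceeds the threshold may be elected."""
    return [nid for nid, rec in neighbors.items() if rec.trust >= threshold]

n = {"node7": NeighborRecord(), "node9": NeighborRecord()}
for _ in range(20):
    n["node7"].observe(True)      # consistently forwards packets
    n["node9"].observe(False)     # consistently drops packets
print(eligible_cluster_heads(n))  # ['node7']
```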

Relevance:

90.00%

Publisher:

Abstract:

Universities have had to adapt to the new communication models that emerged in the Internet era. Within these new paradigms, social networks have broken through, and Twitter has established itself as one of the most important. The aim of this research is to show that there is a relationship between a university's online presence, defined as the amount of information about it available on the Internet, and its Twitter account. To that end, the relationship between online presence and the official profiles of the five universities of the Basque Country and Navarre was analyzed. The results showed a significant correlation between the institutions' online presence and the number of followers of their respective accounts. Secondly, this research asked whether Twitter can be used to strengthen a university's online presence. A second hypothesis was therefore formulated to analyze whether having several Twitter accounts would increase the universities' online presence. The findings for this second hypothesis showed a highly significant correlation between having several Twitter profiles and the universities' online presence. This demonstrates the importance of online presence for Twitter accounts and the relevance of Twitter for strengthening the online presence of these institutions.

Relevance:

90.00%

Publisher:

Abstract:

Cybercriminals ramp up their efforts with sophisticated techniques while defenders gradually update their typical security measures. Attackers often have a long-term interest in their targets, yet due to a number of factors, such as scale, architecture and non-productive traffic, they are difficult to detect using typical intrusion detection techniques. Cyber early warning systems (CEWS) aim at alerting on such attempts in their nascent stages using preliminary indicators. The design and implementation of such systems involves numerous research challenges, such as a generic set of indicators, intelligence gathering, uncertainty reasoning and information fusion. This paper discusses these challenges and presents the reader with compelling motivation. A carefully deployed empirical analysis using a real-world attack scenario and a real network traffic capture is also presented.

Relevance:

90.00%

Publisher:

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. An intruder re-crafts web form input and query strings used in web requests with the malicious intent of compromising the confidential data stored in the back-end database. The database is the most valuable data source, and thus intruders are unrelenting in evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for pre-processing SQLIA features suitable for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known obstacle to applying artificial intelligence effectively against ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML-based SQLIA detection and prevention model. In the numerical encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA), combined with a proxy and an SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to SQLIA, this model allows web requests processed at the proxy and deemed to contain injected query strings to be prevented from reaching the target back-end database. This paper evaluates the performance metrics of a dataset obtained by the numerical encoding of features ontology in Microsoft Azure Machine Learning (MAML) Studio using a Two-Class Support Vector Machine (TCSVM) binary classifier; this methodology then forms the subject of the empirical evaluation.
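The numerical-attribute encoding ontology itself is not detailed in the abstract; the sketch below shows a simplified, assumed encoding of a single web-request parameter into a numeric feature vector (length, quote count, comment markers, SQL-keyword hits, presence of an equality test) of the kind a binary classifier such as TCSVM could consume. The chosen features are illustrative assumptions, not the paper's ontology.

```python
import re
from typing import List

SQL_KEYWORDS = ("select", "union", "insert", "update", "delete", "drop", "or", "and")

def encode(param: str) -> List[int]:
    """Map a raw request parameter to a fixed-length vector of numeric attributes."""
    lowered = param.lower()
    return [
        len(param),                                   # raw length
        param.count("'") + param.count('"'),          # quote characters
        lowered.count("--") + lowered.count("/*"),    # SQL comment markers
        sum(len(re.findall(rf"\b{kw}\b", lowered)) for kw in SQL_KEYWORDS),
        int("=" in param),                            # equality test present
    ]

print(encode("id=42"))               # benign-looking parameter
print(encode("id=1' OR '1'='1"))     # parameter carrying an injected tautology
```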

Relevance:

90.00%

Publisher:

Abstract:

With wireless vehicular communications, Vehicular Ad Hoc Networks (VANETs) enable numerous applications that enhance traffic safety, traffic efficiency, and the driving experience. However, VANETs also impose severe security and privacy challenges that need to be thoroughly investigated. In this dissertation, we enhance the security, privacy, and applications of VANETs by 1) designing application-driven security and privacy solutions for VANETs, and 2) designing appealing VANET applications with proper security and privacy assurance. First, the security and privacy challenges of VANETs with the most application significance are identified and thoroughly investigated. Combining theoretical novelty with realistic considerations, the proposed security and privacy schemes are especially appealing for VANETs. Specifically, multi-hop communications in VANETs suffer from packet dropping, packet tampering, and communication failures, which have not been satisfactorily tackled in the literature. Thus, a lightweight reliable and faithful data packet relaying framework (LEAPER) is proposed to ensure reliable and trustworthy multi-hop communications by enhancing the cooperation of neighboring nodes. Message verification, including both content and signature verification, is generally computation-intensive and imposes severe scalability issues on each node. The resource-aware message verification (RAMV) scheme is proposed to ensure resource-aware, secure, and application-friendly message verification in VANETs. On the other hand, to make VANETs acceptable to privacy-sensitive users, the identity and location privacy of each node should be properly protected. To this end, a joint privacy and reputation assurance (JPRA) scheme is proposed to synergistically support privacy protection and reputation management by reconciling their inherently conflicting requirements. In addition, the privacy implications of short-time certificates are thoroughly investigated in a short-time certificates-based privacy protection (STCP2) scheme, to make privacy protection in VANETs feasible with short-time certificates. Second, three novel solutions, namely VANET-based ambient ad dissemination (VAAD), general-purpose automatic survey (GPAS), and VehicleView, are proposed to support appealing value-added applications based on VANETs. These solutions all follow practical application models, and an incentive-centered architecture is proposed for each solution to balance the conflicting requirements of the involved entities. The critical security and privacy challenges of these applications are also investigated and addressed with novel solutions. With proper security and privacy assurance, these solutions thus show great application significance and economic potential for VANETs. By enhancing the security, privacy, and applications of VANETs, this dissertation fills the gap between existing theoretical research and the realistic implementation of VANETs, facilitating their realistic deployment.

Relevance:

90.00%

Publisher:

Abstract:

Network monitoring is of paramount importance for effective network management: it allows operators to constantly observe the network's behavior to ensure it is working as intended, and it can trigger both automated and manual remediation procedures in case of failures and anomalies. The concept of SDN decouples the control logic from the legacy network infrastructure to perform centralized control of multiple switches in the network; in this context, the responsibility of the switches is only to forward packets according to the flow control instructions provided by the controller. However, since current SDN switches only expose simple per-port and per-flow counters, the controller has to do almost all the processing to determine the network state, which causes significant communication overhead and excessive latency for monitoring purposes. The absence of programmability in the SDN data plane prompted the advent of programmable switches, which allow developers to customize the data-plane pipeline and implement novel programs operating directly in the switches. This means that certain monitoring tasks can be offloaded to programmable data planes, to perform fine-grained monitoring even at very high packet processing speeds. Given the central importance of network monitoring exploiting programmable data planes, the goal of this thesis is to enable a wide range of monitoring tasks in programmable switches, with a specific focus on those equipped with programmable ASICs. Indeed, most network monitoring solutions available in the literature do not take the computational and memory constraints of programmable switches into due account, preventing, de facto, their successful implementation in commodity switches; this thesis shows that such monitoring tasks can nonetheless be executed in programmable switches. Our evaluations show that the contributions in this thesis can be used by network administrators as well as network security engineers to better understand the network status according to different monitoring metrics, and thus prevent network infrastructure and service outages.
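As a rough software model of why offloading helps, the sketch below emulates a memory-constrained per-flow byte counter of the sort a programmable data plane could maintain, leaving only coarse aggregates for the controller to pull; the register count and hashing scheme are assumptions, not taken from the thesis.

```python
import hashlib
from typing import Tuple

NUM_REGISTERS = 1024                  # assumed register array size
registers = [0] * NUM_REGISTERS

def flow_index(five_tuple: Tuple) -> int:
    """Hash the flow 5-tuple into a fixed-size register array."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_REGISTERS

def count_packet(five_tuple: Tuple, length: int) -> None:
    registers[flow_index(five_tuple)] += length   # bytes attributed to (a hash of) this flow

count_packet(("10.0.0.1", "10.0.0.2", 6, 1234, 80), 1500)
count_packet(("10.0.0.1", "10.0.0.2", 6, 1234, 80), 1500)
print(max(registers))   # coarse heavy-hitter signal the controller can poll infrequently
```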

Relevance:

80.00%

Publisher:

Abstract:

With the creation of network theory, recent years have witnessed a scientific revolution of an interdisciplinary character. It is not an entirely new theory, having been preceded by the creation of random graph theory by P. Erdős in the sixties. The latter is a purely mathematical theory, which is why we wrote "graph" rather than "network". Only recently can we speak of an effective theory of real networks, thanks to the abandonment of some of the essential ideas advanced by Erdős, in particular the idea of starting from a previously given set of nodes, which are then connected at random with probability p. This general framework began to be modified by the so-called "small-world" model proposed in 1998 by Duncan Watts and Steve Strogatz, a modification that became even more radical when, in 1999, Albert Barabási and collaborators proposed a model in which nodes are progressively born and connected according to a preference function: a node connects in proportion to the links the other nodes already possess, so the more links a node has, the higher the probability that it receives further links.
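The preferential-attachment rule described above translates almost directly into code; the short Python sketch below grows a graph in which each new node attaches to m existing nodes chosen with probability proportional to their current degree (the Barabási-Albert mechanism). The parameter values are illustrative.

```python
import random
from typing import List, Tuple

def barabasi_albert(n: int, m: int = 2, seed: int = 0) -> List[Tuple[int, int]]:
    """Grow an n-node graph where each new node links to m degree-proportional targets."""
    random.seed(seed)
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]   # small seed clique
    # Each node appears in this list once per incident edge, so uniform sampling
    # from it is exactly degree-proportional sampling.
    endpoints = [v for e in edges for v in e]
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

g = barabasi_albert(1000, m=2)
degrees = {}
for u, v in g:
    degrees[u] = degrees.get(u, 0) + 1
    degrees[v] = degrees.get(v, 0) + 1
print(max(degrees.values()))   # hubs emerge: a few nodes collect many links
```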

Relevance:

80.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

80.00%

Publisher:

Abstract:

This work is based on the PST2200 power network simulator of the Power Systems Laboratory (LSE), which was out of service with several known problems, namely: an insulation fault (tripping of the residual-current device); deregulated speed of the prime mover (DC motor); an inoperative excitation circuit of the synchronous machine; the absence of electrical diagrams of the simulator circuits; and unadjusted measurements, with measurement channels whose printed circuit boards had burned out. The work carried out was: surveying and drawing from scratch (no manual exists) the diagrams of the simulator's 10 modules, in particular those with faults or problematic performance, in order to obtain a more detailed view of the circuits and their problems and to intervene to minimize and resolve them; diagnosing the simulator's faults and proposing solutions; and carrying out the proposed and approved interventions. The guiding principles of the interventions were: to increase the robustness of the equipment so as to guarantee its integrity against inappropriate use and the 'exotic' manoeuvres typical of students who are still learning; to update the equipment, bringing it into line with the state of the art; and, as an additional enhancement, to design and implement remote supervision of the simulator's operation over the computer network. Numerous faults were detected: a bad connection of the DC motor to the drive, resulting in a lack of control of the system's network frequency; swapped connections between panels, resulting in various power-supply failures; and faulty electronic measurement boards, which were repaired and also calibrated. Thanks to the sponsorship of the company Schnitt + Sohn, which contributed financially, a modification project was designed and executed for a large part of the simulator, increasing its reliability and thus reducing the frequency of natural failures as well as of those that occur involuntarily because it is a teaching instrument. In addition to the electrical work, a great deal of sheet-metal work was done to alter the structure and supports of repositioned equipment. This work also gives some examples of the calculation and simulation of transmission networks that can be carried out on the simulator, such as the study and simulation of faults in a real production system. As a further enhancement, two power-quality meters (Janitza UMG96S) were monitored over a network using two protocols, Ethernet and Profibus, with a PLC (Omron CJ2M).

Relevance:

80.00%

Publisher:

Abstract:

Over the years the Internet has become a fundamental tool for society and, nowadays, it is practically inevitable to use some of the facilities provided by the worldwide network. Due to its massive growth in recent years, the available IP addresses have been exhausted, making it necessary to develop a new version of the communication protocol that supports all communications on the Internet, the Internet Protocol, version 6 (IPv6). Despite the wide use of the Internet, most of its users are completely unaware of security issues and are therefore exposed to a variety of dangers. Increasing security is also one of the main missions of IPv6, which introduced some relevant security mechanisms. This work aims to study IPv6, focusing especially on issues related to the transition mechanisms from IPv4 to IPv6 and on security aspects. Providing a theoretical approach to the protocol and to security concepts, this document also presents a more technical perspective on IPv6 deployment, and is intended to serve as a support manual for those responsible for implementing version 6 of IP. The three transition methods, which allow the upgrade from IPv4 to IPv6, are analyzed in order to support the team in deciding which transition method (or methods) to use. A substantial part of the work was dedicated to selecting and studying vulnerabilities present in IPv6, how they are exploited by an attacker, how they can be classified, and the processes that reduce the risk of exposure to those vulnerabilities. A set of good practices in network security administration is also presented, to improve the assurance that known problems cannot be exploited by malicious users.
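The abstract does not name the three transition methods it analyzes; as a small illustration of one widely deployed approach (dual-stack operation), the hedged sketch below opens an IPv6 listening socket with IPV6_V6ONLY disabled so that, where the operating system permits it, IPv4 clients reach the same socket as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d).

```python
import socket

# Dual-stack TCP listener: one AF_INET6 socket serving both IPv6 and IPv4 clients.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)   # accept IPv4-mapped addresses
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("::", 8080))
srv.listen(5)
print("listening on [::]:8080 for IPv6 and IPv4-mapped clients")
conn, addr = srv.accept()
print("connection from", addr)    # IPv4 peers appear as ::ffff:a.b.c.d
conn.close()
srv.close()
```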