999 results for Decoupling networks


Relevance:

20.00%

Publisher:

Abstract:

Network-based Intrusion Detection Systems (NIDSs) monitor network traffic for signs of malicious activities that have the potential to disrupt entire network infrastructures and services. A NIDS can only operate when the network traffic is available and can be extracted for analysis. However, with the growing use of encrypted networks such as Virtual Private Networks (VPNs), which encrypt and conceal network traffic, a traditional NIDS can no longer access network traffic for analysis. The goal of this research is to address this problem by proposing a detection framework that allows a commercial off-the-shelf NIDS to function normally in a VPN without any modification. One of the features of the proposed framework is that it does not compromise the confidentiality afforded by the VPN. Our work uses a combination of Shamir's secret-sharing scheme and randomised network proxies to securely route network traffic to the NIDS for analysis. The detection framework is effective against two general classes of attacks: attacks targeted at the network hosts, and attacks targeted at the framework itself. We implement the detection framework as a prototype program and evaluate it. Our evaluation shows that the framework does indeed detect these classes of attacks and does not introduce any additional false positives. Despite the added network overhead, the proposed detection framework is able to consistently detect intrusions through encrypted networks.
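The abstract names Shamir's secret-sharing scheme as a building block but gives no implementation detail. A minimal sketch of a (t, n) threshold scheme over a prime field, with all parameter choices illustrative rather than taken from the paper:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is done modulo P

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is what lets the framework route traffic shares through proxies without weakening the VPN's confidentiality.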


A Wireless Sensor Network (WSN) is a set of sensors that are integrated with a physical environment. These sensors are small in size, and capable of sensing physical phenomena and processing them. They communicate in a multihop manner, due to a short radio range, to form an ad hoc network capable of reporting network activities to a data collection sink. Recent advances in WSNs have led to several new promising applications, including habitat monitoring, military target tracking, natural disaster relief, and health monitoring. A current sensor node, such as the MICA2, uses a 16-bit, 8 MHz Texas Instruments MSP430 micro-controller with only 10 KB RAM, 128 KB program space and 512 KB external flash memory to store measurement data, and is powered by two AA batteries. Due to these unique specifications and a lack of tamper-resistant hardware, devising security protocols for WSNs is complex. Previous studies show that data transmission consumes much more energy than computation. Data aggregation can greatly help to reduce this consumption by eliminating redundant data. However, aggregators are under the threat of various types of attacks. Among them, node compromise is usually considered one of the most challenging for the security of WSNs. In a node compromise attack, an adversary physically tampers with a node in order to extract its cryptographic secrets. This attack can be very harmful depending on the security architecture of the network. For example, when an aggregator node is compromised, it is easy for the adversary to change the aggregation result and inject false data into the WSN. The contributions of this thesis to the area of secure data aggregation are manifold. We firstly define security for data aggregation in WSNs. In contrast with existing secure data aggregation definitions, the proposed definition covers the unique characteristics that WSNs have. Secondly, we analyze the relationship between security services and the adversarial models considered in existing secure data aggregation work in order to provide a general framework of required security services. Thirdly, we analyze existing cryptographic-based and reputation-based secure data aggregation schemes. This analysis covers the security services provided by these schemes and their robustness against attacks. Fourthly, we propose a robust reputation-based secure data aggregation scheme for WSNs. This scheme minimizes the use of heavy cryptographic mechanisms. The security advantages provided by this scheme are realized by integrating aggregation functionalities with: (i) a reputation system, (ii) an estimation theory, and (iii) a change detection mechanism. We have shown that this addition helps defend against most of the security attacks discussed in this thesis, including the On-Off attack. Finally, we propose a secure key management scheme in order to distribute essential pairwise and group keys among the sensor nodes. The design idea of the proposed scheme is to combine Lamport's reverse hash chain with a forward hash chain to provide both past and future key secrecy. The proposal avoids delivering the whole value of a new group key during a group key update; instead, only half of the value is transmitted from the network manager to the sensor nodes. This way, the compromise of a pairwise key alone does not lead to the compromise of the group key. The new pairwise key in our scheme is determined by Diffie-Hellman based key agreement.
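The key management scheme pairs Lamport's reverse hash chain with a forward chain; the reverse-chain half can be sketched as follows (the seed, chain length and hash choice are illustrative, not from the thesis):

```python
import hashlib

def H(b):
    """One-way step used to build and verify the chain."""
    return hashlib.sha256(b).digest()

def make_reverse_chain(seed, length):
    """Lamport-style chain: the manager commits to the last element and
    later releases earlier elements in reverse order as keys."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(H(chain[-1]))
    return chain  # chain[-1] serves as the public commitment

def verify(released, commitment, steps):
    """A key released `steps` positions before the commitment is valid
    iff hashing it `steps` times reproduces the commitment."""
    v = released
    for _ in range(steps):
        v = H(v)
    return v == commitment

chain = make_reverse_chain(b"network-manager-seed", 10)
assert verify(chain[6], chain[-1], 3)  # a later-released key checks out
```

Because H is one-way, a node that learns chain[6] cannot derive the not-yet-released chain[5], which is what gives earlier (future-released) keys their secrecy.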


Online social networking has become one of the most popular Internet applications of the modern era. It gives Internet users access to information that other Internet-based applications cannot. Although many of the popular online social networking web sites are focused on entertainment, sharing information can benefit the healthcare industry in terms of both efficiency and effectiveness. However, the capability to share personal information, the very factor that has made online social networks so popular, is itself a major obstacle when considering information security and privacy. Healthcare can benefit from online social networking if such networks are implemented so that sensitive patient information is safeguarded from improper exposure. Yet in an industry such as healthcare, where the availability of information is crucial for better decision making, information must also be made available to the appropriate parties when they require it. Hence the traditional mechanisms for information security and privacy protection may not be suitable for healthcare. In this paper we propose a solution to privacy enhancement in online healthcare social networks through the use of an information accountability mechanism.


Public awareness of large infrastructure projects, many of which are delivered through networked arrangements, is high for several reasons: these projects often involve significant public investment, they may involve multiple and conflicting stakeholders, and they can have significant environmental impacts (Lim and Yang, 2008). To produce positive outcomes from infrastructure delivery it is imperative that stakeholder "buy-in" be obtained, particularly for decisions relating to the scale and location of infrastructure. Given the likelihood that stakeholders will have different levels of interest and investment in project outcomes, failure to manage this dynamic could jeopardise project delivery by delaying or halting the construction of essential infrastructure. Consequently, stakeholder engagement has come to constitute a critical activity in infrastructure development delivered through networks. This paper draws on stakeholder theory and governance network theory to provide insights into how three multi-level networks within the Roads Alliance in Queensland engage with stakeholders in the delivery of road infrastructure. New knowledge about stakeholders has been obtained by testing a model of Stakeholder Salience and Engagement which combines and extends stakeholder identification and salience theory and the ladder of stakeholder management and engagement. By applying this model, the broad research question "How do governance networks engage with stakeholders?" has been addressed. A multiple embedded case study design was selected as the overall approach to explore, describe, explain and evaluate how stakeholder engagement occurred in three governance networks delivering road infrastructure in Queensland. The outcomes of this research contribute to and extend stakeholder theory by showing how stakeholder salience shapes decisions about the types of engagement processes implemented. Governance network theory is extended by showing how governance networks interact with stakeholders. From a practical perspective, this research gives governance networks an indication of how to engage more effectively with different types of stakeholders.


As regional and continental carbon balances of terrestrial ecosystems become available, it is becoming clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC) organised in soil monitoring networks (SMNs) are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries, and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while the latter use both systematic (usually grid-based) and stratified repeated sampling campaigns (5–10 year intervals) with densities of one site per 10–1,040 km². For paired sites, multiple samples are taken at each site to allow statistical analysis, while for single sites, composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow re-measurement with updated techniques. Information on land management and, where possible, land use history should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of an SMN, provided detailed land management records are available. Such studies are still rare, as most SMNs have been implemented recently or are in progress. Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha⁻¹ for the prediction of SOC stock changes on individual sites to 11.72 Mg C ha⁻¹, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward way to attribute SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach).

Keywords: Soil monitoring networks; Soil organic carbon; Modeling; Sampling design
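The spatially implicit upscaling the review describes amounts to area-weighted averaging of per-stratum site means; a toy sketch, with all areas and measurements hypothetical:

```python
def stratified_soc_stock(strata):
    """Area-weighted mean SOC stock from per-stratum site measurements.
    Each stratum is a soil/land-use/climate unit: (area, [site values])."""
    total_area = sum(area for area, _ in strata)
    weighted = sum(area * (sum(sites) / len(sites)) for area, sites in strata)
    return weighted / total_area

# Hypothetical strata: (area in km^2, SOC measurements in Mg C / ha)
strata = [(100.0, [50.0, 60.0]),   # e.g. a forest unit
          (300.0, [30.0, 34.0])]   # e.g. a cropland unit
assert stratified_soc_stock(strata) == 37.75  # (100*55 + 300*32) / 400
```

Repeating the same calculation on a later campaign and differencing the two estimates gives the national SOC change; the per-stratum variances then propagate into the uncertainties the review quotes.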


This paper describes and evaluates the novel utility of network methods for understanding human interpersonal interactions within social neurobiological systems such as sports teams. We show how collective system networks are supported by the sum of interpersonal interactions that emerge from the activity of system agents (such as players in a sports team). To test this idea we trialled the methodology in analyses of intra-team collective behaviours in the team sport of water polo. We observed that the number of interactions between team members produced varied intra-team coordination patterns of play, differentiating between successful and unsuccessful performance outcomes. Future research on small-world network methodologies needs to formalize measures of node connections in analyses of collective behaviours in sports teams, to verify whether a high frequency of interactions between players is needed to achieve competitive performance outcomes.
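The node-connection bookkeeping such analyses rely on is simple to formalise; a minimal sketch that counts pass links between players and the fraction of possible links actually used (player labels and pass events are hypothetical, not from the study):

```python
from collections import Counter

def interaction_network(passes):
    """Build an undirected interaction network from observed pass events;
    return each player's interaction count and the network density."""
    edges = Counter(tuple(sorted(p)) for p in passes)  # undirected links
    degree = Counter()
    for (a, b), weight in edges.items():
        degree[a] += weight
        degree[b] += weight
    n = len(degree)
    density = len(edges) / (n * (n - 1) / 2)  # fraction of possible links
    return dict(degree), density

# Hypothetical pass events within one possession
passes = [("GK", "D1"), ("D1", "F1"), ("F1", "D1"), ("F1", "GK")]
deg, density = interaction_network(passes)
assert deg == {"GK": 2, "D1": 3, "F1": 3} and density == 1.0
```

Comparing degree distributions and densities between winning and losing phases of play is one concrete way to operationalise the paper's question about interaction frequency and performance.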


The major purpose of Vehicular Ad Hoc Networks (VANETs) is to give motorists access to safety-related messages so that they can react or make life-critical decisions that enhance road safety. Access to safety-related information through VANET communications must therefore be protected, as motorists may make critical decisions in response to emergency situations based on it. If introducing security services into VANETs caused considerable transmission latency or processing delays, it would defeat the purpose of using VANETs to improve road safety. Current research in secure messaging for VANETs appears to focus on employing a certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates transmission delays and introduces a time-consuming verification process into VANET communications. This paper proposes an efficient public key management system for VANETs: the Public Key Registry (PKR) system. The paper not only demonstrates that the proposed PKR system can maintain security, but also argues that it can improve overall performance and scalability at a lower cost than the certificate-based PKC scheme. It is believed that the proposed PKR system will add a new dimension to key management and verification services for VANETs.


With the rapid increase in electrical energy demand, power generation in the form of distributed generation is becoming more important. However, the connection of distributed generators (DGs) to a distribution network or a microgrid can create several protection issues. Protecting these networks with devices based only on current is a challenging task, due to changes in fault current levels and fault current direction. Isolating a faulted segment from such networks is difficult when converter-interfaced DGs are connected, as these DGs limit their output currents during a fault. Furthermore, if DG sources are intermittent, current-sensing protective relays are difficult to set, since the fault current changes with time depending on the availability of DG sources. System restoration after a fault is also a challenging protection issue in a distribution network or microgrid with converter-interfaced DGs. Usually, all the DGs are disconnected immediately after a fault in the network. The safety of personnel and equipment of the distribution network, reclosing with DGs, and arc extinction are the major reasons for these DG disconnections. In this thesis, an inverse time admittance (ITA) relay is proposed to protect a distribution network or a microgrid that has several converter-interfaced DG connections. The ITA relay is capable of detecting faults and isolating a faulted segment from the network, allowing unfaulted segments to operate in either grid-connected or islanded mode. The relay does not base its tripping decision on the fault current alone; it also uses the voltage at the relay location. Therefore, the ITA relay can be used effectively in a DG-connected network in which the fault current level is low or changes with time. Different case studies are considered to evaluate the performance of ITA relays in comparison to some existing protection schemes. The relay performance is evaluated in different types of distribution networks: radial, the IEEE 34-node test feeder and a mesh network. The results are validated through PSCAD simulations and MATLAB calculations. Several experimental tests are carried out to validate the numerical results in a laboratory test feeder by implementing the ITA relay in LabVIEW. Furthermore, a novel control strategy based on fold-back current control is proposed for converter-interfaced DGs to overcome the problems associated with system restoration. The control strategy enables self-extinction of the arc if the fault is a temporary arc fault, and helps the system restore itself if the DG capacity is sufficient to supply the load. Coordination with reclosers without disconnecting the DGs from the network is discussed. This increases reliability in the network by reducing customer outages.
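The abstract does not give the relay characteristic, but the idea of combining voltage and current at the relay location into an inverse-time admittance decision can be sketched as follows; the characteristic shape and every constant here are illustrative assumptions, not the thesis's actual formula:

```python
def trip_time(v, i, y_set, tms=0.1):
    """Toy inverse-time characteristic on measured admittance |I/V|:
    the further the admittance exceeds the pickup y_set, the faster
    the trip. Constants are illustrative only (per-unit quantities)."""
    y = abs(i / v)
    if y <= y_set:
        return float("inf")  # admittance below pickup: restrain
    return tms / (y / y_set - 1.0)

# A close-in fault depresses the local voltage, raising the measured
# admittance and shortening the trip time relative to a remote fault,
# even when the fault current itself is limited by the converters.
assert trip_time(v=0.2, i=5.0, y_set=2.0) < trip_time(v=0.6, i=3.0, y_set=2.0)
assert trip_time(v=1.0, i=1.0, y_set=2.0) == float("inf")
```

Using admittance rather than current alone is what makes the scheme workable when converter-interfaced DGs cap the fault current: the voltage collapse near the fault still moves the measured admittance well above pickup.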


Mixed-use typologies and pedestrian networks are two strategies commonly applied in the design of the contemporary city. These approaches, aimed at the creation of a more sustainable urban environment, have their roots in traditional, pre-industrial towns; they characterize urban form, articulating the tension between private and public realms through a series of typological variations, as well as stimulating commercial activity in the city centre. Arcades, loggias and verandas are just some of the elements that can mediate this tension. Historically they have defined physical and social spaces with a particular character; in the contemporary city these features are applied to deform the urban form and create a porous, dynamic morphology. This paper, comparing case studies from Italy, Japan and Australia, investigates how the design of the transition zone can define hybrid pedestrian networks, where a clear distinction between the public and private realms is no longer applicable. Pedestrians use the city in a dynamic way, combining trajectories on the public street with ones on the fringe or inside of the private built environment. In some cases, cities offer different pedestrian network possibilities at different times, as commercial precincts are subject to variations in accessibility across various timeframes. These walkable systems have an impact on the urban form and identity of places, redefining typologies and requiring in-depth analysis through plan, section and elevation diagrams.


Ethernet is a key component of the standards used for digital process buses in transmission substations, namely IEC 61850 and IEEE Std 1588-2008 (PTPv2). These standards use multicast Ethernet frames that can be processed by more than one device. This presents some significant engineering challenges when implementing a sampled-value process bus, due to the large amount of network traffic. A system of network traffic segregation using a combination of Virtual LAN (VLAN) and multicast address filtering with managed Ethernet switches is presented. This includes VLAN prioritisation of traffic classes such as the IEC 61850 protocols GOOSE, MMS and sampled values (SV), and of other protocols like PTPv2. Multicast address filtering is used to limit SV/GOOSE traffic to defined subsets of subscribers. A method to map substation plant reference designations to multicast address ranges is proposed that enables engineers to determine the type of traffic and the location of the source by inspecting the destination address. This method and the proposed filtering strategy simplify future changes to the prioritisation of network traffic, and are applicable to both process bus and station bus applications.
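The paper's exact mapping scheme is not given in the abstract. One way such an encoding could work, placing the traffic class in the IEC 61850 recommended multicast base ranges and a numeric bay (plant reference) identifier in the low address bytes, is sketched below; the bay-ID encoding is an assumption for illustration:

```python
GOOSE_BASE = 0x010CCD010000  # IEC 61850 recommended GOOSE multicast range
SV_BASE    = 0x010CCD040000  # IEC 61850 recommended sampled-value range

def mac_for(traffic, bay_id):
    """Encode the traffic class in the address prefix and a numeric bay
    identifier in the low bytes, so switches can filter on prefix and
    engineers can read source and type off the destination address."""
    base = {"GOOSE": GOOSE_BASE, "SV": SV_BASE}[traffic]
    addr = base + bay_id
    return "-".join(f"{(addr >> s) & 0xFF:02X}" for s in range(40, -1, -8))

assert mac_for("SV", 0x0012) == "01-0C-CD-04-00-12"
```

A managed switch can then admit, say, only `01-0C-CD-04-*` frames from a given bay's ports, which is the filtering the paper uses to confine SV/GOOSE traffic to its subscribers.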


Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
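The stated trade-off, cubic in the weight-magnitude bound A but only logarithmic in the input dimension n, is easy to see numerically; a small sketch with hypothetical values:

```python
import math

def bound_term(A, n, m):
    """The capacity term A^3 * sqrt(log(n) / m) from the bound above
    (log A and log m factors ignored): it grows polynomially with the
    weight bound A but only logarithmically with input dimension n."""
    return A**3 * math.sqrt(math.log(n) / m)

# Doubling the input dimension n barely moves the bound, while doubling
# the weight bound A scales it by a factor of 8.
base = bound_term(A=2.0, n=1000, m=10000)
assert bound_term(A=4.0, n=1000, m=10000) == 8 * base
assert bound_term(A=2.0, n=2000, m=10000) < 1.1 * base
```

This is the quantitative sense in which the result favours keeping weights small (weight decay, early stopping) over keeping the network small.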


Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
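The core EG update on the simplex-constrained dual is compact; a sketch of a single step, with learning rate and gradient as placeholders rather than values from the paper:

```python
import math

def eg_step(w, grad, eta):
    """One exponentiated-gradient update for minimisation: multiply each
    coordinate by exp(-eta * gradient) and renormalise, so the iterate
    stays on the probability simplex (the dual's constraint set)."""
    u = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(u)
    return [ui / z for ui in u]

w = [0.25, 0.25, 0.25, 0.25]
w = eg_step(w, grad=[1.0, 0.0, 0.0, 0.0], eta=0.5)
assert abs(sum(w) - 1.0) < 1e-12      # still on the simplex
assert w[0] < w[1] == w[2] == w[3]    # mass moves away from high gradient
```

Because the update is multiplicative and then normalised, no projection step is needed to maintain the simplex constraints, which is part of what makes EG convenient for these duals.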


Bistability arises within a wide range of biological systems from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
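The construction described, replacing each deterministic increment of the ODE model by a Poisson random variable with the same mean, can be sketched for a generic toggle switch; the Hill-type rates and all constants are illustrative, not the paper's actual model:

```python
import math
import random

def poisson(lam):
    """Draw a Poisson variate via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def step(u, v, dt=0.05, alpha=50.0, beta=2.0):
    """One stochastic update of a two-gene toggle switch: each
    deterministic increment rate * dt (mutual repression for production,
    linear decay) becomes a Poisson random variable with that mean."""
    du = poisson(alpha / (1.0 + v**beta) * dt) - poisson(u * dt)
    dv = poisson(alpha / (1.0 + u**beta) * dt) - poisson(v * dt)
    return max(u + du, 0), max(v + dv, 0)

random.seed(1)
u, v = 10, 10
for _ in range(500):
    u, v = step(u, v)
assert u >= 0 and v >= 0  # molecule counts remain non-negative integers
```

Running many such trajectories and histogramming the final counts is how bimodal population distributions of the kind the paper reproduces would emerge, while the deterministic ODE would give a single value per initial condition.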