963 results for NETWORK REDUCTION
Abstract:
Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to rely predominantly on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
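As a hedged illustration of the time-based header statistics this review describes, the Python sketch below counts the distinct destination ports each source IP touches within a sliding time window, a simple indicator of scanning behaviour. The window length, threshold, and packet-tuple format are illustrative assumptions, not taken from any reviewed system.

from collections import deque

WINDOW_SECONDS = 2.0   # illustrative window length
PORT_THRESHOLD = 20    # illustrative scan threshold

def scan_alerts(packets):
    """packets: time-ordered iterable of (timestamp, src_ip, dst_port) tuples."""
    window = deque()   # packets inside the current time window
    alerts = set()
    for ts, src, dport in packets:
        window.append((ts, src, dport))
        # evict packets that have fallen out of the window
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        # distinct destination ports this source touched within the window
        n_ports = len({p for _, s, p in window if s == src})
        if n_ports > PORT_THRESHOLD:
            alerts.add(src)
    return alerts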
Abstract:
There is worldwide interest in reducing aircraft emissions. The difficulty of reducing emissions, including water vapour, carbon dioxide (CO2) and oxides of nitrogen (NOx), stems mainly from the fact that a commercial aircraft is usually designed for a particular optimal cruise altitude but may be requested or required to operate at different altitudes and speeds to achieve a desired or commanded flight plan, resulting in increased emissions. This is a multi-disciplinary problem with multiple trade-offs, such as optimising engine efficiency and minimising fuel burn and emissions while maintaining aircraft separation and air safety. This project presents the coupling of an advanced optimisation technique with mathematical models and algorithms for aircraft emission reduction through flight optimisation. Numerical results show that the method is able to capture a set of useful trade-offs between aircraft range and NOx, and between mission fuel consumption and NOx. In addition, alternative cruise operating conditions, including Mach number and altitude, that produce minimum NOx and minimum CO2 (minimum mission fuel weight) are suggested.
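As a hedged sketch of how such trade-offs can be captured, the Python fragment below extracts the Pareto-optimal (non-dominated) points from a set of candidate cruise conditions scored on two objectives. The objective values are random placeholders, not outputs of the models described above.

import numpy as np

def pareto_front(objectives):
    """objectives: (n, 2) array of values to minimise; returns a mask of non-dominated rows."""
    keep = np.ones(len(objectives), dtype=bool)
    for i in range(len(objectives)):
        # row i is dominated if some other row is no worse in both
        # objectives and strictly better in at least one
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        keep[i] = not dominates_i.any()
    return keep

rng = np.random.default_rng(2)
fuel_nox = rng.uniform(size=(50, 2))   # placeholder (mission fuel, NOx) scores
front = fuel_nox[pareto_front(fuel_nox)]
print(front)   # candidate cruise settings offering the best fuel/NOx trade-offs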
Abstract:
Graphene, functionalized with oleylamine (OA) and soluble in non-polar organic solvents, was produced on a large scale with a high yield by combining the Hummers process for graphite oxidation, an amine-coupling process to make OA-functionalized graphite oxide (OA-GO), and a novel reduction process using trioctylphosphine (TOP). TOP acts as both a reducing agent and an aggregation-prevention surfactant in the reduction of OA-GO in 1,2-dichlorobenzene (DCB). The reduction of OA-GO is confirmed by X-ray photoelectron spectroscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, thermogravimetric analysis, and Raman spectroscopy. The exfoliation of GO, OA-GO, and OA-functionalized graphene (OA-G) is verified by atomic force microscopy. The conductivity of TOP-reduced OA-G, which is deduced from the current–voltage characteristics of a vacuum-filtered thin film, shows that the reduction of functionalized GO by TOP is as effective as the reduction of GO by hydrazine.
Abstract:
Almost all metapopulation modelling assumes that connectivity between patches is only a function of distance, and is therefore symmetric. However, connectivity will not depend only on the distance between the patches, as some paths are easy to traverse, while others are difficult. When colonising organisms interact with the heterogeneous landscape between patches, connectivity patterns will invariably be asymmetric. There have been few attempts to theoretically assess the effects of asymmetric connectivity patterns on the dynamics of metapopulations. In this paper, we use the framework of complex networks to investigate whether metapopulation dynamics can be determined by directly analysing the asymmetric connectivity patterns that link the patches. Our analyses focus on “patch occupancy” metapopulation models, which only consider whether a patch is occupied or not. We propose three easily calculated network metrics: the “asymmetry” and “average path strength” of the connectivity pattern, and the “centrality” of each patch. Together, these metrics can be used to predict the length of time a metapopulation is expected to persist, and the relative contribution of each patch to a metapopulation’s viability. Our results clearly demonstrate the negative effect that asymmetry has on metapopulation persistence. Complex network analyses represent a useful new tool for understanding the dynamics of species existing in fragmented landscapes, particularly those in large metapopulations.
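The abstract names three metrics but not their formulas; the Python sketch below gives simplified, assumed interpretations of two of them ("asymmetry" and "centrality") computed from a weighted adjacency matrix C, where C[i, j] is the colonisation strength from patch i to patch j. It is meant only to make the idea concrete, not to reproduce the paper's definitions.

import numpy as np

def asymmetry(C):
    """Mean normalised difference between forward and reverse link strengths."""
    num = np.abs(C - C.T)
    den = C + C.T
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(den > 0, num / den, 0.0)
    upper = np.triu_indices_from(C, k=1)   # each patch pair counted once
    return ratio[upper].mean()

def centrality(C):
    """Per-patch total connection strength (in + out), one simple proxy."""
    return C.sum(axis=0) + C.sum(axis=1)

# Example: three patches with one strongly one-way link.
C = np.array([[0.0, 0.8, 0.1],
              [0.1, 0.0, 0.5],
              [0.1, 0.4, 0.0]])
print(asymmetry(C))    # higher values indicate more one-way connectivity
print(centrality(C))   # rough proxy for each patch's relative contribution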
Abstract:
The calibration process in micro-simulation is extremely complicated. The difficulties are more prevalent if the process encompasses fitting both aggregate and disaggregate parameters, e.g. travel time and headway. Current practice in calibration operates mostly at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulated calibration at the disaggregate level. This study focuses on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and the management of variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane is used as a case study. The discussion primarily covers the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to calibrating for the whole network, are also briefly addressed to examine critical issues such as the suitability of local calibration at the global scale. The paper will be of interest to transport professionals in Australia and New Zealand, where micro-simulation, in particular at the point level, is still a comparatively less explored territory in motorway management.
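As a hedged illustration of a disaggregate-level check of the kind described above, the Python snippet below compares simulated and observed vehicle headways at a single motorway point with a two-sample Kolmogorov-Smirnov test. The data are synthetic placeholders and the test is one common choice, not necessarily the procedure used in this study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observed_headways = rng.exponential(scale=2.0, size=500)    # placeholder field data
simulated_headways = rng.exponential(scale=2.3, size=500)   # placeholder model output

stat, p_value = stats.ks_2samp(observed_headways, simulated_headways)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small statistic (large p) suggests the simulated headway distribution
# matches the observations; otherwise driving-behaviour parameters are
# adjusted and the simulation re-run.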
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using the finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
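A minimal sketch of the data-reduction step described above, assuming synthetic data: high-dimensional FRF measurements are compressed with PCA before being fed to an ANN classifier. The array sizes, component count, and network architecture are illustrative, not the paper's values.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Placeholder data: 200 damage scenarios x 1024 FRF points each.
frf = rng.normal(size=(200, 1024))
labels = rng.integers(0, 4, size=200)   # placeholder damage locations

pca = PCA(n_components=10)              # keep 10 principal components
features = pca.fit_transform(frf)       # reduced inputs for the network

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
ann.fit(features, labels)               # train on the reduced features
print(ann.score(features, labels))      # accuracy on the placeholder data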
Abstract:
Innovation is vital for the future of Australia's internet economy. Innovation relies on businesses' ability to innovate, and businesses' ability to innovate relies on their employees. The more these individual end users engage in the internet economy, the better businesses' engagement will be. The less these individual end users engage, the less likely a business is to engage and innovate. This means that, for the internet economy to function at its fullest potential, it is essential that individual Australians have the capacity to engage with it and participate in it. The Australian federal government is working to facilitate the internet economy through policies, legislation and practices that implement high-speed broadband. The National Broadband Network will be a vital tool for Australia's internet economy. Its 'chief importance' is that it will provide faster internet access speeds that will facilitate access to internet services and content. However, an appropriate infrastructure and internet speed is only part of the picture. As the Organisation for Economic Co-operation and Development has identified, appropriate government policies are also needed to ensure that vital services are more accessible to consumers. The thesis identifies essential theories and principles underpinning the internet economy, from which the concept of connectedness is developed. Connectedness is defined as the ability of end users to connect with internet content and services, other individuals and organisations, and government; that is, their ability to operate in the internet economy. The NBN will be vital in ensuring connectedness into the future. What is not currently addressed by existing access regimes is how to facilitate end user access capacity and participation. The thesis concludes by making recommendations to the federal government as to what the governing principles of the Australian internet economy should include in order to enable individual end user access capacity.
Abstract:
A crucial contemporary policy question for governments across the globe is how to cope with international crime and terrorist networks. Many such “dark” networks—that is, networks that operate covertly and illegally—display a remarkable level of resilience when faced with shocks and attacks. Based on an in-depth study of three cases (MK, the armed wing of the African National Congress in South Africa during apartheid; FARC, the Marxist guerrilla movement in Colombia; and the Liberation Tigers of Tamil Eelam, LTTE, in Sri Lanka), we present a set of propositions to outline how shocks impact dark network characteristics (resources and legitimacy) and networked capabilities (replacing actors, linkages, balancing integration and differentiation) and how these in turn affect a dark network's resilience over time. We discuss the implications of our findings for policymakers.
Abstract:
This paper presents a “research frame” which we have found useful in analysing complex socio-technical situations. The research frame is based on aspects of actor-network theory: “interessement”, “enrollment”, “points of passage” and the “trial of strength”. Each of these aspects is described in turn, making clear its purpose in the overall research frame. Having established the research frame, it is used to analyse two examples. First, the use of speech recognition technology is examined in two different contexts, showing how to apply the frame to compare and contrast current situations. Next, a current medical consultation context is described and the research frame is used to consider how it could change with innovative technology. In both examples, the research frame shows that the use of an artefact or technology must be considered together with the context in which it is used.
Abstract:
Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation processing scheme, based on an affine motion model, to reduce speckle in optical coherence tomography imaging through image registration and the averaging of multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio equal to the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
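A hedged Python sketch of the register-and-average idea follows, with OpenCV's ECC affine registration standing in for the paper's hierarchical motion estimator: repeated B-scans are aligned to a reference and averaged, so uncorrelated speckle is suppressed by roughly the square root of the number of images.

import cv2
import numpy as np

def register_and_average(bscans):
    """bscans: list of float32 single-channel B-scans of identical shape."""
    ref = bscans[0]
    accum = ref.astype(np.float32).copy()
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    for img in bscans[1:]:
        warp = np.eye(2, 3, dtype=np.float32)   # initial affine guess
        _, warp = cv2.findTransformECC(ref, img, warp, cv2.MOTION_AFFINE,
                                       criteria, None, 5)
        aligned = cv2.warpAffine(img, warp, ref.shape[::-1],
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        accum += aligned
    return accum / len(bscans)   # speckle-reduced average B-scan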