508 results for SURVEILLANCE NETWORK TRANSNET


Relevance: 20.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
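
As a rough illustration of how this bound behaves, the sketch below evaluates the A³√((log n)/m) term in plain Python. The values of A, n and m are illustrative only and do not come from the paper.

```python
import math

def bound_term(A: float, n: int, m: int) -> float:
    """Size-of-weights term A^3 * sqrt(log(n) / m) from the abstract,
    ignoring the log A and log m factors as the authors do."""
    return A**3 * math.sqrt(math.log(n) / m)

# Illustrative values (not from the paper): input dimension n = 100,
# per-unit weight-magnitude bound A = 2.
for m in (1_000, 10_000, 100_000):
    print(m, round(bound_term(A=2.0, n=100, m=m), 4))

# The term shrinks as 1/sqrt(m) with more training patterns and grows
# cubically with the weight-magnitude bound A, which is why keeping
# weights small (weight decay, early stopping) helps generalization.
```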

Relevance: 20.00%

Abstract:

This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.

Relevance: 20.00%

Abstract:

In fault detection and diagnostics, limitations arising from the sensor network architecture are one of the main challenges in evaluating a system's health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors such as controls, financial constraints, and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components. This can significantly increase the complexity of diagnostic problems. In this paper, a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian-network-based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
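
A minimal sketch of the idea, assuming the pgmpy library; the component and sensor names and all probabilities are hypothetical placeholders, not the paper's actual HVAC model. Two latent component-health variables share a single sensor reading:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two latent component states (Fan, Coil) observed only through one
# shared sensor reading -- the situation the paper describes.
model = BayesianNetwork([("Fan", "Sensor"), ("Coil", "Sensor")])

cpd_fan = TabularCPD("Fan", 2, [[0.95], [0.05]])    # P(faulty) = 0.05, hypothetical
cpd_coil = TabularCPD("Coil", 2, [[0.90], [0.10]])  # P(faulty) = 0.10, hypothetical
# P(Sensor abnormal | Fan, Coil): an abnormal reading is likely if either is faulty.
cpd_sensor = TabularCPD(
    "Sensor", 2,
    [[0.98, 0.20, 0.25, 0.05],   # Sensor = normal
     [0.02, 0.80, 0.75, 0.95]],  # Sensor = abnormal
    evidence=["Fan", "Coil"], evidence_card=[2, 2],
)
model.add_cpds(cpd_fan, cpd_coil, cpd_sensor)
assert model.check_model()

# Given one abnormal reading, infer the posterior fault probabilities
# for both components that share the sensor.
posterior = VariableElimination(model).query(["Fan", "Coil"],
                                             evidence={"Sensor": 1})
print(posterior)
```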

Relevance: 20.00%

Abstract:

This paper presents a group maintenance scheduling case study for a water distribution network. The network presents the challenge of maintaining aging pipelines with the associated increases in annual maintenance costs. The case study focuses on developing an effective maintenance plan for the water utility. Current replacement planning is difficult as it needs to balance replacement needs against limited budgets. A Maintenance Grouping Optimization (MGO) model based on a modified genetic algorithm was utilized to develop an optimum group maintenance schedule over a 20-year cycle. The adjacent geographical distribution of pipelines was used as a grouping criterion to constrain the search space of the MGO model through a Judgment Matrix. Under the optimum group maintenance schedule, the total cost was effectively reduced compared with schedules that do not group maintenance jobs. This optimum result can be used as guidance for optimizing the water utility's current maintenance plan.
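
The MGO model itself is not reproduced in the abstract. The stand-alone sketch below only illustrates the general idea of grouping adjacent pipeline jobs to share setup cost with a plain genetic algorithm; the adjacency (judgment) matrix, costs, and sizes are all hypothetical.

```python
import random

random.seed(0)

N_PIPES, N_GROUPS, SETUP = 12, 4, 10.0  # hypothetical sizes and setup cost
repair = [random.uniform(5, 15) for _ in range(N_PIPES)]
# Hypothetical adjacency ("judgment") matrix: 1 = geographically adjacent.
adj = [[1 if abs(i - j) <= 2 else 0 for j in range(N_PIPES)]
       for i in range(N_PIPES)]

def cost(assign):
    """One shared setup per non-empty group plus repair costs, with a
    penalty whenever pipes grouped together are not adjacent."""
    total = 0.0
    for g in range(N_GROUPS):
        members = [i for i, a in enumerate(assign) if a == g]
        if not members:
            continue
        total += SETUP + sum(repair[i] for i in members)
        total += sum(100.0 for i in members for j in members
                     if i < j and not adj[i][j])
    return total

def evolve(pop_size=60, gens=200):
    pop = [[random.randrange(N_GROUPS) for _ in range(N_PIPES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_PIPES)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # mutation
                child[random.randrange(N_PIPES)] = random.randrange(N_GROUPS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("grouping:", best, "cost:", round(cost(best), 1))
```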

Relevance: 20.00%

Abstract:

Characteristics of surveillance video generally include low resolution and poor quality due to environmental, storage and processing limitations. It is extremely difficult for computers and human operators to identify individuals from these videos. To overcome this problem, super-resolution can be used in conjunction with an automated face recognition system to enhance the spatial resolution of video frames containing the subject and narrow down the number of manual verifications performed by the human operator by presenting a list of the most likely candidates from the database. As the super-resolution reconstruction process is ill-posed, visual artifacts are often generated as a result. These artifacts can be visually distracting to humans and/or affect machine recognition algorithms. While it is intuitive that higher resolution should lead to improved recognition accuracy, the effects of super-resolution and such artifacts on face recognition performance have not been systematically studied. This paper aims to address this gap while illustrating that super-resolution allows more accurate identification of individuals from low-resolution surveillance footage. The proposed optical flow-based super-resolution method is benchmarked against Baker et al.'s hallucination and Schultz et al.'s super-resolution techniques on images from the Terrascope and XM2VTS databases. Ground truth and interpolated images were also tested to provide a baseline for comparison. Results show that a suitable super-resolution system can improve the discriminability of surveillance video and enhance face recognition accuracy. The experiments also show that Schultz et al.'s method fails when dealing with surveillance footage due to its assumption of rigid objects in the scene. The hallucination and optical flow-based methods performed comparably, with the optical flow-based method producing fewer of the visually distracting artifacts that interfere with human recognition.
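
The paper's method is not specified beyond "optical flow-based". The sketch below, assuming OpenCV and NumPy, shows only the generic shift-and-add idea behind flow-based multi-frame super-resolution: register neighbouring frames onto a reference via dense optical flow, then average. It is not the paper's actual algorithm.

```python
import cv2
import numpy as np

def flow_super_resolve(frames, scale=2):
    """Crude multi-frame super-resolution: upsample each grayscale frame,
    warp it onto the middle (reference) frame with dense optical flow,
    and average the aligned frames. Illustrative only."""
    up = [cv2.resize(f, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC) for f in frames]
    ref = up[len(up) // 2]
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    acc = ref.astype(np.float32)
    for f in up:
        if f is ref:
            continue
        # Dense flow from the reference to this frame...
        flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # ...then pull this frame's pixels back into the reference grid.
        warped = cv2.remap(f, gx + flow[..., 0], gy + flow[..., 1],
                           cv2.INTER_LINEAR)
        acc += warped
    return (acc / len(up)).astype(np.uint8)

# Usage: pass a short list of consecutive grayscale frames (np.uint8 arrays)
# cropped around the face region; the middle frame is used as the reference.
```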

Relevance: 20.00%

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
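
As a toy illustration of the header-derived, time-based statistics the review describes, the sketch below flags sources that contact many distinct destination ports within a sliding window, a classic scan indicator. The packet tuples, window size, and threshold are hypothetical.

```python
from collections import defaultdict

WINDOW = 5.0         # seconds; hypothetical detection window
SCAN_THRESHOLD = 10  # distinct (host, port) targets per source per window

def scan_alerts(packets):
    """packets: iterable of (timestamp, src_ip, dst_ip, dst_port) header
    tuples sorted by timestamp. Yields (timestamp, src_ip, n_targets)
    when a source touches too many distinct targets inside the window."""
    recent = []  # packets inside the current window
    for ts, src, dst, dport in packets:
        recent.append((ts, src, dst, dport))
        recent = [p for p in recent if ts - p[0] <= WINDOW]
        targets = defaultdict(set)
        for _, s, d, p in recent:
            targets[s].add((d, p))
        if len(targets[src]) >= SCAN_THRESHOLD:
            yield ts, src, len(targets[src])

# Hypothetical trace: one source sweeping ports 1..15 on a single host.
trace = [(0.1 * i, "10.0.0.9", "10.0.0.1", port)
         for i, port in enumerate(range(1, 16))]
for alert in scan_alerts(trace):
    print("possible scan:", alert)
```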

Relevance: 20.00%

Abstract:

Almost all metapopulation modelling assumes that connectivity between patches is only a function of distance, and is therefore symmetric. However, connectivity will not depend only on the distance between the patches, as some paths are easy to traverse, while others are difficult. When colonising organisms interact with the heterogeneous landscape between patches, connectivity patterns will invariably be asymmetric. There have been few attempts to theoretically assess the effects of asymmetric connectivity patterns on the dynamics of metapopulations. In this paper, we use the framework of complex networks to investigate whether metapopulation dynamics can be determined by directly analysing the asymmetric connectivity patterns that link the patches. Our analyses focus on “patch occupancy” metapopulation models, which only consider whether a patch is occupied or not. We propose three easily calculated network metrics: the “asymmetry” and “average path strength” of the connectivity pattern, and the “centrality” of each patch. Together, these metrics can be used to predict the length of time a metapopulation is expected to persist, and the relative contribution of each patch to a metapopulation’s viability. Our results clearly demonstrate the negative effect that asymmetry has on metapopulation persistence. Complex network analyses represent a useful new tool for understanding the dynamics of species existing in fragmented landscapes, particularly those that form large metapopulations.
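
The authors' exact metric definitions are not given in the abstract. The NumPy sketch below, with a hypothetical weighted connectivity matrix, illustrates one natural way to quantify how far a connectivity pattern departs from symmetry and to score patch importance; both formulas are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

# Hypothetical colonisation-probability matrix: C[i, j] is the chance
# that patch i colonises patch j. Asymmetric because the landscape
# between patches is easier to cross in one direction than the other.
C = np.array([[0.0, 0.6, 0.1],
              [0.2, 0.0, 0.5],
              [0.4, 0.1, 0.0]])

# One simple asymmetry measure (an assumption, not the paper's formula):
# the relative magnitude of the antisymmetric part of C.
asymmetry = np.linalg.norm(C - C.T) / np.linalg.norm(C + C.T)

# A crude patch-importance score: total colonisation flow in and out.
centrality = C.sum(axis=0) + C.sum(axis=1)

print("asymmetry:", round(float(asymmetry), 3))
print("patch centrality:", centrality)
```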

Relevance: 20.00%

Abstract:

The calibration process in micro-simulation is extremely complicated. The difficulties are more prevalent if the process encompasses fitting aggregate and disaggregate parameters, e.g. travel time and headway. Current practice in calibration operates mostly at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulation calibration at the disaggregate level. This study focuses on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and the management of variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane is used as a case study. The discussion primarily covers the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving-behaviour parameters) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to calibrating for the whole network, are also briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, in particular at point level, is still comparatively unexplored territory in motorway management.
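
One widely used acceptance test when comparing simulated and observed flows at specific count sites is the GEH statistic. The sketch below applies it to hypothetical hourly flows; the paper's own indicators also include speed, lane distribution, and headway, which GEH does not cover.

```python
import math

def geh(simulated: float, observed: float) -> float:
    """GEH statistic for hourly flows; values below about 5 are commonly
    taken to indicate an acceptable fit at a count site."""
    return math.sqrt(2 * (simulated - observed) ** 2 / (simulated + observed))

# Hypothetical detector flows (veh/h) at three motorway points.
for sim, obs in [(1850, 1900), (2400, 2100), (950, 1010)]:
    print(sim, obs, "GEH =", round(geh(sim, obs), 2))
```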

Relevance: 20.00%

Abstract:

This report provides an evaluation of the currently available evidence base for identification and surveillance of product-related injuries in children in Queensland. While the focal population was children in Queensland, the identification of information needs and data sources for product safety surveillance has applicability nationally and for all age groups. The report first summarises the data needs of product safety regulators regarding product-related injury in children, describing the current sources of information informing product safety policy and practice, and documenting the priority product surveillance areas affecting children which have been a focus over recent years in Queensland. Health data sources in Queensland which have the potential to inform product safety surveillance initiatives were evaluated in terms of their ability to address the information needs of product safety regulators. Patterns in product-related injuries in children were analysed using routinely available health data to identify areas for future intervention, and the patterns identified in health data were compared with those identified by product safety regulators. Recommendations were made for information system improvements and for improved access to and utilisation of health data to support more proactive approaches to product safety surveillance in the future.

Relevance: 20.00%

Abstract:

Innovation is vital for the future of Australia's internet economy. Innovation relies on businesses' ability to innovate, and businesses' ability to innovate relies on their employees. The more these individual end users engage in the internet economy, the better businesses' engagement will be. The less these individual end users engage, the less likely a business is to engage and innovate. This means that, for the internet economy to function at its fullest potential, it is essential that individual Australians have the capacity to engage with it and participate in it. The Australian federal government is working to facilitate the internet economy through policies, legislation and practices that implement high-speed broadband. The National Broadband Network will be a vital tool for Australia's internet economy. Its 'chief importance' is that it will provide faster internet access speeds that will facilitate access to internet services and content. However, appropriate infrastructure and internet speed are only part of the picture. As the Organisation for Economic Co-operation and Development has identified, appropriate government policies are also needed to ensure that vital services are more accessible to consumers. The thesis identifies the essential theories and principles underpinning the internet economy, from which the concept of connectedness is developed. Connectedness is defined as the ability of end users to connect with internet content and services, with other individuals and organisations, and with government; that is, their ability to operate in the internet economy. The NBN will be vital in ensuring connectedness into the future. What is not currently addressed by existing access regimes is how to facilitate end user access capacity and participation. The thesis concludes by making recommendations to the federal government as to what the governing principles of the Australian internet economy should include in order to enable individual end user access capacity.

Relevance: 20.00%

Abstract:

As the development of ICD-11 progresses, the Australian Bureau of Statistics is beginning to consider what will be required to successfully implement the new version of the classification. This paper will present early thoughts on the following: building understanding amongst the user community of upcoming changes and the implications of those changes; the need for training of coders and data users; development of analytical methods and conduct of comparability studies; processes to test, accept and implement new or updated coding software; assessment of coding quality; changes to data analyses and reporting processes; updates to regular publications; and assessment of the resources required for successful implementation.