984 results for network simulator
Abstract:
Over the past two decades, the poultry sector in China went through a phase of tremendous growth as well as rapid intensification and concentration. Highly pathogenic avian influenza virus (HPAIV) subtype H5N1 was first detected in 1996 in Guangdong province, South China and started spreading throughout Asia in early 2004. Since then, control of the disease in China has relied heavily on wide-scale preventive vaccination combined with movement control, quarantine and stamping out. This strategy has been successful in drastically reducing the number of outbreaks during the past 5 years. However, HPAIV H5N1 is still circulating and is regularly isolated in traditional live bird markets (LBMs) where viral infection can persist, which represent a public health hazard for people visiting them. The use of social network analysis in combination with epidemiological surveillance in South China has identified areas where the success of current strategies for HPAI control in the poultry production sector may benefit from better knowledge of poultry trading patterns and the LBM network configuration as well as their capacity for maintaining HPAIV H5N1 infection. We produced a set of LBM network maps and estimated the associated risk of HPAIV H5N1 within LBMs and along poultry market chains, providing new insights into how live poultry trade and infection are intertwined. More specifically, our study provides evidence that several biosecurity factors such as daily cage cleaning, daily cage disinfection or manure processing contribute to a reduction in HPAIV H5N1 presence in LBMs. Of significant importance is that the results of our study also show the association between social network indicators and the presence of HPAIV H5N1 in specific network configurations such as the one represented by the counties of origin of the birds traded in LBMs. This new information could be used to develop more targeted and effective control interventions.
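As a hedged illustration of the social network indicators mentioned above, the sketch below computes the number of distinct counties of origin supplying each market in a toy live-bird-market trade network. All market and county names are invented; the study's actual network data and indicator set are not reproduced here.

```python
# Hedged sketch: a simple network indicator (in-degree by distinct origin county)
# for a toy LBM trade network. Names are illustrative only.
from collections import defaultdict

# Directed edges: county of origin -> market where its birds are sold.
trade_edges = [
    ("county_A", "market_1"), ("county_B", "market_1"),
    ("county_C", "market_1"), ("county_A", "market_2"),
]

def in_degree(edges):
    """Number of distinct source counties supplying each market."""
    sources = defaultdict(set)
    for origin, market in edges:
        sources[market].add(origin)
    return {m: len(s) for m, s in sources.items()}

print(in_degree(trade_edges))  # market_1 draws birds from more counties than market_2
```

A market drawing birds from many counties would, under the association reported above, be a candidate for targeted surveillance.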
Abstract:
This thesis examines the impacts of silvicultural activities on the productivity and financial returns of Scots pine (Pinus sylvestris L.) stands on drained peatlands in Finland. The effects of ditch network maintenance operations (DNM) and thinnings, with different timings and intensities, were studied. Based on stand development simulations, the best regimes for different types of stands according to site type, climatic area and stand silvicultural status were defined from the viewpoint of both wood production and financial profitability. Certain aspects affecting the management outcomes, such as the timing of the first thinning, were examined using data from thinning experiments. Long-term predictions of the impacts of different management regimes were carried out by simulating the development of representative model stands composed from appropriate inventory data sets. The simulations were performed with the MOTTI stand simulator, which utilizes specific models for drained peatland stands. In addition to natural stand dynamics, these models describe the effects of silvicultural treatments on the development of a given stand. The mean annual increment of merchantable wood (MAImerch) was used as the measure of wood productivity, and the financial feasibility of the regimes was compared using net present value (NPV) analysis. Silvicultural treatments, when applied so as to match stand conditions appropriately, increased both the productivity and financial returns of stand management. Applying DNM resulted in a small increase in MAImerch. When thinning was introduced along with DNM, their combined effect on wood productivity was considerable. According to current operational practices, DNM is generally combined with thinning. In some cases, e.g. on sites of low productivity, the need for DNM may become apparent prior to the thinning stage. As for profitability, thinnings proved to be crucial.
The regimes with heavy and late thinnings were generally more profitable than those with normal thinnings. Further, early thinning (relative to stand volume) lacked appeal when seeking a financially profitable removal from the first thinning. In young stands with an initially poor silvicultural condition, however, applying even a low-yielding first thinning considerably increased the NPV when compared to a regime with no thinning at all. Generally, the regimes resulting in the best profitability included heavier thinnings and fewer DNM and thinning treatments than the regimes giving the best yields. This study demonstrates considerable potential for profitable wood-production-oriented management in pine stands on drained peatlands despite their challenging circumstances and long rotations. The results can be used for defining new and more site-specific silvicultural guidelines for various types of drained, pine-dominated peatland stands within the entire range of boreal conditions.
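The NPV comparison used above can be sketched as follows. The cash flows, timings and discount rate are invented for illustration; the thesis's actual stand-level figures are not reproduced here.

```python
# Minimal NPV sketch with hypothetical net revenues per year for one
# management regime (DNM cost now, thinning revenue later, final cut at rotation end).
def npv(cash_flows, rate):
    """Net present value of (year, net_revenue) pairs at a given discount rate."""
    return sum(cf / (1.0 + rate) ** year for year, cf in cash_flows)

# Illustrative regime: a DNM cost at year 0, one thinning, one final felling.
regime = [(0, -500.0), (30, 3000.0), (60, 12000.0)]
print(npv(regime, 0.03))
```

Long rotations make the result sensitive to the discount rate, which is why heavier, later thinnings can dominate the profitability ranking.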
Abstract:
In developing countries, demand for electric energy is growing rapidly, so the addition of new generating units becomes necessary. In deregulated power systems, private generating stations are encouraged to add new generation. The appropriate location for a new generator can be found by running repeated power flows and carrying out system studies such as analysing the voltage profile, voltage stability and losses. In this paper a new methodology is proposed which mainly takes the existing network topology into account. A concept of T-index is introduced, which considers the electrical distances between generator and load nodes. This index is used for ranking significant new generation expansion locations and also indicates the amount of permissible generation that can be installed at these new locations. This concept facilitates medium- and long-term planning of power generation expansion within the available transmission corridors. Studies carried out on a sample 7-bus system, an EHV equivalent 24-bus system and the IEEE 39-bus system are presented for illustration.
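The T-index itself is not defined in the abstract, so the sketch below only illustrates one common impedance-based notion of electrical distance between buses, d_ij = |Z_ii + Z_jj - 2 Z_ij| from the bus impedance matrix, and a ranking step built on it. The 3-bus Zbus is made up for illustration; this is an assumption, not the paper's actual index.

```python
import numpy as np

# Hedged sketch: electrical distance from a hypothetical bus impedance matrix.
# d_ij = |Z_ii + Z_jj - 2*Z_ij| is one standard definition; the Zbus is invented.
Zbus = np.array([
    [0.20, 0.08, 0.05],
    [0.08, 0.25, 0.10],
    [0.05, 0.10, 0.30],
])

def electrical_distance(Z, i, j):
    """Impedance-based electrical distance between buses i and j."""
    return abs(Z[i, i] + Z[j, j] - 2 * Z[i, j])

# Rank candidate generator buses (0 and 1) by their distance to a load bus (2):
# electrically closer candidates are better placed to serve that load.
load = 2
ranking = sorted(range(2), key=lambda g: electrical_distance(Zbus, g, load))
print(ranking)
```

An index like the paper's T-index could aggregate such distances over all load nodes to rank expansion locations.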
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. Increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems for automatically monitoring the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it needs experience to be conducted properly, it is labour intensive as an on-farm method and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking in order to detect lameness was developed and set up at the University of Helsinki Research farm Suitia.
The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data was divided into two parts and 5,074 measurements from 37 cows were used to train the model. The operation of the model was evaluated for its ability to detect lameness in the validating dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The number of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
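A probabilistic neural network of the kind mentioned above is essentially a Parzen-window classifier: each training sample contributes a Gaussian kernel, and a new measurement is assigned to the class with the highest summed kernel response. The sketch below illustrates this; the features, values and smoothing parameter are invented, not the thesis's actual data.

```python
import numpy as np

# Hedged PNN sketch: Gaussian kernel per training sample, class with the
# highest average kernel density wins. All numbers below are illustrative.
def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Classify x to the class with the highest summed Gaussian kernel density."""
    scores = {}
    for cls in set(train_y):
        pts = train_X[train_y == cls]
        d2 = np.sum((pts - x) ** 2, axis=1)
        # Average kernel response of this class's pattern units.
        scores[cls] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy data: leg-load asymmetry and kick count as features; 0 = sound, 1 = lame.
X = np.array([[0.1, 0.0], [0.2, 1.0], [0.9, 4.0], [0.8, 5.0]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(np.array([0.85, 4.5]), X, y))  # lands in the lame cluster
```

PNNs need no iterative training, which suits an on-farm system where new milking measurements arrive continuously.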
Abstract:
The recently developed single network adaptive critic (SNAC) design has been used in this study to design a power system stabiliser (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. PSS design is formulated as a discrete non-linear quadratic regulator problem. SNAC is then used to solve the resulting discrete-time optimal control problem. SNAC uses only a single critic neural network instead of the action-critic dual network architecture of typical adaptive critic designs. SNAC eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a single machine infinite bus test system for various system and loading conditions. The proposed stabiliser, which is relatively easier to synthesise, consistently outperformed stabilisers based on conventional lead-lag and linear quadratic regulator designs.
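The SNAC critic network itself is not reproduced here, but the linear quadratic regulator baseline it is compared against can be sketched by iterating the discrete-time Riccati equation to a fixed point. The system matrices below are made up for illustration, not a power-system model.

```python
import numpy as np

# Hedged sketch: discrete-time LQR gain via Riccati value iteration, as a
# stand-in for the LQR baseline stabiliser. A, B, Q, R are illustrative.
def dlqr_gain(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ A - A.T @ P @ B @ K              # Riccati recursion
    return K

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discretised dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K = dlqr_gain(A, B, Q, R)
# The closed loop A - B K should be stable (spectral radius below 1).
```

SNAC replaces this offline Riccati solution with a single critic network that maps states to costates, which is what removes the action-critic training loop.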
Abstract:
Aurizon, Australia's largest rail freight operator, is introducing Static Frequency Converter (SFC) technology into its electric railway network as part of the Bauhinia Electrification Project. The introduction of SFCs has significant implications for the protection systems of the 50 kV traction network. The traditional distance protection calculation method does not work in this configuration, because of the effect that the SFC in combination with the remote grid has on the apparent impedance, and was substantially reviewed. The standard overcurrent (OC) protection scheme is not suitable, because the minimum fault level lies below the maximum load level, and was revised to incorporate directionality and an under-voltage inhibit. Delta protection was reviewed to improve sensitivity. A new protection function was introduced to prevent back-feeding of faults in the transmission network through the grid connection. Protection inter-tripping was included to ensure selectivity between the SFC protection and the system downstream.
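The revised OC logic can be sketched as a simple decision function: trip only when current exceeds pickup, the flow is in the protected direction, and the voltage has collapsed below an inhibit threshold (so heavy load at healthy voltage is not mistaken for a fault). All settings below are invented, not the project's actual relay settings.

```python
# Hedged sketch of directional overcurrent with under-voltage inhibit for a
# 50 kV traction feeder. Pickup and inhibit thresholds are illustrative only.
def oc_trip(current_a, voltage_kv, forward, pickup_a=600.0, uv_inhibit_kv=40.0):
    if not forward:                  # directionality: ignore reverse-flow faults
        return False
    if voltage_kv >= uv_inhibit_kv:  # healthy voltage: treat as load, not fault
        return False
    return current_a > pickup_a

print(oc_trip(800.0, 20.0, True))   # fault: high current at collapsed voltage
print(oc_trip(800.0, 49.0, True))   # heavy load at healthy voltage: no trip
```

The under-voltage check is what resolves the overlap between minimum fault level and maximum load level described above.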
Abstract:
Distributed renewable energy has become a significant contender in the supply of power in the distribution network in Queensland and throughout the world. As the cost of battery storage falls, distribution utilities turn their attention to the impacts of battery storage and other storage technologies on the low voltage (LV) network. With access to detailed residential energy usage data, Energex's available residential tariffs are investigated for their effectiveness in providing customers with financial incentives to move to Time-of-Use based tariffs and to reward use of battery storage.
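The kind of tariff comparison described above can be sketched as a daily bill calculation under a flat tariff versus a Time-of-Use tariff, with and without battery-shifted load. The prices and load profile below are invented for illustration and are not Energex's actual tariff rates.

```python
# Hedged sketch: flat vs Time-of-Use daily bill for one household.
# Hourly prices ($/kWh) and loads (kWh) are illustrative assumptions.
def bill(load_kwh, price_per_hour):
    return sum(l * p for l, p in zip(load_kwh, price_per_hour))

flat = [0.25] * 24                            # same price all day
tou = [0.15] * 7 + [0.30] * 14 + [0.15] * 3   # cheap nights, expensive daytime
load = [0.5] * 7 + [1.0] * 14 + [0.5] * 3     # evening/daytime-heavy profile

# Same total energy, but a battery charges off-peak and discharges at peak.
shifted = [1.48] * 7 + [0.3] * 14 + [1.48] * 3
print(bill(load, flat), bill(load, tou), bill(shifted, tou))
```

Whether the ToU saving from shifting outweighs battery cost is exactly the incentive question the study investigates.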
Abstract:
The relationship for the relaxation time(s) of a chemical reaction in terms of concentrations and rate constants has been derived from the network thermodynamic approach developed by Oster, Perelson, and Katchalsky. Generally, it is necessary to draw the bond graph and the “network analogue” of the reaction scheme, followed by loop or nodal analysis of the network and finally solving of the resulting differential equations. In the case of single-step reactions, however, it is possible to obtain an expression for the relaxation time directly. This approach is simpler and more elegant and has certain advantages over the usual kinetic method. The method has been illustrated by taking different reaction schemes as examples.
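For orientation, the classical single-step result that such a derivation must reproduce is, for a bimolecular association near equilibrium (bars denoting equilibrium concentrations):

```latex
% Classical relaxation-time expression for a single-step reaction;
% any network-thermodynamic derivation should recover this limit.
\[
  \mathrm{A} + \mathrm{B}
  \underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}} \mathrm{C},
  \qquad
  \frac{1}{\tau} = k_{1}\left(\bar{c}_{\mathrm{A}} + \bar{c}_{\mathrm{B}}\right) + k_{-1}
\]
```

The network approach obtains such expressions from the bond-graph analogue rather than by linearising the rate equations directly.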
Abstract:
Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices from all around the network. The data is monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements decision-making sets for knowledge discovery and data mining tools and methods, and I study resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
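The frequent-set idea behind the two methods can be sketched as follows: combinations of field values that recur often in the log are counted, and the frequent ones can then summarise many lines at once. The log entries and threshold below are invented; this is the general frequent-set technique, not the thesis's exact algorithms.

```python
# Hedged sketch of frequent-set mining over structured log entries, in the
# spirit of log summarisation. Entries and min_support are illustrative.
from collections import Counter
from itertools import combinations

logs = [
    {"host": "bsc01", "severity": "minor", "alarm": "link_down"},
    {"host": "bsc01", "severity": "minor", "alarm": "link_down"},
    {"host": "bsc01", "severity": "minor", "alarm": "sync_loss"},
    {"host": "bsc02", "severity": "major", "alarm": "link_down"},
]

def frequent_sets(entries, min_support=2):
    """Count every combination of (field, value) pairs; keep the frequent ones."""
    counts = Counter()
    for e in entries:
        items = sorted(e.items())
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}

freq = frequent_sets(logs)
# The most specific frequent sets can stand in for the repeated log lines.
```

Reporting each frequent set once, with only the varying fields per line, is what yields the compression.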
Abstract:
This doctoral dissertation introduces an algorithm for constructing the most probable Bayesian network from data for small domains. The algorithm is used to show that a popular goodness criterion for the Bayesian networks has a severe sensitivity problem. The dissertation then proposes an information theoretic criterion that avoids the problem.
Location of concentrators in a computer communication network: a stochastic automaton search method
Abstract:
The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. This then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated.
The probabilities are taken to be inversely proportional to the average cost of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. Then, by conducting a local gradient search within that state, the exact locations of the concentrators are determined. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
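The search loop described above can be sketched in one dimension for brevity: states are intervals, the automaton samples a point in a chosen state, updates that state's running average cost, and re-weights state probabilities inversely to average cost. The cost function below is an invented multimodal one, not a concentrator-location cost.

```python
import random

# Hedged 1-D toy of the stochastic-automaton search; the real problem's states
# are concentrator-to-region assignments, and a local gradient search follows.
def cost(x):
    return (x - 7.3) ** 2 + 2.0 * abs(((x * 1.7) % 3) - 1.5)  # multimodal

def automaton_search(lo=0.0, hi=10.0, n_states=5, iters=3000, seed=1):
    random.seed(seed)
    width = (hi - lo) / n_states
    avg, visits = [1.0] * n_states, [0] * n_states
    probs = [1.0 / n_states] * n_states
    for _ in range(iters):
        s = random.choices(range(n_states), weights=probs)[0]
        x = lo + (s + random.random()) * width       # uniform point in state s
        visits[s] += 1
        avg[s] += (cost(x) - avg[s]) / visits[s]     # running average cost
        inv = [1.0 / a for a in avg]
        probs = [v / sum(inv) for v in inv]          # inverse-cost weighting
    return max(range(n_states), key=lambda s: probs[s])

best = automaton_search()  # the state whose interval contains the global minimum
```

Once the probabilities concentrate on one state, a local gradient search inside that interval would refine the exact location.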
Abstract:
Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints, in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition systems. Unlike general-purpose "packet capture" tools it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions.
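The timing-faithful replay principle described above can be sketched in software: each packet is re-sent at its original offset from the first packet. A real tool enforces this in hardware; this sketch only illustrates the idea, and `send` is a stand-in for whatever transmit function is used.

```python
import time

# Hedged sketch: replay packets preserving original inter-arrival gaps.
def replay(capture, send):
    """capture: list of (timestamp_seconds, payload_bytes), in capture order."""
    start = time.monotonic()
    t0 = capture[0][0]
    for ts, payload in capture:
        # Sleep until this packet's offset from the first packet has elapsed.
        delay = (ts - t0) - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send(payload)

sent = []
replay([(0.00, b"poll"), (0.05, b"ack"), (0.10, b"data")], sent.append)
```

Software sleeps jitter at the millisecond scale, which is exactly why a special-purpose network interface card is needed to match SCADA timing constraints.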
Abstract:
The Distributed Network Protocol v3.0 (DNP3) is one of the most widely used protocols to control national infrastructure. The move from point-to-point serial connections to Ethernet-based network architectures allows for large and complex critical infrastructure networks. However, networks and configurations change, so auditing tools are needed to aid in critical infrastructure network discovery. In this paper we present a series of intrusive techniques used for reconnaissance on DNP3 critical infrastructure. Our algorithms will discover DNP3 outstation slaves along with their DNP3 addresses, their corresponding master, and class object configurations. To validate our presented DNP3 reconnaissance algorithms and demonstrate their practicality, we present an implementation of a software tool using a DNP3 plug-in for Scapy. Our implementation validates the utility of our DNP3 reconnaissance technique. Our presented techniques will be useful for penetration testing, vulnerability assessments and DNP3 network discovery.
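As a hedged sketch of how an address sweep might construct probes, the code below builds DNP3 link-layer headers: the 0x05 0x64 start bytes and little-endian 16-bit destination/source addresses follow the DNP3 link-frame layout. The header CRC is deliberately omitted (a real frame appends a CRC-16/DNP over these bytes), the control byte is illustrative, and no Scapy plug-in API is assumed.

```python
import struct

# Hedged sketch: DNP3 link-layer header construction for an address sweep.
# Start bytes 0x05 0x64, then length, control, dest (LE16), src (LE16).
# The trailing CRC-16/DNP is omitted here; control=0xC4 is illustrative.
def link_header(dest, src, control=0xC4, length=5):
    """8-byte DNP3 link header (without the CRC a real frame would append)."""
    return struct.pack("<BBBBHH", 0x05, 0x64, length, control, dest, src)

# Sweep candidate outstation addresses from one hypothetical master address.
probes = [link_header(dest=d, src=1) for d in range(4)]
```

Responses to such probes are what reveal live outstation addresses during discovery.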
Abstract:
Amateurs are found in arts, sports, or entertainment, where they are linked with professional counterparts and inspired by celebrities. Despite the growing number of CSCW studies in amateur and professional domains, little is known about how technologies facilitate collaboration between these groups. Drawing from a 1.5-year field study in the domain of bodybuilding, this paper describes the collaboration between and within amateurs, professionals, and celebrities on social network sites. Social network sites help individuals to improve their performance in competitions, extend their support network, and gain recognition for their achievements. The findings show that amateurs benefit the most from online collaboration, whereas collaboration shifts from social network sites to offline settings as individuals develop further in their professional careers. This shift from online to offline settings constitutes a novel finding, which extends previous work on social network sites that has looked at groups of amateurs and professionals in isolation. As a contribution to practice, we highlight design factors that address this shift to offline settings and foster collaboration between and within groups.