935 results for Multistage Transmission Network
Abstract:
The relationship for the relaxation time(s) of a chemical reaction in terms of concentrations and rate constants has been derived from the network thermodynamic approach developed by Oster, Perelson, and Katchalsky. Generally, it is necessary to draw the bond graph and the "network analogue" of the reaction scheme, followed by loop or nodal analysis of the network and, finally, solution of the resulting differential equations. In the case of single-step reactions, however, it is possible to obtain an expression for the relaxation time directly. This approach is simpler and more elegant and has certain advantages over the usual kinetic method. The method has been illustrated by taking different reaction schemes as examples.
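For illustration, the kind of closed-form expression the abstract refers to, for a single-step reaction A + B ⇌ C with forward rate constant k₁ and reverse rate constant k₋₁, is the standard near-equilibrium relaxation-time result (a textbook example of such an expression, not an equation quoted from the paper itself):

```latex
\frac{1}{\tau} = k_1\left([A]_{\mathrm{eq}} + [B]_{\mathrm{eq}}\right) + k_{-1}
```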
Abstract:
Pteropid bats or flying-foxes (Chiroptera: Pteropodidae) are the natural host of Hendra virus (HeV), which sporadically causes fatal disease in horses and humans in eastern Australia. While there is strong evidence that urine is an important infectious medium that likely drives bat-to-bat and bat-to-horse transmission, there is uncertainty about the relative importance of alternative routes of excretion such as nasal and oral secretions, and faeces. Identifying the potential routes of HeV excretion in flying-foxes is important to effectively mitigate equine exposure risk at the bat-horse interface, and in determining transmission rates in host-pathogen models. The aim of this study was to identify the major routes of HeV excretion in naturally infected flying-foxes, and secondarily, to identify between-species variation in excretion prevalence. A total of 2840 flying-foxes from three of the four Australian mainland species (Pteropus alecto, P. poliocephalus and P. scapulatus) were captured and sampled at multiple roost locations in the eastern states of Queensland and New South Wales between 2012 and 2014. A range of biological samples (urine and serum, and urogenital, nasal, oral and rectal swabs) were collected from anaesthetized bats, and tested for HeV RNA using a qRT-PCR assay targeting the M gene. Forty-two P. alecto (n = 1410) had HeV RNA detected in at least one sample, and yielded a total of 78 positive samples, at an overall detection rate of 1.76% across all samples tested in this species (78/4436). The rate of detection, and the amount of viral RNA, were highest in urine samples (urine > serum and packed haemocytes > faecal > nasal > oral), identifying urine as the most plausible source of infection for flying-foxes and for horses. Detection in a urine sample was more efficient than detection in urogenital swabs, identifying the former as the preferred diagnostic sample.
The detection of HeV RNA in serum is consistent with haematogenous spread, and with hypothesised latency and recrudescence in flying-foxes. There were no detections in P. poliocephalus (n = 1168 animals; n = 2958 samples) or P. scapulatus (n = 262 animals; n = 985 samples), suggesting (consistent with other recent studies) that these species are epidemiologically less important than P. alecto in HeV infection dynamics. The study is unprecedented in terms of the individual animal approach, the large sample size, and the use of a molecular assay to directly determine infection status. These features provide a high level of confidence in the veracity of our findings, and a sound basis from which to more precisely target equine risk mitigation strategies.
Abstract:
Diseases caused by Tobacco streak virus (TSV) have resulted in significant crop losses in sunflower and mung bean crops in Australia. Two genetically distinct strains from central Queensland, TSV-parthenium and TSV-crownbeard, have been previously described. They share only 81% total-genome nucleotide sequence identity and have distinct major alternative hosts, Parthenium hysterophorus (parthenium) and Verbesina encelioides (crownbeard). We developed and used strain-specific multiplex polymerase chain reactions (PCRs) for the three RNA segments of TSV-parthenium and TSV-crownbeard to accurately characterise the strains naturally infecting 41 host species. Hosts included species from 11 plant families, including 12 species endemic to Australia. Results from field surveys and inoculation tests indicate that parthenium is a poor host of TSV-crownbeard. By contrast, crownbeard was both a natural host of, and experimentally infected by, TSV-parthenium, but this infection combination resulted in non-viable seed. These differences appear to be an effective biological barrier that largely restricts these two TSV strains to their respective major alternative hosts. TSV-crownbeard was seed transmitted from naturally infected crownbeard at a rate of between 5% and 50%, and was closely associated with the geographical distribution of crownbeard in central Queensland. TSV-parthenium and TSV-crownbeard were also seed transmitted in experimentally infected ageratum (Ageratum houstonianum) at rates of up to 40% and 27%, respectively. The related subgroup 1 ilarvirus, Ageratum latent virus, was also seed transmitted, at a rate of 18%, in ageratum, which is its major alternative host. The thrips species Frankliniella schultzei and Microcephalothrips abdominalis were commonly found in flowers of TSV-affected crops and nearby weed hosts. Both species readily transmitted TSV-parthenium and TSV-crownbeard.
The results are discussed in terms of how two genetically and biologically distinct TSV strains have similar life cycle strategies in the same environment.
Abstract:
Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data are monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression, for log data summarisation, and Queryable Log Compression, for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
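To make the frequent-set idea concrete, here is a toy sketch of summarising log data by its frequent patterns, in the spirit of Comprehensive Log Compression. This is an illustrative reconstruction, not the thesis's actual algorithm; the alarm fields and values are invented.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(entries, min_support):
    """Count every combination of (field, value) pairs across log entries
    and keep those that occur at least min_support times."""
    counts = Counter()
    for entry in entries:
        items = tuple(sorted(entry.items()))
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def summarise(entries, min_support):
    """Report only the maximal frequent patterns: a condensed view of
    what the bulk of the log entries look like."""
    patterns = frequent_patterns(entries, min_support)
    maximal = [p for p in patterns
               if not any(set(p) < set(q) for q in patterns)]
    return sorted(maximal, key=len, reverse=True)

# Invented example log: three repeated alarms and one outlier.
log = [
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "rnc07", "alarm": "CPU_LOAD", "severity": "minor"},
]
summary = summarise(log, min_support=3)
```

The summary collapses the three identical alarms into a single maximal pattern, leaving the rare entry to be reported separately as an exception.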
Abstract:
The rapid growth of wireless Internet access in recent years has led to a proliferation of mobile devices equipped with multiple radio interfaces that can connect to the Internet via any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to obtain the connectivity best suited to the application. These access networks are highly heterogeneous and vary widely in characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as a vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications, such as Web browsing and e-mail, use the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in link characteristics due to a vertical handoff may affect TCP behaviour adversely, degrading application performance. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin this study by identifying the various problems of TCP due to a vertical handoff, based on extensive simulation experiments. We use this study as a basis to develop cross-layer assisted TCP algorithms in handoff scenarios involving GPRS and WLAN access networks.
We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context, applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP-flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that they are effective in improving TCP behaviour in vertical handoffs involving a wide range of access network bandwidths and delays. Our algorithms are easy to implement in real systems and involve modifications to the TCP sender algorithm only. The proposed algorithms are conservative in nature and do not adversely affect the performance of TCP in the absence of cross-layer information.
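One plausible sender-side response to a cross-layer handoff notification can be sketched as rescaling the congestion state to the new link's bandwidth-delay product. This is a minimal illustrative assumption, not the thesis's actual algorithms; the function and parameter names are invented.

```python
def adapt_after_handoff(cwnd, ssthresh, new_bw_bps, new_rtt_s, mss=1460):
    """Illustrative sketch: after a vertical handoff, clamp cwnd to the
    new pipe capacity (downward handoff) and raise ssthresh so slow start
    can probe a larger pipe (upward handoff). Windows are in segments."""
    bdp_segments = max(1, int(new_bw_bps * new_rtt_s / 8 / mss))
    return min(cwnd, bdp_segments), max(ssthresh, bdp_segments)

# WLAN -> GPRS (~40 kbps, 600 ms RTT): the window is clamped hard,
# avoiding a burst into the slow link's queue.
down = adapt_after_handoff(cwnd=20, ssthresh=10, new_bw_bps=40_000, new_rtt_s=0.6)
# GPRS -> WLAN (~6 Mbps, 50 ms RTT): ssthresh is raised so the sender
# does not stay stuck at the old link's low rate.
up = adapt_after_handoff(cwnd=4, ssthresh=10, new_bw_bps=6_000_000, new_rtt_s=0.05)
```

The conservative property mentioned above corresponds here to only clamping down or raising thresholds, never injecting data beyond what standard congestion control would allow.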
Abstract:
This doctoral dissertation introduces an algorithm for constructing the most probable Bayesian network from data in small domains. The algorithm is used to show that a popular goodness criterion for Bayesian networks has a severe sensitivity problem. The dissertation then proposes an information-theoretic criterion that avoids the problem.
Abstract:
The TCP protocol is used by most Internet applications today, including the recent mobile wireless terminals that use TCP for their World-Wide Web, e-mail and other traffic. Recent wireless network technologies, such as GPRS, are known to cause delay spikes in packet transfer, which lead to unnecessary TCP retransmission timeouts. This dissertation proposes a mechanism, Forward RTO-Recovery (F-RTO), for detecting unnecessary TCP retransmission timeouts, allowing TCP to take appropriate follow-up actions. We analyse a Linux F-RTO implementation in various network scenarios and investigate different alternatives to the basic algorithm. The second part of this dissertation focuses on quickly adapting TCP's transmission rate when the underlying link characteristics change suddenly. This can happen, for example, due to vertical hand-offs between GPRS and WLAN wireless technologies. We investigate the Quick-Start algorithm which, in collaboration with the network routers, aims to quickly probe the available bandwidth on a network path and allow TCP's congestion control algorithms to use that information. Through extensive simulations we study the different router algorithms and parameters for Quick-Start, and discuss the challenges Quick-Start faces in the current Internet. We also study the performance of Quick-Start when applied to vertical hand-offs between different wireless link technologies.
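The detection idea of F-RTO can be sketched as follows (a simplified illustration of the published F-RTO principle, not the Linux implementation analysed in the dissertation): after a retransmission timeout the sender retransmits the first unacknowledged segment and then sends new data; if the next two incoming ACKs both advance the window, the timeout is declared spurious.

```python
def frto_classify(acks_after_rto, snd_una):
    """Simplified F-RTO sketch: classify a retransmission timeout by
    examining the first two ACKs received after the RTO retransmission.
    acks_after_rto: cumulative ACK numbers arriving after the timeout.
    snd_una: highest cumulative ACK seen before the timeout."""
    if len(acks_after_rto) < 2:
        return "undecided"
    for ack in acks_after_rto[:2]:
        if ack > snd_una:        # ACK advances the window
            snd_una = ack
        else:                    # duplicate ACK: segments really were lost
            return "genuine"
    return "spurious"            # two advancing ACKs: timeout was unnecessary

# A delay spike (e.g. on GPRS) merely held packets up: ACKs keep advancing.
spike = frto_classify([1460, 2920], snd_una=0)
# A duplicate ACK after the timeout indicates genuine loss.
loss = frto_classify([1460, 1460], snd_una=0)
```

On a "spurious" verdict the sender can continue with new data instead of needlessly retransmitting the whole window.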
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
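As a concrete baseline for task (iii), the classical greedy heuristic for minimum dominating set can be sketched as follows. This is the standard textbook approximation, shown only for illustration; it is not one of the local algorithms or approximation schemes contributed by the thesis.

```python
def greedy_dominating_set(adj):
    """Classical greedy approximation for minimum dominating set.
    adj: dict mapping each node to the set of its neighbours.
    Repeatedly picks the node covering the most undominated nodes."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # A node covers itself and its neighbours.
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        chosen.add(best)
        undominated -= adj[best] | {best}
    return chosen

# A path on five nodes 0-1-2-3-4: two well-placed nodes dominate everything.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
ds = greedy_dominating_set(path)
```

Every node ends up either in the chosen set or adjacent to a chosen node; for detection this suffices, whereas locating an intruder requires the stronger identifying-code or locating-dominating-code formulations mentioned above.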
Abstract:
The International Journal of Critical Indigenous Studies (IJCIS) now complements the recently launched National Indigenous Research and Knowledges Network (NIRAKN) in its efforts to build Indigenous research capacity. In this context the journal provides a platform for the research of Indigenous postgraduates, early- to mid-career researchers, and senior scholars. Indigenous scholars are therefore encouraged to submit their articles to future editions of the IJCIS, an ‘Excellence in Research for Australia’ (ERA) ranked journal.
Abstract:
A monostable multivibrator configuration using a new technique of regenerative feedback is discussed. This circuit provides an elegant alternative in situations wherein several monostable multivibrators have to be connected in tandem.
Location of concentrators in a computer communication network: a stochastic automaton search method
Abstract:
The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators, and assign the terminals to the concentrators, in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. This then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in K^m ways (all possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators). Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated.
The probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
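The search loop described above can be sketched in a one-dimensional toy setting. This is an illustrative reconstruction only: the cost function, region boundaries and update details are invented, and the final local gradient search is omitted.

```python
import random

def automaton_search(cost, regions, steps=2000, seed=1):
    """Stochastic-automaton search sketch: states are intervals, visited
    with probability inversely proportional to their running average cost."""
    rng = random.Random(seed)
    k = len(regions)
    n = [1] * k
    # Prime each state's average cost with one uniform sample.
    avg = [cost(rng.uniform(lo, hi)) for lo, hi in regions]
    best_x, best_c = None, float("inf")
    for _ in range(steps):
        weights = [1.0 / a for a in avg]          # inverse-cost probabilities
        s = rng.choices(range(k), weights=weights)[0]   # visit a state
        x = rng.uniform(*regions[s])              # uniform point inside it
        c = cost(x)
        n[s] += 1
        avg[s] += (c - avg[s]) / n[s]             # update the state's average
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

# Invented bimodal cost with its global minimum near x = 2 (always positive).
f = lambda x: min((x - 2) ** 2, (x - 8) ** 2 + 0.2) + 0.1
x_star, c_star = automaton_search(f, [(0, 2.5), (2.5, 5), (5, 7.5), (7.5, 10)])
```

The state containing the global minimum accumulates the lowest average cost and is visited ever more often, which is the convergence behaviour the abstract describes.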
Abstract:
Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition (SCADA) systems. Unlike general-purpose "packet capture" tools, it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user-interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions.
Abstract:
The Distributed Network Protocol v3.0 (DNP3) is one of the most widely used protocols for controlling national infrastructure. The move from point-to-point serial connections to Ethernet-based network architectures has allowed for large and complex critical infrastructure networks. However, networks and configurations change, so auditing tools are needed to aid in critical infrastructure network discovery. In this paper we present a series of intrusive techniques used for reconnaissance on DNP3 critical infrastructure. Our algorithms discover DNP3 outstation slaves along with their DNP3 addresses, their corresponding master, and class object configurations. To validate our presented DNP3 reconnaissance algorithms and demonstrate their practicality, we present an implementation of a software tool using a DNP3 plug-in for Scapy. Our implementation validates the utility of our DNP3 reconnaissance technique. The presented techniques will be useful for penetration testing, vulnerability assessments and DNP3 network discovery.
Abstract:
Amateurs are found in arts, sports, or entertainment, where they are linked with professional counterparts and inspired by celebrities. Despite the growing number of CSCW studies in amateur and professional domains, little is known about how technologies facilitate collaboration between these groups. Drawing from a 1.5-year field study in the domain of bodybuilding, this paper describes the collaboration between and within amateurs, professionals, and celebrities on social network sites. Social network sites help individuals to improve their performance in competitions, extend their support network, and gain recognition for their achievements. The findings show that amateurs benefit the most from online collaboration, whereas collaboration shifts from social network sites to offline settings as individuals develop further in their professional careers. This shift from online to offline settings constitutes a novel finding, which extends previous work on social network sites that has looked at groups of amateurs and professionals in isolation. As a contribution to practice, we highlight design factors that address this shift to offline settings and foster collaboration between and within groups.
Abstract:
Research on social network sites has examined how people integrate offline and online life, but with a particular emphasis on their use by friendship groups. We extend earlier work by examining a case in which offline ties are non-existent, but online ties strong. Our case is a study of bodybuilders, who explore their passion with like-minded offline 'strangers' in tightly integrated online communities. We show that the integration of offline and online life supports passion-centric activities, such as bodybuilding.