872 results for Network cost allocation


Relevance:

20.00%

Publisher:

Abstract:

Strike-slip faults commonly display structurally complex areas of positive or negative topography. Understanding the development of such areas has important implications for earthquake studies and hydrocarbon exploration. Previous workers identified the key factors controlling the occurrence of both topographic modes and the related structural styles. Kinematic and stress boundary conditions are of first-order relevance. Surface mass transport and material properties affect fault network structure. Experiments demonstrate that dilatancy can generate positive topography even under simple-shear boundary conditions. Here, we use physical models with sand to show that the degree of compaction of the deformed rocks alone can determine the type of topography and related surface fault network structure in simple-shear settings. In our experiments, volume changes of ∼5% are sufficient to generate localized uplift or subsidence. We discuss scalability of model volume changes and fault network structure and show that our model fault zones satisfy geometrical similarity with natural flower structures. Our results imply that compaction may be an important factor in the development of topography and fault network structure along strike-slip faults in sedimentary basins.

Relevance:

20.00%

Publisher:

Abstract:

The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources because, typically, only a few gene sequences can be held in primary memory at a time. The standard practice in such computations is to rely on frequent input/output (I/O) operations; minimizing the number of these operations therefore yields much faster run-times. This paper develops an approach for faster and scalable computation of large correlation matrices through full use of the available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and applied to problems with different correlation matrix sizes. The significant performance improvement of the approach over existing approaches is demonstrated through benchmark examples.
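As a rough illustration of the kind of memory-aware, block-wise computation the abstract describes, the Python sketch below computes a Pearson correlation matrix by streaming fixed-size blocks of rows from a raw float32 file on disk; the file layout, block size and function name are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch: block-wise Pearson correlation of many long sequences when only
# a limited number of rows fit in primary memory. Assumes a raw binary file with
# one float32 row per (numerically encoded) sequence; not the paper's method.
import numpy as np

def blockwise_correlation(path, n_rows, n_cols, block=256):
    data = np.memmap(path, dtype=np.float32, mode="r", shape=(n_rows, n_cols))
    corr = np.empty((n_rows, n_rows), dtype=np.float32)
    for i0 in range(0, n_rows, block):
        i1 = min(i0 + block, n_rows)
        a = np.asarray(data[i0:i1], dtype=np.float64)        # read one block of rows
        a = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
        for j0 in range(i0, n_rows, block):                   # upper triangle only
            j1 = min(j0 + block, n_rows)
            b = np.asarray(data[j0:j1], dtype=np.float64)     # re-read; the I/O trade-off
            b = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
            block_corr = a @ b.T / n_cols                     # Pearson correlations
            corr[i0:i1, j0:j1] = block_corr
            corr[j0:j1, i0:i1] = block_corr.T                 # mirror to lower triangle
    return corr
```

Larger block sizes mean fewer disk reads at the cost of more memory, which is exactly the platform-dependent tuning knob the abstract alludes to.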

Relevance:

20.00%

Publisher:

Abstract:

Dear Editor, We thank Dr Klek for his interest in our article and for giving us the opportunity to clarify our study and share our thoughts. Our study examined the prevalence of malnutrition in an acute tertiary hospital and tracked outcomes prospectively.1 There are a number of reasons why we chose the Subjective Global Assessment (SGA) to determine the nutritional status of patients. Firstly, we took the view that nutrition assessment tools should be used to determine nutrition status and to diagnose the presence and severity of malnutrition, whereas the purpose of nutrition screening tools is to identify individuals who are at risk of malnutrition. Nutritional assessment rather than screening should be used as the basis for planning and evaluating nutrition interventions for those diagnosed with malnutrition. Secondly, the SGA has been well accepted and validated as an assessment tool for diagnosing the presence and severity of malnutrition in clinical practice.2, 3 It has been used in many studies as a valid prognostic indicator of a range of nutritional and clinical outcomes.4, 5, 6 On the other hand, the Malnutrition Universal Screening Tool (MUST)7 and Nutrition Risk Screening 2002 (NRS 2002)8 have been established as screening rather than assessment tools.

Relevance:

20.00%

Publisher:

Abstract:

In multi-area ATC (Available Transfer Capacity) decision-making in an electricity market environment, the existing transmission network resources should be optimally dispatched and employed in a coordinated way, on the premise that secure system operation is maintained and the associated risk is kept controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of the specified areas under the influence of several uncertainty factors. Based on this distribution, a coordinated probabilistic optimal decision-making model with maximal risk benefit as its objective is developed for multi-area ATC. NSGA-II is applied to calculate the ATC of each area, taking into account the risk cost caused by the relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated with the IEEE 118-bus test system. The simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and by the relative importance of the different areas.
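As a loose illustration of the non-sequential Monte Carlo step described above (not the paper's model, and without the NSGA-II coordination stage), the Python sketch below samples independent component availabilities and a load level to build an empirical ATC distribution for one area; the outage rates, load model and evaluate_atc placeholder are invented assumptions.

```python
# Hedged sketch of non-sequential Monte Carlo sampling used to estimate an
# area's ATC probability distribution. All numbers are illustrative only.
import random

FOR_LINE = 0.02      # assumed forced-outage rate per transmission line
FOR_GEN = 0.05       # assumed forced-outage rate per generator
N_LINES, N_GENS = 20, 8

def sample_system_state():
    """Draw one non-sequential system state (each component sampled independently)."""
    lines = [random.random() > FOR_LINE for _ in range(N_LINES)]
    gens = [random.random() > FOR_GEN for _ in range(N_GENS)]
    load_factor = random.gauss(1.0, 0.03)    # assumed load uncertainty
    return lines, gens, load_factor

def evaluate_atc(lines, gens, load_factor):
    """Placeholder for the real ATC calculation (e.g. a DC-OPF based transfer limit)."""
    base = 500.0                              # MW, illustrative base transfer limit
    penalty = 40.0 * lines.count(False) + 60.0 * gens.count(False)
    return max(0.0, (base - penalty) / load_factor)

samples = sorted(evaluate_atc(*sample_system_state()) for _ in range(10000))
print("mean ATC  %.1f MW" % (sum(samples) / len(samples)))
print("5th pct   %.1f MW" % samples[int(0.05 * len(samples))])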

Relevance:

20.00%

Publisher:

Abstract:

Hepatitis C, which was first identified in 1988, has become an important issue for public health as epidemiological and clinical evidence has emerged. These disciplines have highlighted the extent of infection and its medical consequences. Now, governments at both the state and federal levels are sifting through this evidence and are attempting to create structures to deal with the problem of hepatitis C. These structures have generally taken the form of expert committees and working parties organised from established medical, scientific and public health bodies...

Relevance:

20.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to contribute to the sociology-of-science strand of the accounting literature, addressing how accounting knowledge is established, advanced and extended. Design/methodology/approach – The research question is answered through the example of research into the linkages between accounting and religion. Adopting an actor-network theory (ANT) approach, the paper follows the actors involved in the construction of accounting as an academic discipline through the controversies in which they engage to develop knowledge. Findings – The paper reveals that accounting knowledge is established, advanced and developed through the ongoing mobilisation of nonhumans (journals) that can enrol other humans and nonhumans. It shows that knowledge establishment, advancement and development are more contingent on network breadth than on research paradigms, which appear as side-effects of positioning vis-à-vis a community. Originality/value – The originality of this paper is twofold. First, ANT is applied to accounting knowledge itself, whereas the accounting literature applies it to the spread of management accounting ideas, methods and practices. Second, an original methodology for data collection is developed by inviting authors from the network to give a reflexive account of their writings at the time they joined the network. While well diffused in sociology and philosophy, such an approach remains original in accounting research.

Relevance:

20.00%

Publisher:

Abstract:

Understanding network traffic behaviour is crucial for managing and securing computer networks. One important technique is to mine frequent patterns or association rules from analysed traffic data. On the one hand, association rule mining usually generates a huge number of patterns and rules, many of them meaningless or unwanted by users; on the other hand, it can miss necessary knowledge if it does not consider the hierarchy relationships in the network traffic data. To address these issues, this paper proposes a hybrid association rule mining method for characterizing network traffic behaviour. Rather than plain frequent patterns, the proposed method generates non-similar closed frequent patterns from network traffic data, which can significantly reduce the number of patterns. The method also derives new attributes from the original data to discover novel knowledge according to the hierarchy relationships in the network traffic data and user interests. Experiments performed on real network traffic data show that the proposed method is promising and can be used in real applications. Copyright © 2013 John Wiley & Sons, Ltd.
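For illustration only, the Python sketch below mines frequent itemsets from a few toy flow records and then keeps just the closed ones (those with no proper superset of equal support); it is a naive stand-in under invented data, not the paper's hybrid, hierarchy-aware method.

```python
# Hedged sketch: naive frequent-itemset mining over toy flow records plus a
# filter that keeps only *closed* patterns. Records and min_support are invented.
from itertools import combinations

flows = [
    {"proto=TCP", "dport=80", "flag=SYN"},
    {"proto=TCP", "dport=80", "flag=ACK"},
    {"proto=TCP", "dport=443", "flag=SYN"},
    {"proto=UDP", "dport=53"},
    {"proto=TCP", "dport=80", "flag=SYN"},
]
min_support = 2

def support(itemset):
    return sum(itemset <= flow for flow in flows)

# Enumerate all frequent itemsets (fine for a toy example; real miners use FP-growth).
items = sorted(set().union(*flows))
frequent = {}
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = support(frozenset(combo))
        if s >= min_support:
            frequent[frozenset(combo)] = s

# A frequent pattern is closed if no frequent proper superset has the same support.
closed = {p: s for p, s in frequent.items()
          if not any(p < q and s == sq for q, sq in frequent.items())}

for pattern, s in sorted(closed.items(), key=lambda kv: -kv[1]):
    print(sorted(pattern), "support =", s)
```

Keeping only closed patterns preserves all support information while discarding redundant sub-patterns, which is the pattern-reduction effect the abstract refers to.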

Relevance:

20.00%

Publisher:

Abstract:

This study seeks insights into the economic consequences of accounting conservatism by examining the relation between conservatism and the cost of equity capital. Appealing to the analytical and empirical literatures, we posit an inverse relation. Importantly, we also posit that the strength of the relation is conditional on the firm's information environment, being strongest for firms with high information asymmetry and weakest (potentially negligible) for firms with low information asymmetry. Based on a sample of US-listed entities, we find, as predicted, an inverse relation between conservatism and the cost of equity capital, and further, that this relation is diminished for firms in low information asymmetry environments. This evidence indicates that there are economic benefits associated with the adoption of conservative reporting practices and leads us to conclude that conservatism has a positive role in accounting principles and practices, despite its increasing rejection by accounting standard setters.
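A minimal sketch of the kind of specification that captures this conditional relation is a regression of the cost of equity on conservatism, an information-asymmetry proxy and their interaction; the simulated data and variable names below are assumptions for illustration, not the study's research design.

```python
# Hedged sketch of a conditional-relation test: the interaction term lets the
# conservatism effect vary with information asymmetry. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "conservatism": rng.normal(size=n),
    "info_asym": rng.uniform(size=n),          # e.g. a bid-ask spread proxy (assumed)
})
# Simulate the posited pattern: conservatism lowers the cost of equity,
# and the effect is stronger when information asymmetry is high.
df["cost_of_equity"] = (0.10
                        - 0.01 * df.conservatism * df.info_asym
                        + rng.normal(scale=0.01, size=n))

model = smf.ols("cost_of_equity ~ conservatism * info_asym", data=df).fit()
print(model.summary().tables[1])   # the interaction coefficient carries the conditional effect
```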

Relevance:

20.00%

Publisher:

Abstract:

This thesis explores how governance networks prioritise and engage with their stakeholders, by studying three exemplars of “Regional Road Group” governance networks in Queensland, Australia. In the context of managing regionally significant road works programs, stakeholder prioritisation is a complex activity which is unlikely to influence interactions with stakeholders outside of the network. However, stakeholder priority is more likely to influence stakeholder interactions within the networks themselves. Both stakeholder prioritisation and engagement are strongly influenced by the way that the networks are managed, and in particular network operating rules and continuing access to resources.

Relevance:

20.00%

Publisher:

Abstract:

Objective: Effective management of multi-resistant organisms is an important issue for hospitals both in Australia and overseas. This study investigates the utility of Bayesian network (BN) analysis for examining relationships between risk factors and colonization with Vancomycin Resistant Enterococcus (VRE). Design: Bayesian network analysis was performed using infection control data collected over a period of 36 months (2008-2010). Setting: Princess Alexandra Hospital (PAH), Brisbane. Outcome of interest: Number of new VRE isolates. Methods: A BN is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). A BN enables multiple interacting agents to be studied simultaneously. The initial BN model was constructed based on the infectious disease physician's expert knowledge and the current literature. Continuous variables were dichotomised using the third quartile values of the 2008 data. The BN was used to examine the probabilistic relationships between VRE isolates and risk factors, and to establish which factors were associated with an increased probability of a high number of VRE isolates. Software: Netica (version 4.16). Results: Preliminary analysis revealed that VRE transmission and VRE prevalence were the most influential factors in predicting a high number of VRE isolates. Interestingly, several factors (hand hygiene and cleaning) known from the literature to be associated with VRE prevalence did not appear to be as influential as expected in this BN model. Conclusions: This preliminary work has shown that Bayesian network analysis is a useful tool for examining clinical infection prevention issues, where there is often a web of factors that influence outcomes. The BN model can be restructured easily, enabling various combinations of agents to be studied.
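As a hedged illustration (not the Netica model built in the study), the sketch below hand-codes a toy discrete Bayesian network with the two most influential factors as parents of the isolates node, using invented conditional probabilities and the same below/above-third-quartile dichotomisation idea, and answers queries by brute-force enumeration over the joint distribution.

```python
# Hedged illustration, not the study's model: a tiny BN with the DAG
# Transmission -> Isolates <- Prevalence. States: 0 = at/below third quartile,
# 1 = above. All probabilities are invented for illustration.
from itertools import product

p_transmission = {0: 0.75, 1: 0.25}            # P(Transmission)
p_prevalence   = {0: 0.75, 1: 0.25}            # P(Prevalence)
# P(high number of isolates | Transmission, Prevalence)
p_isolates_high = {(0, 0): 0.10, (0, 1): 0.40, (1, 0): 0.45, (1, 1): 0.85}

def joint(t, pr, iso):
    """Joint probability factorised along the DAG."""
    p_iso = p_isolates_high[(t, pr)] if iso == 1 else 1 - p_isolates_high[(t, pr)]
    return p_transmission[t] * p_prevalence[pr] * p_iso

def query_isolates(evidence=None):
    """P(Isolates | evidence) by enumeration over all states."""
    evidence = evidence or {}
    totals = {0: 0.0, 1: 0.0}
    for t, pr, iso in product((0, 1), repeat=3):
        state = {"Transmission": t, "Prevalence": pr, "Isolates": iso}
        if all(state[k] == v for k, v in evidence.items()):
            totals[iso] += joint(t, pr, iso)
    z = totals[0] + totals[1]
    return {k: v / z for k, v in totals.items()}

print("P(Isolates) marginal:           ", query_isolates())
print("P(Isolates | Transmission high):", query_isolates({"Transmission": 1}))
```

In a real model the CPTs would be elicited from the expert and the data, and restructuring the DAG simply means changing the parent sets and their tables.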

Relevance:

20.00%

Publisher:

Abstract:

Denial-of-service (DoS) attacks are a growing concern for networked services such as the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is the resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool for thwarting DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing them. We also revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model. We demonstrate the utility of our new security definitions by constructing two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework, and we also prove that the original security claim of JFK does not hold. We then combine an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which significantly reduces the computation cost of the server and employ it in one of the most important network protocols, TLS, analysing the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated into any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponential cost of a party by one multiplication and one addition.
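As a generic illustration of the hash-based client puzzle idea the thesis builds on (not its specific constructions or security model), the Python sketch below has the server issue a fresh nonce and the client search for a solution whose SHA-256 digest has a required number of leading zero bits; the difficulty parameter and message binding are assumptions.

```python
# Hedged sketch of a generic hash-based client puzzle (proof of work): solving is
# moderately expensive for the client, verification is one hash for the server.
import hashlib
import os

DIFFICULTY_BITS = 16   # illustrative; a real server tunes this under load

def make_puzzle() -> bytes:
    return os.urandom(16)                      # fresh server-side nonce

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(nonce: bytes) -> int:
    s = 0
    while True:                                # client's brute-force search
        digest = hashlib.sha256(nonce + s.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return s
        s += 1

def verify(nonce: bytes, s: int) -> bool:      # cheap server-side check
    digest = hashlib.sha256(nonce + s.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

nonce = make_puzzle()
solution = solve(nonce)
print("solution:", solution, "verified:", verify(nonce, solution))
```

Raising the difficulty by one bit roughly doubles the client's expected work while leaving the server's verification cost unchanged, which is what makes such puzzles useful against resource depletion attacks.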

Relevance:

20.00%

Publisher:

Abstract:

The ability of piezoelectric transducers to convert energy is being exploited in a rapidly expanding range of applications. Industrial applications of high power ultrasound transducers include surface cleaning, water treatment, plastic welding and food sterilization. High power ultrasound transducers also play a major role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually used to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, transducers serve as transmitters, as receivers, or both: a transmitter converts electrical energy to mechanical energy, while a receiver converts mechanical energy to electrical energy and acts as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. A portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted back to electrical energy.

To drive an ultrasound transducer, the excitation signal should be properly designed; otherwise an undesired (low quality) signal can degrade the performance of the transducer (its energy conversion) and increase the power consumption of the system. For instance, a portion of the generated power may be delivered at unwanted frequencies, which is unacceptable for some applications, especially biomedical ones. To achieve better transducer performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulations and experimental tests are carried out in this research to model high power ultrasound transducers and systems. In these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. To analyse the behaviour of the ultrasound system, the voltage ratio of the system is also measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage measured across the receiver. The results on the transducer characteristics and the behaviour of the ultrasound system are discussed in chapters 4 and 5 of this thesis.

Each piezoelectric transducer has several resonance frequencies at which its impedance magnitude is lower than at non-resonance frequencies. Among these resonance frequencies, the impedance magnitude is minimal at only one of them, known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency; it is therefore important to identify this frequency and the other resonance frequencies. To this end, a frequency detection method is proposed in this research and discussed in chapter 2. An extended electrical model of the ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor, each RLC leg representing one of the resonance frequencies of the transducer. At a resonance frequency, the inductive and capacitive reactances of the corresponding leg cancel each other out, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4.

To excite a high power ultrasound transducer, a high power signal is required. Multilevel converters are usually applied to generate such a signal, but its quality is low in comparison with a sinusoidal signal. In applications like ultrasound, it is extremely important to generate a high quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters; one of them is the harmonic elimination technique, in which the switching angles are chosen so as to reduce the harmonic content of the output. Increasing the number of switching angles results in greater harmonic reduction, but more switching angles require more output voltage levels, which increases the number of components and the cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research. In this technique, more variables (DC voltage levels as well as switching angles) are chosen so that more low order harmonics are eliminated than with conventional harmonic elimination techniques. In the conventional method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics, so the number of eliminated harmonics is limited by the number of switching cycles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics; the DC voltage levels are consequently unequal and must be regulated. To achieve this, a DC/DC converter is applied to adjust the DC link voltages with several capacitors. The effect of the new harmonic elimination technique on the output quality of several single phase multilevel converters is explained in chapters 3 and 6 of this thesis.

According to the electrical model of the high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Driving a high power ultrasound transducer with a voltage source converter can therefore create significant leakage current through the transducer, owing to the significant voltage stress (dv/dt) across it. LC filters are applied in some applications to remedy this problem, but for applications such as ultrasound an LC filter can degrade the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such a case a current source converter is a suitable choice to overcome the problem. Accordingly, a current source converter is implemented and applied to excite the high power ultrasound transducer; hysteresis control and unipolar modulation are used to control the output current and voltage, respectively. The results of this test are explained in chapter 7.
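As a numerical illustration of the extended electrical model described above (several series RLC legs in parallel with a shunt capacitor), the Python sketch below evaluates the combined impedance over frequency and reports the local minima of |Z| as resonance frequencies, with the global minimum as the main resonance; all component values are invented for illustration and are not taken from the thesis.

```python
# Hedged numerical sketch of the extended electrical model: several series-RLC
# legs in parallel with a shunt capacitor C0. Component values are invented;
# each local minimum of |Z| marks one resonance frequency.
import numpy as np

C0 = 4.7e-9                                   # F, shunt (clamped) capacitance
legs = [                                      # (R, L, C) per resonance branch
    (12.0, 35e-3, 18e-12),
    (30.0, 10e-3, 9e-12),
    (55.0, 4e-3, 5e-12),
]

f = np.linspace(1e3, 2e6, 200_000)            # Hz
w = 2 * np.pi * f

Y = 1j * w * C0                               # admittance of the shunt capacitor
for R, L, C in legs:
    Z_leg = R + 1j * w * L + 1 / (1j * w * C) # series RLC leg
    Y = Y + 1 / Z_leg                         # parallel branches add as admittances
Z = 1 / Y

mag = np.abs(Z)
# Local minima of |Z| approximate the series resonance frequencies of the legs.
minima = np.where((mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:]))[0] + 1
print("resonance frequencies (kHz):", np.round(f[minima] / 1e3, 1))
print("main resonance (lowest |Z|): %.1f kHz" % (f[mag.argmin()] / 1e3))
```

At each leg's resonance the inductive and capacitive reactances cancel, leaving roughly that leg's resistance in parallel with the shunt capacitor, which is why the impedance dips there and why the leg with the lowest resistance tends to define the main resonance.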

Relevance:

20.00%

Publisher:

Abstract:

This research is part of a major project whose stimulus arose from the need to manage a large number of ageing bridges on low traffic volume roads (LTVR) in Australia. The project investigated, designed and subsequently constructed the replacement of the ageing superstructure of a 10 m span bridge with a disused Flat-bed Rail Wagon (FRW). This research is therefore developed on the premise that the FRW can be adopted as the main structural system for bridges in the LTVR network. The main focus of this research is to present two alternative deck wearing systems (DWS) as part of the design of the FRW as a road bridge deck conforming to AS5100 (2004). The bare FRW structural components were first examined for their adequacy (ultimate and serviceability) in resisting the critical loads specified in AS5100 (2004). Two options for the DWS were then evaluated and their effects on the FRW examined. The first option involved a timber DWS; the idea was to use all the primary and secondary members of the FRW in load sharing and to provide additional members where weaknesses in the original members arose. The second option involved a reinforced concrete DWS, with only the primary members of the FRW sharing the AS5100 (2004) loading; this option inherently minimised the risk associated with any uncertainty about the structural adequacy of the secondary members. This thesis reports the design phases of both options and concludes with the selection of the preferred option in terms of structural performance, ease of construction and cost. The comparison carried out here focuses on the distribution of the traffic load by the FRW as a superstructure. Advantages and disadvantages of the two systems, highlighting cost comparisons and ease of constructability, are also included.

Relevance:

20.00%

Publisher:

Abstract:

This document provides data for the case study presented in our recent earthwork planning papers. Some results are also provided in a graphical format using Excel.