891 results for presumption of fault


Relevance:

90.00%

Publisher:

Abstract:

The purpose of this research project was to investigate two distinct types of research questions – one theoretical, the other empirical: (1) What would justice mean in the context of the international trade regime? (2) Using the small developing states of the Commonwealth Caribbean as a case study, what do Commonwealth Caribbean trade negotiators mean when they appeal to justice? Regarding the first question, Iris Young's framework, which focuses on the achievement of social justice in a domestic context by acknowledging social differences such as those based on race and gender, was adopted, and its relevance was argued in the international context of interstate trade negotiation so as to validate the notion of difference (in size, location, and governance capacity) in this latter context. The point of departure is that while states are typically treated as equals in international law – as are individuals in liberal political theory – there are significant differences between states which warrant different treatment in the international arena. The study found that this reformulation of justice, which takes account of such differences between states, allows for more adequate policy responses than those offered by the presumption of equal treatment. Regarding the second question, this theoretical perspective was used to analyze the understandings of justice from which Commonwealth Caribbean trade negotiators proceed. Interpretive and ethnographic methods, including participant observation, interviews, field notes, and textual analysis, were employed. The study found that these negotiators perceive such justice as justice to difference, because of the distinct characteristics of small developing states which combine to constrain their participation in the international trading system; that, based on this perception, they seek rules and outcomes in the multilateral trade regime which are sensitive to such different characteristics; and that, while these issues were examined in a specific region, the findings are relevant for other regions consisting of small developing states, such as those in the ACP group.

Relevance:

90.00%

Publisher:

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict which software modules are likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
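A minimal sketch of the cross-project idea in general terms (not the dissertation's specific adaptation technique): a fault-prediction model trained on one project's metrics is reused on another project, with only its decision threshold recalibrated on a small labelled sample from the target project. The metric names and data below are illustrative.

```python
# Minimal sketch (not the dissertation's method): reuse a fault-prediction model
# trained on project A and recalibrate only its decision threshold on a small
# labelled sample from project B. Metric names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def synthetic_project(n, shift=0.0):
    """Fake module metrics (e.g. size, complexity, churn) and fault labels."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X @ np.array([0.8, 1.2, 0.5]) + rng.normal(size=n) > 1.0).astype(int)
    return X, y

X_src, y_src = synthetic_project(500)             # project with a known fault history
X_tgt, y_tgt = synthetic_project(200, shift=0.7)  # new project, shifted metric distribution

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_src, y_src)

# Adaptation step: choose the probability threshold that maximises F1 on a small
# labelled slice of the target project instead of reusing the source default of 0.5.
X_cal, y_cal = X_tgt[:50], y_tgt[:50]
probs_cal = model.predict_proba(X_cal)[:, 1]
best_t = max(np.linspace(0.1, 0.9, 17), key=lambda t: f1_score(y_cal, probs_cal >= t))

preds = model.predict_proba(X_tgt[50:])[:, 1] >= best_t
print("adapted threshold:", round(float(best_t), 2),
      "target F1:", round(f1_score(y_tgt[50:], preds), 3))
```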

Relevance:

90.00%

Publisher:

Abstract:

Rapid development in industry has contributed to more complex systems that are prone to failure. In applications where the presence of faults may lead to premature failure, fault detection and diagnosis (FDD) tools are often implemented. The goal of this research is to improve the diagnostic ability of existing FDD methods. Kernel Principal Component Analysis (KPCA) has good fault detection capability; however, it can only detect a fault and identify a few variables contributing to its occurrence, and it is therefore imprecise as a diagnostic tool. Hence, KPCA was used to detect abnormal events, and the variables contributing most were extracted for further analysis in the diagnosis phase. The diagnosis phase was carried out in both a qualitative and a quantitative manner. In the qualitative mode, a network-based causality analysis method was developed to show the causal relationships between the variables contributing most to the occurrence of the fault. To obtain a more quantitative diagnosis, a Bayesian network was constructed to analyze the problem from a probabilistic perspective.
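A minimal sketch of the detection step, assuming a Hotelling-T2-like monitoring statistic computed from the kernel principal component scores and a crude perturbation-based contribution check; the thesis's exact statistic, threshold and contribution analysis may differ. The process data are synthetic.

```python
# Minimal sketch, assuming a T2-like statistic on the KPCA scores; the thesis's
# exact statistic, threshold and contribution analysis may differ.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(300, 5))        # fault-free training data, 5 process variables
X_test = rng.normal(size=(50, 5))
X_test[25:, 2] += 4.0                       # inject a fault on variable 2 in the last 25 samples

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.2).fit(X_normal)
ref_var = kpca.transform(X_normal).var(axis=0)

def t2(X):
    """Sum of squared, variance-scaled KPCA scores (monitoring statistic)."""
    scores = kpca.transform(np.atleast_2d(X))
    return ((scores ** 2) / ref_var).sum(axis=1)

threshold = np.quantile(t2(X_normal), 0.99)  # 99th percentile of the fault-free statistic
alarms = t2(X_test) > threshold

def contributions(x):
    """Crude contribution check: drop in the statistic when one variable is reset to its normal mean."""
    base = t2(x)[0]
    return np.array([base - t2(np.where(np.arange(x.size) == j, X_normal[:, j].mean(), x))[0]
                     for j in range(x.size)])

first_alarm = X_test[int(np.argmax(alarms))]
print("alarmed samples:", int(alarms.sum()))
print("most contributing variable:", int(contributions(first_alarm).argmax()))
```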

Relevance:

90.00%

Publisher:

Abstract:

Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.

To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
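A minimal linear-Gaussian sketch of the general principle (not the thesis's semi-analytical or numerical machinery): invert noisy data d = Gm + noise for model parameters m, folding an assumed forward-modeling error covariance into the data covariance so that posterior uncertainties reflect imperfect physics as well as observational noise. The operator, covariances and dimensions are illustrative.

```python
# Minimal linear-Gaussian sketch (not the thesis's semi-analytical/numerical method):
# invert noisy data d = G m + noise for model parameters m, adding an assumed
# forward-modeling error covariance C_fwd to the observational covariance C_obs.
import numpy as np

rng = np.random.default_rng(2)
n_data, n_model = 120, 10

G = rng.normal(size=(n_data, n_model))           # linearised forward operator (illustrative)
m_true = np.sin(np.linspace(0, np.pi, n_model))  # "true" displacement/slip profile
C_obs = 0.05 ** 2 * np.eye(n_data)               # observational noise covariance
C_fwd = 0.10 ** 2 * np.eye(n_data)               # assumed forward-modeling (prediction) error
d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_obs + C_fwd)

C_d_inv = np.linalg.inv(C_obs + C_fwd)                # total data covariance, inverted
C_m_inv = np.linalg.inv(1.0 ** 2 * np.eye(n_model))   # Gaussian prior on the model
m_prior = np.zeros(n_model)

# Closed-form posterior for a linear-Gaussian inverse problem.
C_post = np.linalg.inv(G.T @ C_d_inv @ G + C_m_inv)
m_post = C_post @ (G.T @ C_d_inv @ d + C_m_inv @ m_prior)
sigma_post = np.sqrt(np.diag(C_post))

print("posterior mean  (first 3):", np.round(m_post[:3], 3))
print("posterior sigma (first 3):", np.round(sigma_post[:3], 3))
```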

To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.

Relevance:

80.00%

Publisher:

Abstract:

Effective environmental governance is hampered by the continuing presumption of the state as the central actor in domestic and international political contexts. Over the last 20 years, the traditional 'Westphalian' conception of the sovereign state has come under increasing pressure not only in theory, but also in practice, as evidenced by the increasing importance attributed to the participation of quasi-government and non-government actors in decision-making on domestic and international political issues. This paper is a contribution to the ongoing debate about the meaning of effective environmental governance, mapping out a post-Westphalian conception of governance. In particular, it defines governance in relation to the protection of biodiversity, highlights obstacles to effective governance in this area, and discusses the formation of environmental management plans and of environmental governance regimes to implement them. The final section of the paper suggests seven directions for ensuring the realisation of effective environmental governance.

Relevance:

80.00%

Publisher:

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms which learn from examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts, and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
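A minimal sketch of the evaluation pipeline described above (feature selection before learning, then Naive Bayes and SVM compared by cross-validation), using synthetic data in place of the NASA MDP and Eclipse metrics; the feature-selection method and scoring shown are illustrative choices, not necessarily those used in the thesis.

```python
# Minimal sketch of the pipeline on synthetic data standing in for module metrics;
# the thesis uses NASA MDP and Eclipse data and a wider range of feature-selection
# methods, so the choices below are illustrative only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 20 "software metrics" per module, few of them informative, imbalanced fault labels.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "Naive Bayes": make_pipeline(SelectKBest(mutual_info_classif, k=5), GaussianNB()),
    "SVM": make_pipeline(SelectKBest(mutual_info_classif, k=5),
                         StandardScaler(), SVC(kernel="rbf", class_weight="balanced")),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```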

Relevance:

80.00%

Publisher:

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule a repair and to acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations. Furthermore, they require a significant and sufficient number of fault indicators to accurately prognose component faults. Hence, better use of health indicators in prognostics for effective interpretation of the machine degradation process is still required. Major challenges for accurate long-term prediction of remaining useful life (RUL) remain to be addressed. Therefore, continuous development and improvement of machine health management systems and accurate long-term prediction of machine remnant life are required in real industrial applications. This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate and long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for classification of impending faults in a machine system and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) which effectively represent the dynamic and stochastic process of machine failure. The estimation of discrete health state probability for the prediction of machine remnant life is performed using classification algorithms. To choose the appropriate classifier for health state probability estimation in the proposed model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault data for three different faults in a high pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison study, SVMs were employed for health state probability estimation and the prediction of machine failure in this research. The proposed prognostic methodology has been successfully tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies using simulations and experiments indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the results of experimental tests show that the proposed model is capable of providing early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which can minimise the model training error without a significant decrease in prediction accuracy, was also examined through several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
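A minimal sketch of the health-state-probability idea under simplifying assumptions: degradation is discretised into a few states, an SVM with probability outputs estimates the state probabilities from condition-monitoring features, and a probability-weighted remaining life is formed. The features, number of states and nominal lives are illustrative, not those of the HP-LNG pump study.

```python
# Minimal sketch of health-state probability estimation for RUL, assuming
# degradation is discretised into states with nominal remaining lives; the
# thesis's actual features, states and RUL mapping differ.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic condition-monitoring features for 4 health states (0 = healthy ... 3 = near failure).
states = np.repeat(np.arange(4), 100)
features = rng.normal(loc=states[:, None] * np.array([0.8, 1.5]), scale=0.6, size=(400, 2))

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(features, states)

# Nominal remaining useful life (hours) associated with each health state (illustrative numbers).
state_rul = np.array([1000.0, 600.0, 250.0, 50.0])

x_new = np.array([[1.9, 3.2]])              # current condition-monitoring measurement
p_state = clf.predict_proba(x_new)[0]       # probability of being in each health state
rul_estimate = p_state @ state_rul          # probability-weighted RUL estimate

print("state probabilities:", np.round(p_state, 3))
print("estimated RUL (h):", round(float(rul_estimate), 1))
```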

Relevance:

80.00%

Publisher:

Abstract:

This paper explores the similarities and differences between bicycle and motorcycle crashes with other motor vehicles. If similar treatments can be effective for both bicycle and motorcycle crashes, then greater benefits in terms of crash costs saved may be possible for the same investment in treatments. To reduce the biases associated with under-reporting of these crashes to police, property damage and minor injury crashes were excluded. The most common crash type for both bicycles (31.1%) and motorcycles (24.5%) was an intersection crash involving vehicles from adjacent approaches. Drivers of the other vehicles were coded most at fault in the majority of two-unit bicycle (57.0%) and motorcycle (62.7%) crashes. The crash types, patterns of fault and factors affecting fault were generally similar for bicycle and motorcycle crashes. This confirms the need to combat the factors contributing to the failure of other drivers to yield right of way to two-wheelers, and suggests that some of these actions should prove beneficial to the safety of both motorized and non-motorized two-wheelers. In contrast, child bicyclists were more often at fault, particularly in crashes involving a vehicle leaving a driveway or footpath. The greater reporting of violations by riders and drivers in motorcycle crashes also deserves further investigation.

Relevance:

80.00%

Publisher:

Abstract:

Numbers, rates and proportions of those remanded in custody have increased significantly in recent decades across a range of jurisdictions. In Australia they have doubled since the early 1980s, such that close to one in four prisoners is currently unconvicted. Taking NSW as a case study and drawing on the recent New South Wales Law Reform Commission Report on Bail (2012), this article will identify the key drivers of this increase in NSW, predominantly a form of legislative hyperactivity involving constant changes to the Bail Act 1978 (NSW), changes which remove or restrict the presumption in favour of bail for a wide range of offences. The article will then examine some of the conceptual, cultural and practice shifts underlying the increase. These include: a shift away from a conception of bail as a procedural issue predominantly concerned with securing the attendance of the accused at trial and the integrity of the trial, to the use of bail for crime prevention purposes; the diminishing force of the presumption of innocence; the framing of a false opposition between an individual interest in liberty and a public interest in safety; a shift from determination of the individual case by reference to its own particular circumstances to determination by its classification within pre‐set legislative categories of offence types and previous convictions; a double jeopardy effect arising in relation to people with previous convictions for which they have already been punished; and an unacknowledged preventive detention effect arising from the increased emphasis on risk. Many of these conceptual shifts are apparent in the explosion in bail conditions and the KPI‐driven policing of bail conditions and consequent rise in revocations, especially in relation to juveniles. The paper will conclude with a note on the NSW Government’s response to the NSW LRC Report in the form of a Bail Bill (2013) and brief speculation as to its likely effects.

Relevance:

80.00%

Publisher:

Abstract:

Australian Media Law details and explains the complex case law, legislation and regulations governing media practice in areas as diverse as journalism, advertising, multimedia and broadcasting. It examines the issues affecting traditional forms of media such as television, radio, film and newspapers, as well as more recent forms such as the internet, online forums and digital technology, in a clear and accessible format. New additions to the fifth edition include: - the implications of new anti-terrorism legislation for journalists; - developments in privacy law, including Law Reform recommendations for a statutory cause of action to protect personal privacy in Australia and the expanding privacy jurisprudence in the United Kingdom and New Zealand; - liability for defamation of internet search engines and service providers; - the High Court decision in Roadshow v iiNet and the position of internet service providers in relation to copyright infringement via their services; - new suppression order regimes; - statutory reforms providing journalists with a rebuttable presumption of non-disclosure when called upon to reveal their sources in a court of law; - recent developments regarding whether journalists can use electronic devices to collect and disseminate information about court proceedings; - contempt committed by jurors via social media; and an examination of recent decisions on defamation, confidentiality, vilification, copyright and contempt.

Relevance:

80.00%

Publisher:

Abstract:

This paper reviews the notion of Byzantine-resilient distributed computing systems, the relevant protocols and their possible applications as reported in the literature. The three agreement problems, namely the consensus problem, the interactive consistency problem, and the generals problem, are discussed. Various agreement protocols for the Byzantine generals problem are summarized in terms of their performance and level of fault-tolerance. The three classes of Byzantine agreement protocols discussed are deterministic, randomized, and approximate agreement protocols. Finally, the application of Byzantine agreement protocols to clock synchronization is highlighted.
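As an illustration of the generals problem, below is a compact simulation of the oral-messages algorithm OM(m), with the simplifying assumption that a traitor deterministically flips the value it forwards rather than behaving arbitrarily.

```python
# Compact simulation of the oral-messages algorithm OM(m) for the Byzantine
# generals problem. Simplification: a traitor deterministically flips every value
# it forwards (real traitors may behave arbitrarily).
def om(commander, lieutenants, value, m, traitors):
    """Return {lieutenant: decided value (0/1)} after running OM(m)."""
    def sent(sender, v):
        return 1 - v if sender in traitors else v

    # The commander sends its value (possibly corrupted) to every lieutenant.
    received = {l: sent(commander, value) for l in lieutenants}
    if m == 0:
        return received

    # Each lieutenant relays the value it received to the others via OM(m-1).
    relayed = {l: [] for l in lieutenants}
    for i in lieutenants:
        others = [l for l in lieutenants if l != i]
        for j, v in om(i, others, received[i], m - 1, traitors).items():
            relayed[j].append(v)

    # Each lieutenant decides by majority over its direct and relayed values.
    decisions = {}
    for l in lieutenants:
        values = [received[l]] + relayed[l]
        decisions[l] = 1 if 2 * sum(values) > len(values) else 0
    return decisions

lieutenants = ["L1", "L2", "L3"]
print(om("C", lieutenants, 1, m=1, traitors={"L3"}))  # loyal L1 and L2 agree on 1
print(om("C", lieutenants, 1, m=1, traitors={"C"}))   # loyal lieutenants still agree with each other
```

With four generals and one traitor, OM(1) suffices: the loyal lieutenants reach the same decision whether the traitor is a lieutenant or the commander.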

Relevance:

80.00%

Publisher:

Abstract:

In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. We study in this paper the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to generate synchronizable test sequences. However, none of these studies addresses the fault coverage provided by their methods. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
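A minimal sketch of one common formalization of synchronizability for a two-port FSM: the tester at the port supplying the next input must have taken part in the previous transition, either by sending its input or by receiving one of its outputs. The machine and port names are illustrative, and details of the definition vary between papers.

```python
# Sketch of a synchronizability check for test sequences over a two-port FSM:
# the tester at the next input's port must have been involved in the previous
# transition (it sent that input or observed one of its outputs).
from dataclasses import dataclass, field

@dataclass
class Transition:
    src: str
    dst: str
    in_port: str               # port ("U" or "L") whose tester supplies the input
    inp: str
    outputs: dict = field(default_factory=dict)   # port -> output symbol

def synchronizable(seq):
    """True if no consecutive pair of transitions has a synchronization problem."""
    for prev, nxt in zip(seq, seq[1:]):
        involved = {prev.in_port} | set(prev.outputs)
        if nxt.in_port not in involved:
            return False
    return True

t1 = Transition("s0", "s1", "U", "a", {"U": "x"})            # only port U takes part
t2 = Transition("s1", "s0", "L", "b", {"U": "y", "L": "z"})  # port L sends the next input
t3 = Transition("s0", "s1", "U", "a", {"L": "x"})            # port L observes an output

print(synchronizable([t1, t2]))  # False: L saw nothing of t1, so it cannot know when to send b
print(synchronizable([t3, t2]))  # True: L observed output x of t3
```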

Relevance:

80.00%

Publisher:

Abstract:

Bypass operation with the aid of a special bypass valve is an important part of present-day schemes of protection for h.v. d.c. transmission systems. In this paper, the possibility of using two valves connected to any phase in the bridge convertor for the purpose of bypass operation is studied. The scheme is based on the use of logic circuits in conjunction with modified methods of fault detection. Analysis of the faults in a d.c. transmission system is carried out with the object of determining the requirements of such a logic-circuit control system. An outline of the scheme for the logic-circuit control of the bypass operation for both rectifier and invertor bridges is then given. Finally, conclusions are drawn regarding the advantages of such a system, which include reduction in the number of valves, prevention of severe faults and fast clearance of faults, in addition to the immediate location of the fault and its nature.
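An illustrative sketch only (not the paper's logic-circuit design): in a six-pulse Graetz bridge, the two valves connected to the same a.c. phase can be fired together to act as a bypass pair, so the selection logic reduces to mapping a detected fault onto the valve pair of the chosen phase. The valve numbering below follows the usual bridge convention and is an assumption for illustration.

```python
# Illustrative sketch only (not the paper's logic circuits): in a six-pulse Graetz
# bridge, the two valves connected to the same a.c. phase form a bypass pair.
# Conventional valve numbering assumed: phase a -> (1, 4), b -> (3, 6), c -> (5, 2).
BYPASS_PAIR = {"a": (1, 4), "b": (3, 6), "c": (5, 2)}

def bypass_command(fault_detected, phase):
    """Return the pair of valves to fire for bypass, or None if no action is needed."""
    return BYPASS_PAIR[phase] if fault_detected else None

print(bypass_command(True, "b"))   # (3, 6): fire both phase-b valves to bypass the bridge
print(bypass_command(False, "b"))  # None: no fault, normal operation continues
```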

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a fast and accurate relaying technique for a long 765 kV UHV transmission line based on a support vector machine. For a long EHV/UHV transmission line with large distributed capacitance, a traditional distance relay which uses a lumped parameter model of the transmission line can malfunction. At a sampling frequency of 1 kHz, a quarter cycle of instantaneous current and voltage values of all phases at the relaying end is fed to a Support Vector Machine (SVM). The SVM detects the fault type accurately using 3 milliseconds of post-fault data and reduces the fault clearing time, which improves system stability and power transfer capability. The performance of the relaying scheme has been checked with a typical 765 kV Indian transmission system simulated using an Electromagnetic Transients Program (EMTP) developed by the authors, in which a distributed parameter line model is used. More than 15,000 different short circuit fault cases were simulated by varying fault location, fault impedance, fault incidence angle and fault type to train the SVM for high-speed, accurate relaying. Simulation studies have shown that the proposed relay provides fast and accurate protection irrespective of fault location, fault impedance, fault incidence time and fault type. The proposed scheme can also be used to augment existing relaying, particularly for Zone-2 and Zone-3 protection.
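A minimal sketch of the classification step on synthetic waveforms rather than EMTP output: at 1 kHz sampling on a 50 Hz system, a quarter cycle is 5 samples, so the feature vector has 5 samples x 6 signals (three phase voltages and three phase currents). The fault-type labels, signal model and SVM settings are illustrative assumptions.

```python
# Minimal sketch of SVM fault-type classification from quarter-cycle windows.
# Synthetic signals stand in for EMTP simulations; labels and settings are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
FAULT_TYPES = ["AG", "BG", "CG", "AB", "BC", "ABC"]      # illustrative subset of fault types
INVOLVED = [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 1, 2}]    # phases affected by each fault type

def synthetic_case(fault_idx):
    """Fake quarter-cycle window: depressed voltage and boosted current on faulted phases."""
    t = np.arange(5) / 1000.0                            # 5 samples at 1 kHz
    signals = []
    for ph in range(3):
        v = np.cos(2 * np.pi * 50 * t - ph * 2 * np.pi / 3)
        i = np.sin(2 * np.pi * 50 * t - ph * 2 * np.pi / 3)
        scale = 4.0 if ph in INVOLVED[fault_idx] else 1.0
        signals += [v / scale + rng.normal(0, 0.05, 5), i * scale + rng.normal(0, 0.05, 5)]
    return np.concatenate(signals)

y = np.arange(1200) % len(FAULT_TYPES)
X = np.array([synthetic_case(k) for k in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("held-out fault-type accuracy:", round(clf.score(X_te, y_te), 3))
```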