32 results for Safety critical applications
in Aston University Research Archive
Abstract:
Reliability of power converters is of crucial importance in switched reluctance motor drives used for safety-critical applications. Open-circuit faults in power converters will cause the motor to run in unbalanced states and, if left untreated, will lead to damage to the motor and power modules, and even to catastrophic failure of the whole drive system. This study is focused on using a single current sensor to detect open-circuit faults accurately. An asymmetrical half-bridge converter is considered, and single-phase and two-phase open-circuit faults are analysed. Three different bus positions are defined. On the basis of a fast Fourier transform algorithm with Blackman window interpolation, the bus current spectra before and after open-circuit faults are analysed in detail. The fault characteristics are extracted accurately by normalising the phase fundamental frequency component and the double phase fundamental frequency component, and the fault characteristics of the three bus detection schemes are also compared. The open-circuit faults can be located by relating the bus current to the rotor position. The effectiveness of the proposed diagnosis method is validated by simulation results and experimental tests.
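As a rough illustration of this style of feature extraction (not the authors' code), the sketch below applies a Blackman-windowed FFT to a bus current signal and normalises the magnitudes at the phase fundamental and double-fundamental frequencies; the sampling rate, fundamental frequency, and test waveform are all assumed values.

```python
# Illustrative sketch: open-circuit fault features from a bus current signal
# via a Blackman-windowed FFT. Signal parameters are hypothetical.
import numpy as np

fs = 20e3            # sampling frequency [Hz] (assumed)
f_ph = 100.0         # phase fundamental frequency [Hz] (assumed)
t = np.arange(0, 0.5, 1 / fs)
# Hypothetical bus current: DC level plus fault-induced harmonics at f_ph and 2*f_ph
i_bus = 5.0 + 0.8 * np.sin(2 * np.pi * f_ph * t) + 0.3 * np.sin(2 * np.pi * 2 * f_ph * t)

win = np.blackman(len(i_bus))
spec = np.abs(np.fft.rfft(i_bus * win))
freqs = np.fft.rfftfreq(len(i_bus), 1 / fs)

def component(f):
    """Magnitude of the spectral component nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

dc = component(0.0)
# Fault features: fundamental and double-fundamental magnitudes,
# normalised by the DC component so they are less load-dependent.
k1 = component(f_ph) / dc
k2 = component(2 * f_ph) / dc
print(f"normalised f_ph component: {k1:.3f}, 2*f_ph component: {k2:.3f}")
```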
Abstract:
In developing neural network techniques for real-world applications, it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian Process and Predictive error bars evaluated on real data. The problem domain is concerned with the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and, due to its safety-critical nature, is a good candidate for exploring the use of confidence predictions.
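Of the three methods compared, the Gaussian Process route is the easiest to sketch from first principles: the predictive variance at each test input directly yields a point-wise confidence interval. The kernel, noise level, and data below are illustrative, not from the paper.

```python
# Minimal GP regression sketch: predictive mean and point-wise error bars.
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)   # noisy training targets (synthetic)
xs = np.linspace(0, 5, 100)                     # test inputs

noise = 0.1 ** 2
K = rbf(x, x) + noise * np.eye(len(x))
Ks = rbf(xs, x)
Kss = rbf(xs, xs)

alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                               # predictive mean
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.diag(cov) + noise)             # point-wise predictive std

# 95% confidence interval on each prediction
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```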
Abstract:
There is an increasing emphasis on the use of software to control safety-critical plants across a wide range of applications. The importance of ensuring the correct operation of such potentially hazardous systems points to an emphasis on the verification of the system relative to a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations which are inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety-critical control systems. These provide a mathematical representation of a system which permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety-critical control application. CSP is a discrete-event-based process algebra which has a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study which concerns the concurrent control of a real-time high-speed mechanism. It is seen from the case study that the axiomatic verification method employed is complex. It requires the user to have a relatively comprehensive understanding of the nature of the proof system and the application. By making a series of observations, the thesis notes that CSP possesses the scope to support a more procedural approach to verification, in the form of testing. This thesis investigates the technique of testing and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model, it is shown that for certain processes and specifications the obligation of verification can be reduced to that of testing the specification over a finite subset of the behaviours of the process.
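The flavour of the Ideal Test Sets idea can be sketched informally: for a suitably structured process and a trace-style specification, checking the specification over a bounded, finite set of the process's traces stands in for a full proof. The toy process and predicate below are invented for illustration and gloss over the real CSP semantics.

```python
# Toy illustration: reduce verification to testing a trace specification
# over a finite set of behaviours. Process and spec are hypothetical.
from itertools import product

ALPHABET = ["req", "ack"]

def is_trace(tr):
    """Hypothetical process: every 'ack' is immediately preceded by 'req'."""
    return all(e != "ack" or (i > 0 and tr[i - 1] == "req") for i, e in enumerate(tr))

def spec(tr):
    """Specification: the number of acks never exceeds the number of reqs."""
    return tr.count("ack") <= tr.count("req")

# Finite test set: all process traces up to a bound chosen so that (for this
# simple regular process) passing the tests suffices for the specification.
BOUND = 4
test_set = [tr for n in range(BOUND + 1)
            for tr in product(ALPHABET, repeat=n) if is_trace(tr)]
assert all(spec(tr) for tr in test_set)
print(f"specification holds on all {len(test_set)} traces up to length {BOUND}")
```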
Abstract:
Human Resource (HR) systems and practices generally referred to as High Performance Work Practices (HPWPs) (Huselid, 1995), sometimes termed High Commitment Work Practices or High Involvement Work Practices, have attracted much research attention in past decades. Although many conceptualizations of the construct have been proposed, there is general agreement that HPWPs encompass a bundle or set of HR practices including sophisticated staffing, intensive training and development, incentive-based compensation, performance management, initiatives aimed at increasing employee participation and involvement, job safety and security, and work design (e.g. Pfeffer, 1998). It is argued that these practices directly and indirectly influence the extent to which employees' knowledge, skills, abilities, and other characteristics are utilized in the organization. Research spanning nearly 20 years has provided considerable empirical evidence for relationships between HPWPs and various measures of performance including increased productivity, improved customer service, and reduced turnover (e.g. Guthrie, 2001; Belt & Giles, 2009). With the exception of a few papers (e.g., Laursen & Foss, 2003), this literature appears to lack focus on how HPWPs influence or foster innovation-related attitudes and behaviours, extra-role behaviours, and performance. This situation exists despite the vast evidence demonstrating the importance of innovation, proactivity, and creativity in their various forms to individual, group, and organizational performance outcomes. Several pertinent issues arise when considering HPWPs and their relationship to innovation and performance outcomes. At a broad level is the issue of which HPWPs are related to which innovation-related variables. Another issue not well identified in research relates to employees' perceptions of HPWPs: does an employee actually perceive the HPWP-outcomes relationship? No matter how well HPWPs are designed, if they are not perceived and experienced by employees to be effective or worthwhile, their likely success in achieving positive outcomes is limited. At another level, research needs to consider the mechanisms through which HPWPs influence innovation and performance. The research question here relates to what mediating variables are important to the success or failure of HPWPs in impacting innovative behaviours and attitudes, and what the potential process considerations are. These questions call for theory refinement and the development of more comprehensive models of the HPWP-innovation/performance relationship that include intermediate linkages and boundary conditions (Ferris, Hochwarter, Buckley, Harrell-Cook, & Frink, 1999). While there are many calls for this type of research to be made a high priority, to date researchers have made few inroads into answering these questions. This symposium brings together researchers from Australia, Europe, Asia and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a HPWP and potential variables that can facilitate or hinder the effects of these practices on innovation- and performance-related outcomes. The first paper by Johnston and Becker explores HPWPs in relation to work design in a disaster response organization that shifts quickly from business as usual to rapid response. The researchers examine how the enactment of the organizational response is devolved to groups and individuals.
Moreover, they assess motivational characteristics that exist in dual work designs (normal operations and periods of disaster activation) and the implications for innovation. The second paper by Jørgensen reports the results of an investigation into training and development practices and innovative work behaviors (IWBs) in Danish organizations. Research on how to design and implement training and development initiatives to support IWBs and innovation in general is surprisingly scant and often vague. This research investigates the mechanisms by which training and development initiatives influence employee behaviors associated with innovation, and provides insights into how training and development can be used effectively by firms to attract and retain valuable human capital in knowledge-intensive firms. The next two papers in this symposium consider the role of employee perceptions of HPWPs and their relationships to innovation-related variables and performance. First, Bish and Newton examine perceptions of the characteristics and awareness of occupational health and safety (OHS) practices and their relationship to individual-level adaptability and proactivity in an Australian public service organization. The authors explore the role of perceived supportive and visionary leadership and its impact on the OHS policy-adaptability/proactivity relationship. The study highlights the positive main effects of awareness and characteristics of OHS policies, and of supportive and visionary leadership, on individual adaptability and proactivity. It also highlights the important moderating effects of leadership in the OHS policy-adaptability/proactivity relationship. Okhawere and Davis present a conceptual model developed for a Nigerian study in the safety-critical oil and gas industry that takes a multi-level approach to the HPWP-safety relationship. Adopting a social exchange perspective, they propose that at the organizational level, organizational climate for safety mediates the relationship between enacted HPWPs and organizational safety performance (prescribed and extra-role performance). At the individual level, the experience of HPWPs impacts individual behaviors and attitudes in organizations, here operationalized as safety knowledge, skills and motivation, and these influence individual safety performance. However, these latter relationships are moderated by organizational climate for safety. A positive organizational climate for safety strengthens the relationship between individual safety behaviors and attitudes and individual-level safety performance, thereby suggesting a cross-level boundary condition. The model includes both safety performance (behaviors) and organizational-level safety outcomes, operationalized as accidents, injuries, and fatalities. The final paper of this symposium, by Zhang and Liu, explores leader development and the relationship between transformational leadership and employee creativity and innovation in China. The authors further develop a model that incorporates the effects of extrinsic motivation (pay for performance: PFP) and employee collectivism in the leader-employee creativity relationship. The paper's contributions include the incorporation of a PFP effect on creativity as a moderator, rather than a predictor as in most studies; the exploration of the PFP effect from both fairness and strength perspectives; and the advancement of knowledge on the impact of collectivism on the leader-employee creativity link.
Last, this is the first study to examine three-way interaction effects among leader-member exchange (LMX), PFP and collectivism, thus enriching our understanding of how to promote employee creativity. In conclusion, this symposium draws upon the findings of four empirical studies and one conceptual study to provide insight into how different variables facilitate or potentially hinder the influence of various HPWPs on innovation and performance. We will propose a number of questions for further consideration and discussion. The symposium will address the Conference Theme of 'Capitalism in Question' by highlighting how HPWPs can promote the financial health and performance of organizations while maintaining a high level of regard and respect for employees and organizational stakeholders. Furthermore, the focus on different countries and cultures explores the overall research question in relation to different modes or stages of development of capitalism.
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational requirements in terms of memory and speed scale quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
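A minimal sketch of the Vecchia [1988]-style likelihood approximation that one of the parallel schemes builds on is given below: the joint Gaussian log-likelihood is factored into univariate conditionals, each conditioned on only a small set of nearby previously-ordered points, so the terms can be evaluated independently and in parallel. The covariance model, ordering, and data are illustrative assumptions.

```python
# Sketch of a Vecchia-style composite likelihood in 1-D (illustrative only).
import numpy as np

def cov(h, sill=1.0, rng_=1.0):
    """Exponential covariance as a function of separation h (assumed model)."""
    return sill * np.exp(-np.abs(h) / rng_)

def vecchia_loglik(x, y, m=5):
    """Approximate Gaussian log-likelihood with m-nearest conditioning sets."""
    order = np.argsort(x)                   # simple coordinate ordering
    x, y = x[order], y[order]
    ll = 0.0
    for i in range(len(x)):                 # each term is independently computable
        nbrs = np.arange(max(0, i - m), i)  # previous m points only
        if len(nbrs) == 0:
            mean, var = 0.0, cov(0.0)
        else:
            C = cov(x[nbrs, None] - x[None, nbrs]) + 1e-9 * np.eye(len(nbrs))
            c = cov(x[i] - x[nbrs])
            w = np.linalg.solve(C, c)
            mean = w @ y[nbrs]              # conditional mean given neighbours
            var = cov(0.0) - w @ c          # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = rng.standard_normal(200)                # placeholder data
print(vecchia_loglik(x, y))
```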
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification, and performance analysis of the model, are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error-prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are given to show how the tool works in the early design phase for fault prevention, before the program is ever run.
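The core of the simulator's second part can be sketched as a breadth-first exploration of a place/transition net's reachability set, flagging markings in which no transition is enabled; the two-place net below is a hypothetical stand-in for a translated Occam program.

```python
# Minimal Petri-net reachability sketch with deadlock flagging (illustrative).
from collections import deque

# Each transition: (tokens consumed per place, tokens produced per place).
# Hypothetical two-place net modelling a pair of communicating processes.
transitions = [
    ((1, 0), (0, 1)),   # t0: move a token from place 0 to place 1
    ((0, 1), (1, 0)),   # t1: move it back
]

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

def reachability(initial):
    """Breadth-first construction of the reachability set; report deadlocks."""
    seen, queue, deadlocks = {initial}, deque([initial]), []
    while queue:
        m = queue.popleft()
        succs = [fire(m, c, p) for c, p in transitions if enabled(m, c)]
        if not succs:
            deadlocks.append(m)     # no transition enabled: deadlock potential
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen, deadlocks

states, deadlocks = reachability((1, 0))
print(f"{len(states)} reachable markings, deadlocks: {deadlocks}")
```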
Abstract:
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are typically derived through safety analyses and can be implemented through the use of necessary design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example unexpected hazardous operating contexts arising from failures or degradation. For highly complex problems, it may be infeasible to determine, prior to deployment, the precise desired behavioural bounds of a function that addresses or minimises risk for hazardous operating cases. This paper presents an overview of the safety challenges associated with such a problem and how such problems might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers operating within complex and highly dynamic problem domains such as gas-turbine aero-engine control. Safety assurance arguments enabled by the framework and necessary for certification are also outlined.
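One way to picture the kind of on-line risk management such a framework performs (this is a hypothetical sketch, not the framework itself) is a runtime monitor that clamps an adaptive controller's demand to behavioural bounds and contracts those bounds as the assessed operating risk grows; all names and the risk model below are invented.

```python
# Hypothetical runtime bound-enforcement sketch for an adaptive controller.
from dataclasses import dataclass

@dataclass
class BehaviouralBounds:
    lo: float
    hi: float

def assess_risk(context: dict) -> float:
    """Invented on-line risk score in [0, 1] from sensed context."""
    return min(1.0, context.get("degradation", 0.0) + context.get("temp_excess", 0.0))

def safe_demand(raw_demand: float, bounds: BehaviouralBounds, risk: float) -> float:
    # Contract the permitted envelope towards its midpoint as risk grows.
    mid = 0.5 * (bounds.lo + bounds.hi)
    half = 0.5 * (bounds.hi - bounds.lo) * (1.0 - risk)
    return max(mid - half, min(mid + half, raw_demand))

bounds = BehaviouralBounds(lo=0.0, hi=100.0)      # e.g. fuel-flow demand limits
context = {"degradation": 0.2, "temp_excess": 0.1}
print(safe_demand(95.0, bounds, assess_risk(context)))
```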
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, due to a number of difficulties. These difficulties arise mainly from (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as of a lumped-parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and an incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, using sensitivity analysis, from the eigenvalues, or both the eigenvalues and eigenvectors, of the structure before and after perturbation. It is shown that with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined from the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
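The sensitivity-based updating step can be illustrated on a toy two-degree-of-freedom lumped-parameter model: stiffness parameters are iteratively corrected so the model's eigenvalues match 'measured' ones. The perturbation by added mass and the Bayesian weighting described in the thesis are omitted here for brevity, and all numbers are invented.

```python
# Iterative eigenvalue-sensitivity updating for a 2-DOF spring-mass model.
import numpy as np

M = np.diag([1.0, 1.5])                      # known diagonal mass matrix

def K(theta):
    k1, k2 = theta                           # spring stiffnesses (parameters)
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def eigvals(theta):
    """Sorted eigenvalues of M^-1 K (squared natural frequencies)."""
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K(theta))).real)

theta_true = np.array([120.0, 80.0])
lam_meas = eigvals(theta_true)               # stand-in for measured eigen-data

theta = np.array([100.0, 100.0])             # initial parameter estimates
for _ in range(20):
    lam = eigvals(theta)
    # Finite-difference sensitivity matrix d(lambda)/d(theta)
    S = np.column_stack([(eigvals(theta + dv) - lam) / 1e-6
                         for dv in 1e-6 * np.eye(2)])
    theta += np.linalg.lstsq(S, lam_meas - lam, rcond=None)[0]

print(theta)                                 # converges to [120, 80]
```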
Abstract:
Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Exploratory research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety-critical software. It was found that typical categorization approaches produce predictions which are too similar for combining them to be effective, since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented, which suggest that combining text categorizers can be successful provided the essential element of 'difference' is considered.
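The voting experiment's core mechanics are easy to sketch with synthetic predictions: combine categorizer outputs by majority vote and measure how often the constituents fail on the same records. In the toy data below the errors are independent, so voting helps; the paper's finding is that real categorizers' errors are typically too correlated for this gain.

```python
# Majority voting over classifier outputs, plus a failure-overlap diagnostic.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 1000)                 # binary category labels (synthetic)
# Three hypothetical categorizers, each ~85% accurate, with independent errors
flip = rng.random((3, 1000)) < 0.15
preds = np.where(flip, 1 - truth, truth)

majority = (preds.sum(axis=0) >= 2).astype(int)  # simple voting system
print("individual accuracy:", (preds == truth).mean(axis=1))
print("voted accuracy:     ", (majority == truth).mean())

# Diagnostic for 'difference': how often do two or more categorizers fail on
# the same record? High overlap is what limits combining in practice.
errors = preds != truth
print("joint-failure rate: ", (errors.sum(axis=0) >= 2).mean())
```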
Abstract:
The use of engineering materials in critical applications necessitates the accurate prediction of component lifetime for inspection and renewal purposes. In fatigue-limited situations, it is necessary to be able to predict the growth rates of cracks from initiation at a defect through to final fracture. To this end, fatigue crack growth data are presented for different microstructures of typical nickel-base superalloys used in gas turbine engines. Crack growth behaviour throughout the life history of the crack, i.e. from the short-crack through to the long-crack propagation regime, is described for each microstructural condition and discussed in terms of current theories of fatigue crack propagation.
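For long-crack growth, the standard engineering route from growth-rate data to a lifetime prediction is integration of the Paris law, da/dN = C (ΔK)^m; the constants, stress range, and geometry factor below are generic placeholders rather than values for the alloys studied.

```python
# Worked Paris-law life integration with placeholder constants.
import numpy as np

C, m = 1e-11, 3.0          # Paris constants (material-dependent, assumed)
sigma = 400e6              # cyclic stress range [Pa] (assumed)
Y = 1.12                   # geometry factor for a surface crack (assumed)
a, a_final = 50e-6, 5e-3   # initial defect size and critical size [m]

cycles, da = 0, 1e-6       # integrate growth in small crack-length increments
while a < a_final:
    dK = Y * sigma * np.sqrt(np.pi * a)    # stress intensity range [Pa*sqrt(m)]
    dadN = C * (dK / 1e6) ** m             # growth rate with dK in MPa*sqrt(m)
    cycles += da / dadN
    a += da

print(f"predicted life: {cycles:.3e} cycles")
```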
Abstract:
Insulated gate bipolar transistor (IGBT) modules are important safety-critical components in electrical power systems. Bond wire lift-off, a plastic deformation between a wire bond and the adjacent layers of a device caused by repeated power/thermal cycles, is the most common failure mechanism in IGBT modules. For the early detection and characterization of such failures, it is important to constantly detect or monitor the health state of IGBT modules, and the state of bond wires in particular. This paper introduces eddy current pulsed thermography (ECPT), a nondestructive evaluation technique, for the detection and characterization of bond wire lift-off in IGBT modules. After the introduction of the experimental ECPT system, numerical simulation work is reported. The presented simulations are based on the 3-D electromagnetic-thermal coupled finite-element method and analyze the transient temperature distribution within the bond wires. This paper illustrates the thermal patterns of bond wires under inductive heating with different wire statuses (lifted-off or well bonded) under two excitation conditions: nonuniform and uniform magnetic field excitation. Experimental results show that uniform excitation of healthy bond wires, using a Helmholtz coil, induces the same eddy currents in each wire, while different eddy currents are seen in faulty wires. Both experimental and numerical results show that ECPT can be used for the detection and characterization of bond wire faults in power semiconductors through the analysis of the transient heating patterns of the wires. The main impact of this paper is that it is the first time electromagnetic induction thermography, so-called ECPT, has been employed on power/electronic devices. Because it is capable of contactless inspection of multiple wires in a single pass, it opens a wide field of investigation in power/electronic devices for failure detection, performance characterization, and health monitoring.
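A plausible post-processing step for the transient heating patterns (an assumption on our part, not the paper's procedure) is to exploit the uniform-excitation result: healthy wires should show near-identical temperature transients, so a wire whose rise deviates from the group is flagged. The data below are synthetic.

```python
# Flagging an anomalous bond wire from synthetic transient heating curves.
import numpy as np

t = np.linspace(0, 0.2, 200)                      # heating phase [s] (assumed)
rng = np.random.default_rng(2)
# Five wires: a lifted-off wire heats differently because its current path
# and thermal contact change. Wire 4 is the (assumed) faulty one.
rates = np.array([50.0, 51.0, 49.5, 50.5, 65.0])  # heating amplitudes [K]
temps = rates[:, None] * (1 - np.exp(-t / 0.05)) \
        + 0.2 * rng.standard_normal((5, len(t)))  # noisy temperature rises

final_rise = temps[:, -1]
z = (final_rise - np.median(final_rise)) / final_rise.std()
print("flagged wires:", np.where(np.abs(z) > 2)[0])  # simple outlier test
```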
Abstract:
Congestion control is critical for the provisioning of quality of service (QoS) over dedicated short range communications (DSRC) vehicle networks for road safety applications. In this paper we propose a congestion control method for DSRC vehicle networks at road intersections, with the aims of providing high-availability and low-latency channels for high-priority emergency safety applications while maximizing channel utilization for low-priority routine safety applications. In this method an offline simulation-based approach is used to find the best possible configurations of message rate and MAC layer backoff exponent (BE) for a given number of vehicles equipped with DSRC radios. The identified best configurations are then used online by a roadside access point (AP) for system operation. Simulation results demonstrate that this adaptive method significantly outperforms the fixed control method under a varying number of vehicles. The impact of errors in estimating the number of vehicles in the network on system-level performance is also investigated.
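The offline/online split might be organised as a simple lookup table, as in the sketch below: offline simulation yields, for each vehicle count, a best (message rate, backoff exponent) pair, and the AP selects the entry nearest its current vehicle estimate online. The table values are placeholders, not results from the paper.

```python
# Offline-derived configuration table with online nearest-count lookup.
# Offline result: vehicle count -> (message rate [Hz], MAC backoff exponent)
best_config = {
    20:  (10, 4),
    50:  (8, 5),
    100: (5, 6),
    200: (2, 7),
}

def select_config(n_vehicles: int):
    """Online step at the AP: pick the nearest pre-computed configuration."""
    key = min(best_config, key=lambda k: abs(k - n_vehicles))
    return best_config[key]

rate, be = select_config(72)    # e.g. 72 vehicles estimated at the intersection
print(f"message rate {rate} Hz, backoff exponent {be}")
```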
Abstract:
Quality of service (QoS) support is critical for collaborative road safety applications based on dedicated short range communications (DSRC) vehicle networks. In this paper we propose an adaptive power and message rate control method for DSRC vehicle networks at road intersections. The design objective is to provide high-availability and low-latency channels for high-priority emergency safety applications while maximizing channel utilization for low-priority routine safety applications. In this method an offline simulation-based approach is used to find the best possible configurations of transmit power and message rate for given numbers of vehicles in the network. The identified best configurations are then used online by roadside access points (APs) according to the estimated number of vehicles. Simulation results show that this adaptive method significantly outperforms a fixed control method.
Abstract:
Dedicated short range communications (DSRC) has been regarded as one of the most promising technologies for providing robust communications in large-scale vehicle networks. It is designed to support both road safety and commercial applications. Road safety applications require reliable and timely wireless communications. However, as the medium access control (MAC) layer of DSRC is based on the IEEE 802.11 distributed coordination function (DCF), it is well known that the random channel access based MAC cannot provide guaranteed quality of service (QoS). It is therefore very important to understand the quantitative performance of DSRC, in order to make better decisions on its adoption, control, adaptation, and improvement. In this paper, we propose an analytic model to evaluate DSRC-based inter-vehicle communication. We investigate the impact of the channel access parameters associated with the different services, including the arbitration inter-frame space (AIFS) and the contention window (CW). Based on the proposed model, we analyze the successful message delivery ratio and the channel service delay for broadcast messages. The proposed analytical model provides a convenient tool to evaluate inter-vehicle safety applications and analyze the suitability of DSRC for road safety applications.
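A back-of-envelope view of how AIFS and CW differentiate services in a DCF/EDCA-style MAC: the mean wait before a broadcast transmission is roughly the AIFS plus the mean backoff, AIFS + (CW/2)·slot. The sketch below uses common 802.11p timing values, though the AIFSN/CW pairs are assumptions.

```python
# Rough channel access delay arithmetic for two DSRC access categories.
SLOT = 13e-6          # slot time [s] (802.11p)
SIFS = 32e-6          # short inter-frame space [s] (802.11p)

def aifs(aifsn: int) -> float:
    """Arbitration inter-frame space from its AIFSN number."""
    return SIFS + aifsn * SLOT

def mean_access_delay(aifsn: int, cw: int) -> float:
    """Expected wait: AIFS plus the mean backoff drawn from [0, CW]."""
    return aifs(aifsn) + (cw / 2.0) * SLOT

# Higher-priority class: small AIFSN and CW -> shorter expected delay
print(f"AC high: {mean_access_delay(aifsn=2, cw=3) * 1e6:.0f} us")
print(f"AC low:  {mean_access_delay(aifsn=9, cw=15) * 1e6:.0f} us")
```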
Abstract:
Dedicated Short Range Communication (DSRC) is a promising technique for vehicular ad hoc networks (VANETs) and collaborative road safety applications. As road safety applications require strict quality of service (QoS) from the VANET, it is crucial for DSRC to provide timely and reliable communications to make safety applications successful. In this paper we propose two adaptive message rate control algorithms for low-priority safety messages, in order to provide a highly available channel for high-priority emergency messages while improving channel utilization. In the algorithms, each vehicle monitors the channel load and independently controls its message rate by a modified additive increase and multiplicative decrease (AIMD) method. Simulation results demonstrate the effectiveness of the proposed rate control algorithms in adapting to dynamic traffic load.
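A minimal sketch of the AIMD rule described (thresholds, step sizes, and limits are illustrative): each vehicle samples the channel load and backs its rate off multiplicatively under congestion, otherwise probing upward additively.

```python
# One-step AIMD message-rate controller driven by measured channel load.
def aimd_update(rate: float, channel_load: float,
                load_target: float = 0.6,
                alpha: float = 0.5,        # additive increase [messages/s]
                beta: float = 0.5,         # multiplicative decrease factor
                r_min: float = 1.0, r_max: float = 10.0) -> float:
    """One control step of a modified AIMD message-rate controller."""
    if channel_load > load_target:
        rate *= beta                       # back off quickly under congestion
    else:
        rate += alpha                      # probe for spare capacity slowly
    return max(r_min, min(r_max, rate))

rate = 10.0
for load in [0.4, 0.5, 0.7, 0.8, 0.5]:     # sampled channel busy ratios
    rate = aimd_update(rate, load)
    print(f"load {load:.1f} -> rate {rate:.2f} msg/s")
```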