8 results for High-Level Petri Nets
in the Aston University Research Archive
Abstract:
We address the question of how to obtain effective fusion of identification information such that it is robust to the quality of this information. As well as technical issues, data fusion is encumbered with a collection of (potentially confusing) practical considerations. These considerations are described during the early chapters, in which a framework for data fusion is developed. Following this process of diversification it becomes clear that the original question is not well posed and requires more precise specification. We use the framework to focus on some of the technical issues relevant to the question being addressed. We show that fusion of hard decisions through use of an adaptive version of the maximum a posteriori decision rule yields acceptable performance. Better performance is possible using probability-level fusion, as long as the probabilities are accurate. Of particular interest is the prevalence of overconfidence and the effect it has on fused performance. The production of accurate probabilities from poor quality data forms the latter part of the thesis. Two approaches are taken. Firstly, the probabilities may be moderated at source (either analytically or numerically). Secondly, the probabilities may be transformed at the fusion centre. In each case an improvement in fused performance is demonstrated. We therefore conclude that, in order to obtain robust fusion, care should be taken to model the probabilities accurately, either at the source or centrally.
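The contrast the abstract draws between fusing hard decisions and fusing probabilities can be illustrated with a toy sketch (this is not the thesis's adaptive rule; the class labels and per-source posteriors below are invented, and the product rule assumes conditionally independent sources):

```python
import math
from collections import Counter

def fuse_probabilities(posteriors):
    """Probability-level fusion: combine per-source class posteriors
    under an independence assumption (product rule), then renormalise."""
    classes = posteriors[0].keys()
    # Sum log-probabilities per class to avoid underflow.
    log_scores = {c: sum(math.log(p[c]) for p in posteriors) for c in classes}
    m = max(log_scores.values())
    unnorm = {c: math.exp(s - m) for c, s in log_scores.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

def fuse_hard_decisions(decisions):
    """Decision-level fusion: simple majority vote over hard labels."""
    return Counter(decisions).most_common(1)[0][0]

# Three sources; the third is poor quality but (over)confident.
sources = [
    {"target": 0.7, "clutter": 0.3},
    {"target": 0.6, "clutter": 0.4},
    {"target": 0.2, "clutter": 0.8},
]
fused = fuse_probabilities(sources)
vote = fuse_hard_decisions([max(p, key=p.get) for p in sources])
```

In this invented example the majority vote says "target" while the product rule is pulled towards "clutter" by the third source, hinting at why overconfident probabilities matter for probability-level fusion.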
Abstract:
THE PURPOSE OF THIS ARTICLE is twofold: first, to provide a general overview of two of the main cognitive neuroscientific techniques available, specifically functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS); and second, to apply these techniques to elaborate a discussion of an aspect of higher-level vision, namely implied motion, that is, the perception of movement from a static image.
Abstract:
Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a unique feature of survival data analysis. This paper proposed an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks. It was merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensemble artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, namely the least absolute shrinkage and selection operator (LASSO) and Akaike information criterion (AIC) methods. The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 prediction compared to the model built with a single network. Although the size of the final reduced model using the neural networks and their ensembles is greater than that of other methods, the model outperformed the others in both concordance index and sensitivity for center 2 prediction. This indicates the reduced model is more powerful for cross-center prediction.
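The concordance index used above to compare models can be computed by counting comparable pairs, where a pair is comparable when one subject's event is observed before the other's follow-up time (a minimal sketch; variable names are illustrative, ties in time are ignored, and this is not the paper's own implementation):

```python
def concordance_index(times, events, risk_scores):
    """C-index: fraction of comparable pairs in which the subject with
    the earlier observed event also has the higher predicted risk.
    times: follow-up times; events: 1 if event observed, 0 if censored;
    risk_scores: higher score = higher predicted risk."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only if i's event is observed
            # and occurs strictly before j's follow-up time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A model whose risk ordering perfectly matches the observed event ordering scores 1.0; random scores give about 0.5.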
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some larger physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process-systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in the SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined.
The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured, and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
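The firing rules that the thesis formalises for SFC follow the standard semantics of Petri net transitions, which can be sketched as follows (an illustrative fragment only, not the thesis's extended time-related model; the place and transition names are invented):

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds at least
    as many tokens as the arc weight requires."""
    return all(marking.get(p, 0) >= w for p, w in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume tokens from input places,
    deposit tokens into output places, and return the new marking."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# A single step: one token moves from place p1 to place p2.
t = {"in": {"p1": 1}, "out": {"p2": 1}}
after = fire({"p1": 1}, t)  # -> {"p1": 0, "p2": 1}
```

In SFC terms, places correspond to steps, tokens to active steps, and a transition's enabling condition to its guard being satisfied while all preceding steps are active.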
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing `correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component because it is considered that the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. 
A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification, and performance analysis of the model, are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error-prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communication; this may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems.
Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are given to show how the tool works in the early design phase for fault prevention before the program is ever run.
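The reachability-tree generation and deadlock flagging that the second part of the tool performs can be sketched as a breadth-first search over markings (a toy model; the transitions below are invented for illustration, not translated from Occam, and real nets require bounding to avoid state explosion):

```python
from collections import deque

def enabled(marking, t):
    return all(marking.get(p, 0) >= w for p, w in t["in"].items())

def fire(marking, t):
    m = dict(marking)
    for p, w in t["in"].items():
        m[p] -= w
    for p, w in t["out"].items():
        m[p] = m.get(p, 0) + w
    return m

def reachability(initial, transitions):
    """Breadth-first exploration of reachable markings. Returns the set
    of markings seen and those with no enabled transition, which are
    the 'deadlock potential' states a user would explore further."""
    seen = {frozenset(initial.items())}
    queue = deque([initial])
    deadlocks = []
    while queue:
        m = queue.popleft()
        fired = False
        for t in transitions:
            if enabled(m, t):
                fired = True
                m2 = fire(m, t)
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
        if not fired:
            deadlocks.append(m)  # dead marking: nothing can fire
    return seen, deadlocks

# Toy net: from p0 a token may cycle through p1, or fall into p2,
# a place with no outgoing transition (the deadlock).
transitions = [
    {"in": {"p0": 1}, "out": {"p1": 1}},
    {"in": {"p0": 1}, "out": {"p2": 1}},
    {"in": {"p1": 1}, "out": {"p0": 1}},
]
markings, deadlocks = reachability({"p0": 1}, transitions)
```

Every dead marking found here has the token stranded in p2, which is the kind of structural trap the tool lets a designer eliminate before the program is ever run.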
Abstract:
The thesis describes an investigation into methods for the specification, design and implementation of computer control systems for flexible manufacturing machines comprising multiple, independent, electromechanically driven mechanisms. An analysis is made of the elements of conventional mechanically coupled machines in order that the operational functions of these elements may be identified. This analysis is used to define the scope of the requirements necessary to specify the format, function and operation of a flexible, independently driven mechanism (IDM) machine. A discussion of how this type of machine can accommodate modern manufacturing needs for high speed and flexibility is presented. A sequential method of capturing requirements for such machines is detailed, based on a hierarchical partitioning of machine requirements from product to independent drive mechanism. A classification of mechanisms using notations, including data flow diagrams and Petri nets, is described which supports the capture and allows the validation of requirements. A generic design for a modular IDM machine controller is derived, based upon the hierarchy of control identified in these machines. A two-mechanism experimental machine is detailed, which is used to demonstrate the application of the specification, design and implementation techniques. A computer controller prototype and a fully flexible implementation for the IDM machine, based on Petri net models described using the concurrent programming language Occam, are detailed. The ability of this modular computer controller to support flexible, safe and fault-tolerant operation of the two intermittent-motion, discrete-synchronisation independent drive mechanisms is presented. The application of the machine development methodology to industrial projects is established.
Abstract:
Human Resource (HR) systems and practices generally referred to as High Performance Work Practices (HPWPs) (Huselid, 1995) (sometimes termed High Commitment Work Practices or High Involvement Work Practices) have attracted much research attention in past decades. Although many conceptualizations of the construct have been proposed, there is general agreement that HPWPs encompass a bundle or set of HR practices including sophisticated staffing, intensive training and development, incentive-based compensation, performance management, initiatives aimed at increasing employee participation and involvement, job safety and security, and work design (e.g. Pfeffer, 1998). It is argued that these practices either directly or indirectly influence the extent to which employees' knowledge, skills, abilities, and other characteristics are utilized in the organization. Research spanning nearly 20 years has provided considerable empirical evidence for relationships between HPWPs and various measures of performance including increased productivity, improved customer service, and reduced turnover (e.g. Guthrie, 2001; Belt & Giles, 2009). With the exception of a few papers (e.g. Laursen & Foss, 2003), this literature appears to lack focus on how HPWPs influence or foster more innovation-related attitudes and behaviours, extra-role behaviors, and performance. This situation exists despite the vast evidence demonstrating the importance of innovation, proactivity, and creativity in their various forms to individual, group, and organizational performance outcomes. Several pertinent issues arise when considering HPWPs and their relationship to innovation and performance outcomes. At a broad level is the issue of which HPWPs are related to which innovation-related variables. Another issue not well identified in research relates to employees' perceptions of HPWPs: does an employee actually perceive the HPWP–outcomes relationship?
No matter how well HPWPs are designed, if they are not perceived and experienced by employees to be effective or worthwhile, then their likely success in achieving positive outcomes is limited. At another level, research needs to consider the mechanisms through which HPWPs influence innovation and performance. The research question here relates to which mediating variables are important to the success or failure of HPWPs in impacting innovative behaviours and attitudes, and what the potential process considerations are. These questions call for theory refinement and the development of more comprehensive models of the HPWP-innovation/performance relationship that include intermediate linkages and boundary conditions (Ferris, Hochwarter, Buckley, Harrell-Cook, & Frink, 1999). While there are many calls for this type of research to be made a high priority, to date researchers have made few inroads into answering these questions. This symposium brings together researchers from Australia, Europe, Asia and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses an HPWP and potential variables that can facilitate or hinder the effects of these practices on innovation- and performance-related outcomes. The first paper by Johnston and Becker explores HPWPs in relation to work design in a disaster response organization that shifts quickly from business as usual to rapid response. The researchers examine how the enactment of the organizational response is devolved to groups and individuals. Moreover, they assess motivational characteristics that exist in dual work designs (normal operations and periods of disaster activation) and the implications for innovation. The second paper by Jørgensen reports the results of an investigation into training and development practices and innovative work behaviors (IWBs) in Danish organizations.
Research on how to design and implement training and development initiatives to support IWBs and innovation in general is surprisingly scant and often vague. This research investigates the mechanisms by which training and development initiatives influence employee behaviors associated with innovation, and provides insights into how training and development can be used effectively by firms to attract and retain valuable human capital in knowledge-intensive firms. The next two papers in this symposium consider the role of employee perceptions of HPWPs and their relationships to innovation-related variables and performance. First, Bish and Newton examine perceptions of the characteristics and awareness of occupational health and safety (OHS) practices and their relationship to individual-level adaptability and proactivity in an Australian public service organization. The authors explore the role of perceived supportive and visionary leadership and its impact on the OHS policy-adaptability/proactivity relationship. The study highlights the positive main effects of awareness and characteristics of OHS policies, and of supportive and visionary leadership, on individual adaptability and proactivity. It also highlights the important moderating effects of leadership in the OHS policy-adaptability/proactivity relationship. Okhawere and Davis present a conceptual model developed for a Nigerian study in the safety-critical oil and gas industry that takes a multi-level approach to the HPWP-safety relationship. Adopting a social exchange perspective, they propose that at the organizational level, organizational climate for safety mediates the relationship between enacted HPWPs and organizational safety performance (prescribed and extra-role performance). At the individual level, the experience of HPWPs impacts on individual behaviors and attitudes in organizations, here operationalized as safety knowledge, skills and motivation, and these influence individual safety performance.
However, these latter relationships are moderated by organizational climate for safety. A positive organizational climate for safety strengthens the relationship between individual safety behaviors and attitudes and individual-level safety performance, therefore suggesting a cross-level boundary condition. The model includes both safety performance (behaviors) and organizational-level safety outcomes, operationalized as accidents, injuries, and fatalities. The final paper of this symposium by Zhang and Liu explores leader development and the relationship between transformational leadership and employee creativity and innovation in China. The authors further develop a model that incorporates the effects of extrinsic motivation (pay for performance: PFP) and employee collectivism in the leader-employee creativity relationship. The paper's contributions include the incorporation of a PFP effect on creativity as a moderator, rather than as a predictor as in most studies; the exploration of the PFP effect from both fairness and strength perspectives; and the advancement of knowledge on the impact of collectivism on the leader-employee creativity link. Last, this is the first study to examine three-way interactional effects among leader-member exchange (LMX), PFP and collectivism, thus enriching our understanding of how to promote employee creativity. In conclusion, this symposium draws upon the findings of four empirical studies and one conceptual study to provide insight into how different variables facilitate or potentially hinder the influence of various HPWPs on innovation and performance. We will propose a number of questions for further consideration and discussion. The symposium will address the Conference Theme of 'Capitalism in Question' by highlighting how HPWPs can promote the financial health and performance of organizations while maintaining a high level of regard and respect for employees and organizational stakeholders.
Furthermore, the focus on different countries and cultures explores the overall research question in relation to different modes or stages of development of capitalism.