9 results for System Success
in Aston University Research Archive
Abstract:
Conventional project management techniques are not always sufficient to ensure time, cost and quality achievement of large-scale construction projects, owing to complexity in the planning, design and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, underestimation and improper estimation. Projects exposed to such an uncertain environment can be managed effectively by applying risk management throughout the project's life cycle. However, the effectiveness of risk management depends on the technique through which the effects of risk factors are analysed and quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple-attribute decision-making technique, as a tool for risk analysis because it can handle subjective as well as objective factors, which are often conflicting in nature, within a single decision model. This provides a decision support system (DSS) that enables project management to make the right decision at the right time, ensuring project success in line with organisation policy, project objectives and a competitive business environment. The whole methodology is explained through a case application to a cross-country petroleum pipeline project in India, and its effectiveness in project management is demonstrated.
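As an illustration of the AHP mechanics the study relies on, the sketch below derives priority weights for three risk factors from a pairwise comparison matrix and checks judgement consistency. The matrix values and factor names are illustrative assumptions, not figures from the pipeline case study.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty's 1-9 scale) for three risk
# factors: scope/design change, regulatory change, inflation.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Priority weights come from the principal eigenvector of the matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: judgements are conventionally acceptable if CR < 0.1.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index
ri = 0.58                              # random index for n = 3
cr = ci / ri

print("weights:", weights.round(3), "CR:", round(cr, 3))
```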
Abstract:
The development of an information system in Caribbean public sector organisations is usually seen as a matter of installing hardware and software according to a directive from senior management, without much planning. This results in heavy investment in procuring hardware and software without improving overall system performance. Increasingly, Caribbean organisations are looking for assurances about information system performance before making investment decisions, not only to satisfy the funding agencies but also to be competitive in a dynamic, global business environment. This study demonstrates an information system planning approach using a process-reengineering framework. Firstly, the stakeholders of the business functions are identified along with their relationships and requirements. Secondly, process reengineering is carried out to develop the system requirements, and information technology is then selected through detailed system requirement analysis. Thirdly, cost-benefit analysis, identification of critical success factors and risk analysis are carried out to strengthen the selection. The entire methodology is demonstrated through an information system project in the Barbados Drug Service, a public sector organisation in the Caribbean.
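To make the cost-benefit step concrete, here is a minimal net-present-value comparison of two candidate IT options. The discount rate, option names and cash flows are invented for illustration and do not come from the Barbados project.

```python
# Net present value of a cash-flow stream (year 0 = initial outlay).
def npv(rate: float, cashflows: list[float]) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical options: outlay followed by annual net benefits (in $000s).
options = {
    "in-house system": [-250, 60, 90, 110, 110, 110],
    "packaged system": [-150, 40, 60, 70, 70, 70],
}

for name, flows in options.items():
    print(f"{name}: NPV = {npv(0.10, flows):.1f}")  # 10% discount rate
```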
Abstract:
Business process simulation (BPS) is used to evaluate the effect of redesigning a police road traffic accident (RTA) reporting system. The new system aims to provide timely statistical analysis of traffic behaviour to government bodies and to enable more effective utilisation of traffic police personnel. The simulation method is demonstrated in the context of assisting process change enabled by information systems in an organisation with a historically mixed record of success in such initiatives.
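A minimal sketch of the kind of discrete-event model BPS involves, here using the simpy library: accident reports arrive at random and queue for a limited pool of data-entry clerks. All rates, service times and resource levels are illustrative assumptions, not parameters of the police study.

```python
import random
import simpy

RNG = random.Random(42)

def report(env, clerks, waits):
    arrive = env.now
    with clerks.request() as req:
        yield req                                   # wait for a clerk
        waits.append(env.now - arrive)
        yield env.timeout(RNG.expovariate(1 / 20))  # ~20 min to process

def arrivals(env, clerks, waits):
    while True:
        yield env.timeout(RNG.expovariate(1 / 15))  # a report every ~15 min
        env.process(report(env, clerks, waits))

env = simpy.Environment()
clerks = simpy.Resource(env, capacity=2)
waits = []
env.process(arrivals(env, clerks, waits))
env.run(until=8 * 60)  # one 8-hour shift, in minutes
print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} reports")
```

Rerunning such a model with, say, a different clerk capacity is how a redesign's effect on waiting times can be evaluated before implementation.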
Abstract:
The purpose of this research is to propose a procurement system that works across disciplines and shares retrieved information with the relevant parties so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how to analyse data with an agent-based procurement system (APS) to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers and evaluating supplier performance against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollar on the purchase of raw materials and components. Efficient, highly accurate data collection is one of the key success factors in achieving quality procurement, that is, purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study of a manufacturing small and medium-sized enterprise (SME) has been conducted. APS supports data and information analysis techniques to facilitate decision making, so that the agents can improve the efficiency of negotiation and supplier evaluation by saving time and cost.
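The evaluation agent's mathematical model is not detailed in the abstract; a common choice for this task is a weighted-sum score over normalised criteria, sketched below with hypothetical suppliers, criteria and weights.

```python
# Hypothetical selection criteria and weights (weights sum to 1).
weights = {"price": 0.4, "quality": 0.35, "delivery": 0.25}

# Raw scores: price in $/unit (lower is better), others on a 0-10 scale.
suppliers = {
    "Supplier A": {"price": 12.0, "quality": 8.0, "delivery": 7.0},
    "Supplier B": {"price": 10.5, "quality": 6.5, "delivery": 9.0},
}

def normalise(criterion, value, values, lower_is_better=("price",)):
    """Min-max normalise to [0, 1], inverting cost-type criteria."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1 - score if criterion in lower_is_better else score

for name, raw in suppliers.items():
    total = sum(
        w * normalise(c, raw[c], [s[c] for s in suppliers.values()])
        for c, w in weights.items()
    )
    print(f"{name}: weighted score = {total:.2f}")
```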
Abstract:
In order to survive in an increasingly customer-oriented marketplace, continuous quality improvement marks the success of the fastest-growing quality organizations. In recent years, attention has focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within a specified process rather than to support cooperation across the production workflow. This paper proposes an intelligent system with a newly designed algorithm and a universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature that gives employees at all levels the ability to understand the relationships between processes, especially when any aspect of a process is about to degrade or fail. An example using generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
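The paper's algorithm is not reproduced here, but the sketch below shows the basic idea behind a fuzzy association rule: process readings are fuzzified with triangular membership functions, and fuzzy support and confidence are computed for a rule linking a process parameter to a quality problem. The data, membership breakpoints and the rule itself are all illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical process records: oven temperature (C) and defect rate (%).
temps = np.array([185.0, 205.0, 212.0, 178.0, 199.0, 221.0])
defects = np.array([1.2, 3.8, 5.1, 0.9, 3.2, 5.6])

temp_high = tri(temps, 180, 205, 230)      # membership in "temperature High"
defect_high = tri(defects, 2.0, 4.5, 7.0)  # membership in "defect rate High"

# Fuzzy support and confidence of "temp High -> defect High",
# using min as the t-norm for the conjunction.
joint = np.minimum(temp_high, defect_high)
support = joint.mean()
confidence = joint.sum() / temp_high.sum()
print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```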
Abstract:
This research aimed to provide a comparative analysis of South Asian and White British students in their academic attainment at school and university and in their search for employment. Data were gathered using a variety of methodological techniques. Completed postal questionnaires were received from 301 South Asian and White British undergraduates from 12 British universities who were in their final year of study in 1985. In-depth interviews were also conducted with 49 graduates, a self-selected group from the original sample. Additional information was collected using diary report forms and by administering a second postal questionnaire to selected South Asian and White British participants. It was found that while the pre-university qualifications of the White British and South Asian undergraduates did not differ considerably, many members of the latter group had travelled a more arduous path to academic success. For some South Asians, school experiences included confronting racist attitudes and behaviour from both teachers and peers. The South Asian respondents in this study were more likely than their White British counterparts to have attempted some C.S.E. examinations, obtained some of their 'O' levels in the Sixth Form and retaken their 'A' levels. As a result, the South Asians were on average older than their White British peers when entering university. A small sample of South Asians also found that the effects of racism were perpetuated in higher education, where they faced difficulty both academically and socially. Overall, however, since going to university most South Asians felt further drawn towards their 'cultural background', this often being their own unique view of 'Asianness'. Regarding their plans after graduation, it was found that South Asians were more likely to opt for further study, believing that they needed to be better qualified than their White British counterparts. Those South Asians who were searching for work were better qualified, willing to accept a lower minimum salary, had made more job applications and had started searching for work earlier than the comparable White British participants. Although they generally had no difficulty in obtaining interviews, South Asian applicants were less likely to receive an offer of employment. In the final analysis of their future plans, it was found that a large proportion of South Asian graduates aspired towards self-employment.
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta and other bands commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear. This is despite a variety of reasons to suspect that nonlinearity is present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and these signals are often cancelled out by the averaging process. Other problems include the high cost and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with procuring and maintaining these machines.
In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in research areas ranging from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
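As one example of the dynamical-systems toolkit applied to a single unaveraged channel, the sketch below performs a time-delay (Takens) embedding, which reconstructs a state-space trajectory from a scalar time series. The synthetic noisy oscillation stands in for real MEG data, and the embedding dimension and lag are illustrative choices rather than tuned values.

```python
import numpy as np

# Surrogate "MEG" channel: a noise-driven oscillation, purely synthetic.
rng = np.random.default_rng(0)
t = np.arange(5000)
x = np.sin(2 * np.pi * 0.01 * t) + 0.3 * rng.standard_normal(t.size)

def delay_embed(x, dim, tau):
    """Stack lagged copies of x into points in a dim-dimensional state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

Y = delay_embed(x, dim=3, tau=25)  # each row approximates a state vector
print(Y.shape)
```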
Abstract:
The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years, a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, allowing the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated through new algorithms, including macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in the equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared with that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
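A toy sketch of the disaggregation step: a daily rainfall total is spread over hourly amounts using a storm profile. The triangular profile, storm duration and start time used here are illustrative stand-ins for the model's actual intensity/duration curves.

```python
# Disaggregate a daily rainfall total (mm) into hourly amounts.
daily_total_mm = 24.0
storm_hours = 6

# Simple triangular intensity profile peaking mid-storm (illustrative).
profile = [min(h + 1, storm_hours - h) for h in range(storm_hours)]
scale = daily_total_mm / sum(profile)

hourly = [0.0] * 24
start = 9  # assume the storm starts at 09:00 (illustrative)
for h in range(storm_hours):
    hourly[start + h] = profile[h] * scale

assert abs(sum(hourly) - daily_total_mm) < 1e-9  # mass is conserved
print([round(v, 1) for v in hourly])
```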
Abstract:
The factory worker was once considered a necessary evil, soon to be replaced by robotics and automation. Today, many manufacturers appreciate that people in direct productive roles can provide important flexibility and responsiveness, and so contribute significantly to business success. The challenge is no longer to design people out of the factory, but to design factory environments that help to get the best performance from people. This paper describes research that sets out to help achieve this by expanding the capabilities of the simulation modeling tools currently used by practitioners.
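As a hint of what "designing people in" to a simulation can mean, the sketch below gives a worker a cycle time that degrades with time-on-task, a human effect that machine-centric models typically omit. The fatigue function and all parameters are purely illustrative assumptions, not the paper's method.

```python
import random

rng = random.Random(1)

def cycle_time(base: float, minutes_worked: float) -> float:
    """Base cycle time inflated by a simple linear fatigue effect plus noise."""
    fatigue = 1.0 + 0.002 * minutes_worked   # ~12% slower after 60 minutes
    return base * fatigue * rng.uniform(0.9, 1.1)

clock, done = 0.0, 0
while clock < 8 * 60:                        # one 8-hour shift in minutes
    clock += cycle_time(base=5.0, minutes_worked=clock)
    done += 1
print(f"units completed in shift: {done}")
```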