21 results for System-analysis
in Aston University Research Archive
Abstract:
The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation: the only technique capable of accurately replicating the highly complex and dynamic interrelationships inherent within manufacturing facilities and of realistically predicting system behaviour. Owing to these unique capabilities, the application of computer simulation should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient in terms of data requirements and both the speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design. An underlying assumption of the proposal is that evaluation is quick and easy, and allows models to expand in line with design developments. However, current approaches to computer simulation are wholly inadequate to support such hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase and a correspondingly high expenditure, resulting in very little, and often inappropriate, use of the technique. Simulation, when used, is generally applied only to check or verify a final design proposal; rarely is its full potential utilized to aid, support or complement the manufacturing system design procedure. To implement the proposed modelling procedure, the concept of a generic simulator was therefore adopted, as such systems require no specialist expertise, instead facilitating quick and easy model creation, execution and modification through simple data inputs. Previous generic simulators have tended to be too restricted, lacking the necessary flexibility to be generally applicable to manufacturing systems. Development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
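ATOMS itself is not documented here, but the data-driven principle the abstract describes can be sketched briefly. The following is a minimal illustration, assuming a simple job-shop and the SimPy discrete-event library; all station names, capacities and times are invented for the example:

```python
# Minimal sketch of a data-driven "generic simulator": the model is built
# entirely from tabular inputs, not hand-written simulation logic.
# Uses the SimPy library; all data values below are illustrative.
import random
import simpy

# Simple data inputs: station name -> (number of machines, mean process time)
STATIONS = {"turn": (2, 5.0), "mill": (1, 8.0), "inspect": (1, 2.0)}
ROUTING = ["turn", "mill", "inspect"]   # every job visits these in order
ARRIVAL_MEAN = 6.0                      # mean inter-arrival time of jobs

def job(env, name, resources):
    for station in ROUTING:
        res, mean_t = resources[station]
        with res.request() as req:
            yield req                                      # queue for a machine
            yield env.timeout(random.expovariate(1.0 / mean_t))
    print(f"{env.now:7.2f}  {name} completed")

def source(env, resources):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        i += 1
        env.process(job(env, f"job-{i}", resources))

env = simpy.Environment()
resources = {s: (simpy.Resource(env, capacity=c), t)
             for s, (c, t) in STATIONS.items()}
env.process(source(env, resources))
env.run(until=100)
```

In this spirit, refining the data tables refines the model, which is the multi-level idea in miniature: a coarse early-design model and a detailed later one differ only in their data inputs.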
Abstract:
This study investigates the use of the Pyroformer intermediate pyrolysis system to produce alternative diesel engine fuels (pyrolysis oils) from various biomass and waste feedstocks, and the application of these pyrolysis oils in a diesel engine generating system for Combined Heat and Power (CHP) production. The pyrolysis oils were produced in a pilot-scale (20 kg/h) intermediate pyrolysis system. Comprehensive characterisations, with a view to use as engine fuels, were carried out on the sewage sludge and de-inking sludge derived pyrolysis oils. Both were found to provide sufficient heat for fuelling a diesel engine. The pyrolysis oils also presented poor combustibility and high carbon deposition, but these problems could be mitigated by blending them with biodiesel (derived from waste cooking oil). Blends of SSPO (sewage sludge pyrolysis oil) and biodiesel (30/70 and 50/50 by volume) were tested in a 15 kWe Lister-type stationary generating system for up to 10 hours, with no apparent deterioration observed in engine operation. With 30% SSPO blended into biodiesel, the engine presented better overall performance (electrical efficiency), fuel consumption and overall exhaust emissions than with the 50% SSPO blend. An overall system analysis was carried out on a proposed integrated Pyroformer-CHP system. Combined with real experimental results, this was used to evaluate the costs of producing heat, power and char from wood pellets and sewage sludge. It is concluded that the overall system efficiencies for both types of plant can be over 40%; however, the integrated CHP system is not economically viable, owing to the extraordinarily high capital investment required.
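To illustrate how an "over 40%" overall efficiency combines heat and power, here is a minimal worked energy balance; all figures are hypothetical, not measurements from the study:

```python
# Hypothetical energy balance for an integrated pyrolysis-CHP plant.
# All figures are illustrative, not measured values from the thesis.
fuel_in_kw  = 100.0   # chemical energy in the feedstock (LHV basis)
electric_kw = 15.0    # net electrical output of the engine-generator
heat_kw     = 28.0    # recovered useful heat (exhaust + jacket water)

electric_eff = electric_kw / fuel_in_kw
overall_eff = (electric_kw + heat_kw) / fuel_in_kw
print(f"electrical efficiency: {electric_eff:.0%}")    # 15%
print(f"overall CHP efficiency: {overall_eff:.0%}")    # 43% -> "over 40%"
```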
Abstract:
Aim: To identify the incidence of vitreomacular traction (VMT) and the frequency of reduced vision in the absence of other coexisting macular pathology, using a pragmatic classification system for VMT in a population of patients referred to the hospital eye service. Methods: A detailed survey of consecutive optical coherence tomography (OCT) scans was carried out in a high-throughput ocular imaging service to ascertain cases of vitreomacular adhesion (VMA) and VMT using a departmental classification system. Analysis was performed on the stages of traction, visual acuity, and association with other macular conditions. Results: In total, 4384 OCT scan episodes of 2223 patients were performed. Two hundred and fourteen eyes had VMA/VMT, with 112 eyes having coexisting macular pathology. Of 102 patients without coexisting pathology, 57 had a VMT grade between 2 and 8, with a negative correlation between VMT grade and number of Snellen lines (r = -0.61717). There was a distinct cut-off in visual function when the VMT grade was higher than 4, with the presence of cysts, subretinal separation and breaks in the retinal layers. Conclusions: VMT is commonly encountered and often associated with other coexisting macular pathology. We estimated an incidence of 0.01% for VMT cases with reduced vision and without coexisting macular pathology that may potentially benefit from intervention. Grading of VMT to select eyes with cyst formation as well as hole formation may be useful for targeting patients at higher risk of visual loss from VMT.
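As a sketch of the grade-versus-acuity correlation reported above (the data below are invented, not the study's 57 patients):

```python
# Sketch of the reported grade-vs-acuity correlation; the data here are
# hypothetical, not the patient cohort from the study.
import numpy as np

vmt_grade     = np.array([2, 3, 4, 4, 5, 6, 7, 8])   # departmental VMT grade
snellen_lines = np.array([8, 8, 7, 6, 5, 5, 4, 3])   # lines read on the chart

r = np.corrcoef(vmt_grade, snellen_lines)[0, 1]
print(f"Pearson r = {r:.2f}")   # negative: higher grade, fewer lines read
```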
Abstract:
Recent functional magnetic resonance imaging (fMRI) investigations of the interaction between cognition and reward processing have found that the lateral prefrontal cortex (PFC) areas are preferentially activated to both increasing cognitive demand and reward level. Conversely, ventromedial PFC (VMPFC) areas show decreased activation to the same conditions, indicating a possible reciprocal relationship between cognitive and emotional processing regions. We report an fMRI study of a rewarded working memory task, in which we further explore how the relationship between reward and cognitive processing is mediated. We not only assess the integrity of reciprocal neural connections between the lateral PFC and VMPFC brain regions in different experimental contexts but also test whether additional cortical and subcortical regions influence this relationship. Psychophysiological interaction analyses were used as a measure of functional connectivity in order to characterize the influence of both cognitive and motivational variables on connectivity between the lateral PFC and the VMPFC. Psychophysiological interactions revealed negative functional connectivity between the lateral PFC and the VMPFC in the context of high memory load, and high memory load in tandem with a highly motivating context, but not in the context of reward alone. Physiophysiological interactions further indicated that the dorsal anterior cingulate and the caudate nucleus modulate this pathway. These findings provide evidence for a dynamic interplay between lateral PFC and VMPFC regions and are consistent with an emotional gating role for the VMPFC during cognitively demanding tasks. Our findings also support neuropsychological theories of mood disorders, which have long emphasized a dysfunctional relationship between emotion/motivational and cognitive processes in depression.
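A psychophysiological interaction analysis tests whether seed-to-target coupling changes with the psychological context. The following is a minimal sketch of how the interaction regressor is formed and tested, using entirely synthetic signals; real analyses typically deconvolve the haemodynamic response first:

```python
# Minimal psychophysiological interaction (PPI) regressor sketch.
# All signals are synthetic; region names are illustrative only.
import numpy as np

n_scans = 200
seed = np.random.randn(n_scans)                 # lateral PFC seed time course
task = np.tile([0]*10 + [1]*10, n_scans // 20)  # 0 = low load, 1 = high load
ppi = seed * (task - task.mean())               # interaction regressor

# Regress a target region (here a synthetic "VMPFC") on seed, task and the
# PPI term; a nonzero PPI beta means coupling differs between task contexts.
target = -0.4 * ppi + np.random.randn(n_scans)
X = np.column_stack([np.ones(n_scans), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta = {beta[3]:.2f}")              # negative coupling under load
```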
Abstract:
Respiration is a complex activity. If the relationship between all neurological and musculoskeletal interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, the provision of appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information on respiratory performance can, however, be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices for spirometric measurement are not well suited to ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometry-based surface plethysmography. In the first instance, Chapter 2 deals with an array of tactile sensors used as a progression of existing, previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate upon the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on volume estimation compares favourably with existing systems and, in addition, it provides continuous transient measurement of respiratory motion within acceptable accuracy using approximately 20 sensing elements. Because of its small size and low complexity, the system can be deployed as a fully mobile ambulatory monitoring device, usable outside the laboratory. It provides a means of isolating coupled physiological functions, allowing individual contributions to be analysed separately and thus facilitating greater understanding of respiratory physiology and improved diagnostic capability. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and that has the prospect, with future development, of being incorporated into a garment-based clinical tool.
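As a sketch of the volume estimation step described above, the following assumes a simple linear calibration from roughly 20 sensor deflections to spirometer volume, fitted by least squares; the signals are synthetic, not the thesis's data:

```python
# Sketch: estimating lung volume change from an array of ~20 surface sensors.
# A linear map from sensor deflections to spirometer volume is calibrated by
# least squares; sensors and volumes here are synthetic, not thesis data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sensors = 500, 20
true_w = rng.normal(size=n_sensors)                    # unknown sensor weighting

sensors = rng.normal(size=(n_samples, n_sensors))      # chest/abdomen motion
volume = sensors @ true_w + 0.05 * rng.normal(size=n_samples)  # spirometer ref

w, *_ = np.linalg.lstsq(sensors, volume, rcond=None)   # calibration step
estimate = sensors @ w
err = np.sqrt(np.mean((estimate - volume) ** 2))
print(f"RMS volume error: {err:.3f} (arbitrary units)")
```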
Abstract:
The present work describes the development of a proton induced X-ray emission (PIXE) analysis system, especially designed and built for routine quantitative multi-elemental analysis of large numbers of samples. The historical and general development of the analytical technique and the physical processes involved are discussed. The philosophy, design, constructional details and evaluation of a versatile vacuum chamber, an automatic multi-sample changer, an on-demand beam pulsing system and an ion beam current monitoring facility are described. The system calibration, using thin standard foils of Si, P, S, Cl, K, Ca, Ti, V, Fe, Cu, Ga, Ge, Rb, Y and Mo, was undertaken at proton beam energies of 1 to 3 MeV in steps of 0.5 MeV and compared with theoretical calculations. An independent calibration check using bovine liver Standard Reference Material was performed. The minimum detectable limits were experimentally determined at detector positions of 90° and 135° with respect to the incident beam for the above range of proton energies, as a function of atomic number Z. The system has detection limits of typically 10⁻⁷ to 10⁻⁹ g for elements 14
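The thin-foil calibration described above is conventionally summarised by the standard thin-target PIXE yield relation; the textbook form below is not quoted from the thesis and assumes negligible proton energy loss in the foil:

```latex
% Thin-target PIXE yield for element Z (standard textbook form, not from the thesis)
\[
  Y(Z) \;=\; N_p \,\frac{N_A\, m_Z}{A_Z}\,
  \sigma_Z^{X}(E_p)\, \varepsilon_Z \,\frac{\Omega}{4\pi},
  \qquad
  \sigma_Z^{X}(E_p) \;=\; \sigma_Z^{\mathrm{ion}}(E_p)\,\omega_Z\, b_Z
\]
% N_p: incident protons; m_Z: areal mass of the element (g/cm^2);
% A_Z: atomic mass; sigma^X: X-ray production cross-section at energy E_p;
% omega_Z: fluorescence yield; b_Z: branching ratio;
% epsilon_Z: detector efficiency; Omega: detector solid angle.
```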
Abstract:
This work concerns the development of a proton induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting rate capability of 25, 18 and 10 kPPS at main amplifier time constants of 2, 4 and 8 µs respectively, enables thick targets to be analysed more readily. Reproducibility tests of the on-demand beam pulsing system operation were performed by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by fluorescence when the characteristic X-rays of one element lie above the absorption edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique for obtaining depth profiles of the elements in the upper micron of the sample is discussed.
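The thick-target yield calculation described above is conventionally an integral of the X-ray production cross-section over the proton's slowing-down path. A minimal numerical sketch follows; the cross-section and stopping-power functions are illustrative power laws, not the parameterisations used in the thesis, and the secondary fluorescence enhancement is omitted:

```python
# Sketch of the thick-target PIXE yield integral Y ∝ ∫ σ(E)/S(E) dE,
# evaluated numerically from the beam energy down to zero. The functions
# below are illustrative stand-ins, not the thesis's parameterisations.
import numpy as np

def sigma_x(E):          # X-ray production cross-section, arbitrary units
    return E ** 4.0      # rises steeply with proton energy (illustrative)

def stopping_power(E):   # S(E) = -dE/d(rho·x), arbitrary units
    return 1.0 / np.sqrt(E + 0.1)

E0 = 2.5                                  # incident proton energy, MeV
E = np.linspace(1e-3, E0, 2000)
yield_integral = np.trapz(sigma_x(E) / stopping_power(E), E)
print(f"relative thick-target yield at {E0} MeV: {yield_integral:.1f}")
```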
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities that could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, its progress hindered by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, beta, delta, etc. bands commonly referred to in both academic and lay publications. Other efforts have centred on generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics giving rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but this has notable drawbacks: in particular, it is difficult to synchronise high-frequency activity that might be of interest, and such signals will often be cancelled out by the averaging process. Other problems encountered are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with procuring and maintaining these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
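As one concrete example of the dynamical-systems toolset mentioned above, a delay embedding reconstructs state-space structure from a single-channel recording; the sketch below uses a synthetic signal, with the lag and embedding dimension chosen purely for illustration:

```python
# Sketch of a delay embedding, a standard dynamical-systems tool for
# reconstructing state-space structure from a single-channel time series.
# The signal here is synthetic; lag and dimension are illustrative choices.
import numpy as np

def delay_embed(x, dim=3, lag=5):
    """Return the delay-embedded trajectory of a 1-D signal x."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

t = np.linspace(0, 60, 3000)
signal = np.sin(t) + 0.3 * np.sin(2.7 * t) + 0.05 * np.random.randn(t.size)
trajectory = delay_embed(signal, dim=3, lag=20)
print(trajectory.shape)   # (2960, 3): points on the reconstructed attractor
```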
Abstract:
This thesis describes work undertaken to fulfil a need experienced in the Department of Educational Enquiry at the University of Aston in Birmingham for speech analysis facilities suitable for use in teaching and research within the Department. The hardware and software developed during the research project provide displays of speech fundamental frequency and intensity in real time. The system is suitable for providing visual feedback of these parameters of a subject's speech in a learning situation, and overcomes the inadequacies of equipment currently used for this task in that it provides a clear indication of fundamental frequency contours as the subject is speaking. The thesis considers the use of such equipment in several related fields, and the approaches that have been reported to one of the major problems of speech analysis, namely pitch-period estimation. A number of different systems are described, and their suitability for the present purposes is discussed. Finally, a novel method of pitch-period estimation is developed, and a speech analysis system incorporating this method is described. Comparison is made between the results produced by this system and those produced by a conventional speech spectrograph.
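For context, one classical approach to pitch-period estimation (not the novel method developed in the thesis) is to locate the peak of the short-time autocorrelation; a minimal sketch with a synthetic voiced frame:

```python
# Sketch of autocorrelation-based pitch-period estimation, a classical
# approach to the problem discussed above (not the thesis's novel method).
import numpy as np

fs = 8000                                      # sample rate, Hz
t = np.arange(0, 0.04, 1 / fs)                 # one 40 ms analysis frame
frame = np.sign(np.sin(2 * np.pi * 120 * t))   # synthetic 120 Hz voicing

ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
lo, hi = fs // 400, fs // 60                   # search 60-400 Hz pitch range
period = lo + np.argmax(ac[lo:hi])             # lag of the autocorrelation peak
print(f"estimated F0: {fs / period:.1f} Hz")   # ~120 Hz
```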
Abstract:
Ignorance of user factors can be seen as one of the non-technical issues contributing to expert system failure. An expert advisory system is built for non-expert users, so user acceptance is a very important factor in its successful implementation. Even if an expert advisory system satisfactorily represents the expertise in the domain, there still remains the question: "Will the end-users use the system?" This paper aims to address users' issues by analysing their reactions towards an expert advisory system called ADGAME, developed to help its users make better decisions in playing a competitive business game. Two experiments with ADGAME have been carried out. The results show that, when use of the expert advisory system is optional, there is considerable reluctance to use it, particularly amongst the "worst" potential users. Users also doubt the potential benefits in terms of improved learning and confidence in the decisions made. Strangely, the one positive expectation that users had, that the system would save them time, proved not to be the case in practice; ADGAME appears to improve users' effectiveness rather than their efficiency.
Abstract:
The human accommodation system has been extensively examined for over a century, with a particular focus on trying to understand the mechanisms that lead to the loss of accommodative ability with age (presbyopia). The accommodative process, along with the potential causes of presbyopia, remains disputed, hindering efforts to develop methods of restoring accommodation in the presbyopic eye. One method that can provide insight into this complex area is finite element analysis (FEA). The effectiveness of FEA in modelling the accommodative process has been illustrated by a number of accommodative FEA models developed to date. However, these previous models have had limitations, principally due to the variation in data on the geometry of the accommodative components, combined with sparse measurements of their material properties. Despite advances in available data, continued oversimplification has occurred in the modelling of the crystalline lens structure and the zonular fibres that surround the lens. A new accommodation model is proposed by the author that aims to eliminate these limitations. A novel representation of the zonular structure was developed, combined with updated lens and capsule modelling methods. The model is designed to be adaptable, so that accommodation systems of a range of ages can be modelled, allowing the age-related changes that occur to be simulated. The new modelling methods were validated by comparing the changes induced within the model against available in vivo data, leading to the definition of three models of different ages. These were used in an extended sensitivity study of age-related changes, in which individual parameters were altered to investigate their effect on the accommodative process. The material properties were found to have the largest impact on the decline in accommodative ability, particularly compared with changes in ciliary body movement or zonular structure. Novel findings on the importance of capsule stiffness and thickness were also established. The new model detailed in this thesis provides further insight into the accommodation mechanism, as well as a foundation for future, more detailed investigations into accommodation, presbyopia and accommodative restoration techniques.
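The one-at-a-time sensitivity study described above can be sketched as follows; the surrogate function stands in for the actual FEA model, and all names and numbers are entirely illustrative:

```python
# Sketch of a one-at-a-time sensitivity study: vary a single parameter,
# re-run the model, record the accommodative amplitude. The surrogate
# function below is a made-up stand-in for the actual FEA model.
def accommodation_amplitude(lens_stiffness, capsule_thickness):
    """Hypothetical surrogate for the FEA model (illustrative only)."""
    return 8.0 / (1.0 + lens_stiffness) * (0.5 + capsule_thickness)

baseline = dict(lens_stiffness=1.0, capsule_thickness=0.5)
for scale in [0.5, 1.0, 2.0, 4.0]:          # age-related stiffening sweep
    amp = accommodation_amplitude(baseline["lens_stiffness"] * scale,
                                  baseline["capsule_thickness"])
    print(f"stiffness x{scale:>3}: amplitude = {amp:.2f} D")
```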