884 results for Problem analysis
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, as well as knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
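As a purely illustrative sketch (the data, variable counts and library calls below are invented, not taken from the abstract), the two rules of thumb quoted above - an R-squared below 50% being suspect, and 5-10 subjects per variable - can be checked mechanically after fitting a multiple regression:

```python
# Illustrative only: fit a multiple regression on simulated data, report R-squared,
# and check the rough "5-10 subjects per variable" guideline quoted above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs, n_vars = 120, 6                         # 120 subjects, 6 candidate predictors
X = rng.normal(size=(n_obs, n_vars))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=3.0, size=n_obs)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"R-squared: {model.rsquared:.2f}")      # below 0.50 would be treated with suspicion
print(f"subjects per predictor: {n_obs / n_vars:.1f}")  # guideline: at least 5-10
```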
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment: 1. In a single-factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor. 2. In more complex designs with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified. 3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of DF compared with a randomised design. 4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered. 5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered. 6. The effect of 'internal replication' should be taken into account in a factorial design in deciding the number of replications to be used. Where possible, each error term of the ANOVA should have at least 15 DF. 7. Consider carefully whether a particular factorial design can be considered to be a split-plot or a repeated measures design. If such a design is appropriate, consider how to continue the analysis, bearing in mind the problem of using post hoc tests in this situation.
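As a hedged illustration of points 1, 3 and 6 (the factor names, simulated data and library calls are assumptions, not part of the source), a randomised-blocks design with a fixed treatment factor can be analysed as follows, and the residual degrees of freedom compared against the suggested minimum of 15:

```python
# Illustrative randomised-blocks ANOVA with fixed effects; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
treatment_effect = {"A": 0.0, "B": 1.0, "C": 1.5, "D": 0.5}
rows = [
    {"treatment": t, "block": b,
     "response": treatment_effect[t] + 0.3 * b + rng.normal(scale=1.0)}
    for t in treatment_effect for b in range(8)       # 4 treatments x 8 blocks
]
df = pd.DataFrame(rows)

# Blocking enters as a second fixed factor; no interaction term is fitted.
fit = ols("response ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
print("residual DF:", int(fit.df_resid))   # 32 obs - 1 - 3 - 7 = 21, above the suggested 15
```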
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries, used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer situated at the Rutherford-Appleton laboratories, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique has been developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements, in particular, showed good agreement between the experimental and simulated results.
Abstract:
The organic matter in five oil shales (three from the Kimmeridge Clay sequence, one from the Oxford Clay sequence and one from the Julia Creek deposits in Australia) has been isolated by acid demineralisation, separated into kerogens and bitumens by solvent extraction and then characterised in some detail by chromatographic, spectroscopic and degradative techniques. Kerogens cannot be characterised as easily as bitumens because of their insolubility, and hence before any detailed molecular information can be obtained from them they must be degraded into lower molecular weight, more soluble components. Unfortunately, the determination of kerogen structures has all too often involved degradations that were far too harsh and which led to destruction of much of the structural information. For this reason a number of milder, more selective degradative procedures have been tested and used to probe the structure of kerogens. These are: 1. Lithium aluminium hydride reduction. - This procedure is commonly used to remove pyrite from kerogens and it may also increase their solubility by reduction of labile functional groups. Although reduction of the kerogens was confirmed, increases in solubility were correlated with pyrite content and not kerogen reduction. 2. O-methylation in the presence of a phase transfer catalyst. - By the removal of hydrogen bond interactions via O-methylation, it was possible to determine the contribution of such secondary interactions to the insolubility of the kerogens. Problems were encountered with the use of the phase transfer catalyst. 3. Stepwise alkaline potassium permanganate oxidation. - Significant kerogen dissolution was achieved using this procedure, but uncontrolled oxidation of initial oxidation products proved to be a problem. A comparison with the peroxytrifluoroacetic acid oxidation of these kerogens was made. 4. Peroxytrifluoroacetic acid oxidation. - This was used because it preferentially degrades aromatic rings whilst leaving any benzylic positions intact. Considerable conversion of the kerogens into soluble products was achieved with this procedure. At all stages of degradation the products were fully characterised where possible using a variety of techniques, including elemental analysis, solution-state 1H and 13C nuclear magnetic resonance, solid-state 13C nuclear magnetic resonance, gel-permeation chromatography, gas chromatography-mass spectroscopy, Fourier transform infra-red spectroscopy and some ultraviolet-visible spectroscopy.
Abstract:
The initial aim of this research was to investigate the application of expert systems, or knowledge base systems technology, to the automated synthesis of Hazard and Operability Studies. Due to the generic nature of fault analysis problems and the way in which knowledge base systems work, this goal has evolved into a consideration of automated support for fault analysis in general, covering HAZOP, Fault Tree Analysis, FMEA and fault diagnosis in the process industries. This thesis describes a proposed architecture for such an expert system. The purpose of the system is to produce a descriptive model of faults and fault propagation from a description of the physical structure of the plant. From these descriptive models, the desired fault analysis may be produced. The way in which this is done reflects the complexity of the problem which, in principle, encompasses the whole of the discipline of process engineering. An attempt is made to incorporate the perceived method that an expert uses to solve the problem; keywords, heuristics and guidelines from techniques such as HAZOP and Fault Tree Synthesis are used. In a true expert system, the performance of the system is strongly dependent on the high quality of the knowledge that is incorporated. This expert knowledge takes the form of heuristics, or rules of thumb, which are used in problem solving. This research has shown that, for the application of fault analysis heuristics, it is necessary to have a representation of the details of fault propagation within a process. This helps to ensure the robustness of the system - a gradual rather than abrupt degradation at the boundaries of the domain knowledge.
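Purely as a sketch of the idea of deriving fault propagation from the physical structure of the plant (the plant topology, deviations and rules below are hypothetical and not drawn from the thesis), such a descriptive model might be represented as keyword-style rules applied over a graph of plant units:

```python
# Hypothetical sketch: rule-based fault propagation over a toy plant topology.
from collections import deque

# Physical structure of the (toy) plant: unit -> downstream units
plant = {
    "feed_pump": ["heat_exchanger"],
    "heat_exchanger": ["reactor"],
    "reactor": [],
}

# HAZOP-style rules: (unit, deviation) -> deviation passed to downstream units
rules = {
    ("feed_pump", "no flow"): "no flow",
    ("heat_exchanger", "no flow"): "high temperature",
}

def propagate(unit, deviation):
    """Breadth-first propagation of an initiating deviation through the plant graph."""
    events, queue = [], deque([(unit, deviation)])
    while queue:
        u, dev = queue.popleft()
        events.append((u, dev))
        consequence = rules.get((u, dev))
        if consequence:
            for downstream in plant[u]:
                queue.append((downstream, consequence))
    return events

print(propagate("feed_pump", "no flow"))
# [('feed_pump', 'no flow'), ('heat_exchanger', 'no flow'), ('reactor', 'high temperature')]
```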
Abstract:
Simplification of texts has traditionally been carried out by replacing words and structures with appropriate semantic equivalents in the learner's interlanguage, omitting whichever items prove intractable, and thereby bringing the language of the original within the scope of the learner's transitional linguistic competence. This kind of simplification focuses mainly on the formal features of language. The simplifier can, on the other hand, concentrate on making explicit the propositional content and its presentation in the original in order to bring what is communicated in the original within the scope of the learner's transitional communicative competence. In this case, simplification focuses on the communicative function of the language. Up to now, however, approaches to the problem of simplification have been mainly concerned with the first kind, using the simplifier's intuition as to what constitutes difficulty for the learner. There appear to be few objective principles underlying this process. The main aim of this study is to investigate the effect of simplification on the communicative aspects of narrative texts, which include the manner in which narrative units at higher levels of organisation are structured and presented, as well as the temporal and logical relationships between lower-level structures such as sentences and clauses, with the intention of establishing an objective approach to the problem of simplification based on a set of principled procedures which could be used as a guideline in the simplification of material for foreign students at an advanced level.
Abstract:
Personal selling and sales management play a critical role in the short- and long-term success of the firm, and have thus received substantial academic interest since the 1970s. Sales research has examined the role of the sales manager in some depth, defining a number of key technical and interpersonal roles which sales managers have in influencing sales force effectiveness. However, one aspect of sales management which appears to remain unexplored is that of their resolution of salesperson-related problems. This study represents the first attempt to address this gap by reporting on the conceptual and empirical development of an instrument designed to measure sales managers' problem resolution styles. A comprehensive literature review and qualitative research study identified three key constructs relating to sales managers' problem resolution styles. The three constructs identified were termed: sales manager willingness to respond, sales manager caring, and sales manager aggressiveness. Building on this, existing literature was used to develop a conceptual model of salesperson-specific consequences of the three problem resolution style constructs. The quantitative phase of the study consisted of a mail survey of UK salespeople, achieving a total sample of 140 fully usable responses. Rigorous statistical assessment of the sales manager problem resolution style measures was undertaken, and construct validity examined. Following this, the conceptual model was tested using latent variable path analysis. The results for the model were encouraging overall, and also with regard to the individual hypotheses. Sales manager problem resolution styles were found individually to have significant impacts on the salesperson-specific variables of role ambiguity, emotional exhaustion, job satisfaction, organisational commitment and organisational citizenship behaviours. The findings, theoretical and managerial implications, limitations and directions for future research are discussed.
Abstract:
The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers' perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as competitiveness, consumer searching, consumer behaviour, brand image, brand relevance, consumer perceived value, etc.), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for 'new brand' propositions. Thirdly, it will also influence firms' ability to simulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE. This is because the first issue deals with the content of CBE. The second issue addresses the problem of how to develop a reliable and valid measuring instrument for CBE. The third issue examines the structural and statistical relationships between the factors of CBE and the consequences of CBE on consumer perceived value (CPV). Using LISREL-SIMPLIS 8.30, the study finds direct and significant links between consumer brand equity and consumer value perception.
Abstract:
The thesis investigates the properties of two trends or time series which formed a part of the co-citation bibliometric model "X-Ray Crystallography and Protein Determination in 1978, 1980 and 1982". This model was one of several created for the 1983 ABRC Science Policy Study, which aimed to test the utility of bibliometric models in a national science policy context. The outcome of the validation part of that study proved to be especially favourable concerning the utility of trend data, which purport to model the development of speciality areas in science over time. This assessment could have important implications for the use of such data in policy formulation. However, one possible problem with the Science Policy Study's conclusions was that insufficient time was available in the study for an in-depth analysis of the data. The thesis aims to continue the validation begun in the ABRC study by providing a detailed examination of the characteristics of the data contained in the trends numbered 11 and 44 in the model. A novel methodology for the analysis of the properties of the trends with respect to their literature content is presented. This is followed by an assessment, based on questionnaire and interview data, of the ability of Trend 44 to realistically model the historical development of the field of mobile genetic elements research over time, with respect to its scientific content and the activities of its community of researchers. The results of these various analyses are then used to evaluate the strengths and weaknesses of a trend or time series approach to the modelling of the activities of scientific fields. A critical evaluation of the origins of the discovered strengths and weaknesses in the assumptions underlying the techniques used to generate trends from co-citation data is provided. Possible improvements to the modelling techniques are discussed.
Abstract:
The purlin-sheeting system has been the subject of numerous theoretical and experimental investigations over the past 30 years, but the complexity of the problem has led to great difficulty in developing a sound and general model. The primary aim of the thesis is to investigate the failure behaviours of cold-formed zed and channel sections for use in purlin-sheeting systems. Both the energy method and the finite strip method are used to develop an approach for investigating cold-formed zed and channel section beams with partial lateral restraint from the metal sheeting when subjected to a uniformly distributed transverse load. The stress analysis of such beams is investigated first by using an analytical model based on the energy method, in which the restraint actions of the sheeting are modelled by two springs representing the translational and rotational restraints. The numerical results have shown that the two springs have significantly different influences on the stresses in the beams. The influence of the two springs has also been found to depend on the anti-sag bar and the position of the loading line. A novel method is presented for analysing the elastic local buckling behaviour of cold-formed zed and channel section beams with partial lateral restraint from the metal sheeting when subjected to a uniformly distributed transverse load, which is carried out by inputting the cross-sectional stresses with the largest compressive stress into the finite strip analysis. Using this method, the individual influences of warping stress, partial lateral restraint from the sheeting, the dimensions of the cross section and the position of the loading line on the buckling behaviour are investigated.
Abstract:
A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem is presented in Part I; for example, the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantage of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to human factors and the adaptive task technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive, fixed task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of percentage increment or decrement in performance relative to performance without vibration in the range 0-0.6 RMS 'g'. Primary task performance was found to vary by as much as 90% between tasks at the same RMS 'g'. Differences in task difficulty accounted for this difference. Within tasks, vibration-added difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except secondaries which involved fine manual adjustment of minor controls. Three experiments are reported next in which an adaptive technique was used to measure the percentage task difficulty added by vertical random and sinusoidal vibration to a 'Critical Compensatory Tracking' task. At vibration intensities between 0 and 0.09 RMS 'g' it was found that random vibration added (24.5 x RMS 'g')/7.4 x 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. Waveform analyses applied to the experimental data served to validate phase plane analysis and uncovered the development of a control and possibly a vibration isolation strategy. The submission ends with an appraisal of the subjects mentioned in the thesis title.
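Taking the quoted expression at face value (an interpretation on my part; the abstract does not show its derivation), the added difficulty at the top of the tested intensity range would be roughly
\[
\frac{24.5 \times 0.09}{7.4} \times 100\% \approx 30\%.
\]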
Abstract:
A method for the exact solution of the Bragg diffraction problem for a photorefractive grating in sillenite crystals, based on Pauli matrices, is proposed. For the two main optical configurations, explicit analytical expressions are found for the diffraction efficiency and the polarization of the scattered wave. The exact solution is applied to a detailed analysis of a number of particular cases. For the known limiting cases there is agreement with the published results.
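For background only (standard material, not quoted from the abstract), the Pauli matrices and the identity that allows such 2x2 coupled-wave propagation problems to be solved in closed form are
\[
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
e^{\,i\theta\,\hat{n}\cdot\vec{\sigma}} = I\cos\theta + i\,(\hat{n}\cdot\vec{\sigma})\sin\theta,
\]
where \(\hat{n}\) is a unit vector and \(I\) is the 2x2 identity matrix.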
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc., that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
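As one concrete illustration of the dynamical-systems viewpoint described above (a generic sketch; the signal, delay and embedding dimension are invented and nothing here is taken from the thesis), a single-channel recording can be converted into a sequence of state vectors by time-delay embedding before any nonlinear analysis is attempted:

```python
# Generic time-delay embedding of a single-channel time series (Takens-style
# state-space reconstruction). Parameters are illustrative, not taken from the thesis.
import numpy as np

def delay_embed(x, dim=5, tau=3):
    """Return the matrix of delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for the requested embedding")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy single-channel "recording": a noisy oscillation standing in for MEG data.
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)

states = delay_embed(signal, dim=5, tau=3)
print(states.shape)   # (1988, 5): one 5-dimensional state vector per usable sample
```

The reconstructed state vectors, rather than the raw spectrum, then form the natural input for nonlinear time-series methods of the kind discussed in the thesis.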
Abstract:
The number of fatal accidents in the agricultural, horticultural and forestry industry in Great Britain has declined from an annual rate of about 135 in the 1960s to its current level of about 50. Changes to the size and makeup of the population at risk mean that there has been no real improvement in fatal injury incidence rates for farmers. The Health and Safety Executive's (HSE) current system of accident investigation, recording and analysis is directed primarily at identifying fault, allocating blame and punishing wrongdoers. Relatively little information is recorded about the personal and organisational factors that contributed to, or failed to prevent, accidents. To develop effective preventive strategies, it is important to establish whether errors by the victims and others occur at the skills, rules or knowledge level of functioning; are violations of some rule or procedure; or stem from failures to correctly appraise or control a hazard. A modified version of the Hale and Glendon accident causation model was used to study 230 fatal accidents. Inspectors' original reports were examined and expert judgement applied to identify and categorise the errors committed by each of the parties involved. The highest proportion of errors that led directly to accidents occurred whilst the victims were operating at the knowledge level. The mix and proportion of errors varied considerably between different classes of victim and kinds of accident. Different preventive strategies will be needed to address the problem areas identified.
Abstract:
In this thesis, details of a proposed method for the elastic-plastic failure load analysis of complete building structures are given. In order to handle the problem, a computer programme in Atlas Autocode is produced. The structures consist of a number of parallel shear walls and intermediate frames connected by floor slabs. The results of an experimental investigation are given to verify the theoretical results and to demonstrate various factors that may influence the behaviour of these structures. Large full-scale practical structures are also analysed by the proposed method, and suggestions are made for achieving design economy as well as for extending research in various aspects of this field. The existing programme for elastic-plastic analysis of large frames is modified to allow for the effect of composite action of structural members, i.e. reinforced concrete floor slabs and the supporting steel beams. This modified programme is used to analyse some framed-type structures with composite action as well as those which incorporate plates and shear walls. The results obtained are studied to ascertain the influence of composite action and other factors on the load-carrying capacity of both bare frames and complete building structures. The theoretical failure load presented in this thesis does not predict the overall failure load of the structure, nor does it predict the partial failure load of the shear walls and slabs; it merely predicts the partial failure load of a single frame and assumes that the loss of stiffness of such a frame renders the overall structure unusable. For most structures the analysis proposed in this thesis is likely to break down prematurely due to the failure of the slab and shear wall system, and this factor must be taken into account in any future work on such structures. The experimental work reported in this thesis is acknowledged to be unsatisfactory as a verification of the limited theory proposed. In particular, perspex was not found to be a suitable material for testing at high loads; micro-concrete may be more suitable.