31 results for the choice of material
in Aston University Research Archive
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R) a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems reviews are made only at intervals of T. With (nQ, R, T) an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, which is an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T) each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact theoretical models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or an exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
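As a rough illustration only (not drawn from the thesis itself), the four ordering rules described above might be sketched in Python as follows; the function names are invented, while Q, R, M, T, n and "cover" follow the abstract's notation:

import math

def order_QR(cover: float, Q: float, R: float) -> float:
    """(Q, R): order a fixed quantity Q whenever the order cover
    (stock in hand plus on order) equals or falls below R."""
    return Q if cover <= R else 0.0

def order_nQRT(cover: float, Q: float, R: float) -> float:
    """(nQ, R, T): at each review, if cover <= R, order the smallest
    integer multiple n of Q such that the new cover just exceeds R."""
    if cover > R:
        return 0.0
    n = math.floor((R - cover) / Q) + 1   # smallest n with cover + n*Q > R
    return n * Q

def order_MT(cover: float, M: float) -> float:
    """(M, T): at each review, order enough to raise the cover to M."""
    return max(M - cover, 0.0)            # cover never exceeds M under this policy

def order_MRT(cover: float, M: float, R: float) -> float:
    """(M, R, T): at each review, order up to M only if cover <= R."""
    return M - cover if cover <= R else 0.0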
Abstract:
It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and to what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company's knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacturing of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems; measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products. Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity, due to the high amount of nickel in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product. As in every process, material variations have a significant impact on machining quality. The main causes of variation originate from chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed. This involved material preparation for microscopy analysis and an examination of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
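The abstract mentions SPC without giving formulas; as a generic illustration only (standard Shewhart X-bar/R control limits for subgroups of five measurements, with invented data, not the company's actual control scheme):

import numpy as np

# Standard SPC chart factors for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """subgroups: (k, 5) array, one row per in-line sampling interval."""
    xbar = subgroups.mean(axis=1)                       # subgroup means
    R = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
    xbarbar, rbar = xbar.mean(), R.mean()
    return {"xbar_limits": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
            "range_limits": (D3 * rbar, D4 * rbar)}

# Invented probe measurements (mm) from two sampling intervals.
data = np.array([[10.02, 9.98, 10.01, 10.00, 9.99],
                 [10.03, 10.01, 9.97, 10.02, 10.00]])
print(xbar_r_limits(data))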
Abstract:
We propose a Bayesian framework for regression problems, which covers areas that are usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite-dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some of the commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets are used to demonstrate the method and highlight important issues concerning the choice of priors.
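The abstract does not reproduce the algorithm's equations; as a sketch of the standard Kalman-filter measurement update for Bayesian linear-in-features regression (the paper's actual derivation and notation may differ, and all names below are invented):

import numpy as np

def kalman_regression_step(m, P, phi, y, noise_var):
    """One online update of a Gaussian posterior N(m, P) over the weights
    of a model y = phi @ w + noise, for a single example (phi, y)."""
    S = float(phi @ P @ phi) + noise_var   # predictive variance of y
    k = (P @ phi) / S                      # Kalman gain
    m_new = m + k * (y - float(phi @ m))   # shift mean toward the residual
    P_new = P - np.outer(k, phi @ P)       # shrink posterior covariance
    return m_new, P_new

# Usage with an isotropic Gaussian prior over D weights.
D = 4
m, P = np.zeros(D), np.eye(D)
for phi, y in [(np.ones(D), 1.0), (np.arange(D, dtype=float), 2.0)]:
    m, P = kalman_regression_step(m, P, phi, y, noise_var=0.1)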
Abstract:
We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newton's method, by extending a recent statistical mechanics framework to include second order algorithms. We study the efficacy of this method when the Hessian is available and also consider a practical implementation which uses a single example estimate of the Hessian. The method is shown to provide excellent asymptotic performance, although the single example implementation is sensitive to the choice of training parameters. We conjecture that matrix momentum could provide efficient matrix inversion for other second order algorithms.
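As a hedged sketch of the matrix momentum update as it is commonly stated in the literature (the paper's exact formulation may differ): the scalar momentum coefficient is replaced by the matrix I - lr*H, where H is the Hessian or, in the practical variant discussed above, a single-example estimate of it.

import numpy as np

def matrix_momentum_step(w, w_prev, grad, H, lr):
    """One matrix-momentum update of weights w.  Asymptotically this
    mimics Newton's method without an explicit Hessian inversion."""
    I = np.eye(len(w))
    w_next = w - lr * grad + (I - lr * H) @ (w - w_prev)
    return w_next, w   # returns (new weights, previous weights)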
Abstract:
The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate error probability characteristics of a linear receiver operating in additive impulsive noise. Firstly, the effect that non-Gaussian interference has on the performance of a receiver that has been optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise that fits well with the observed statistics of noise in radio channels below 100 MHz has been developed. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector has been assessed and is found to depend on the signal shape, the time-bandwidth product, as well as the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention is then turned to the problem of threshold detection. Detector structure, large sample performance and robustness against errors in the detector parameters are examined. Finally, estimators of such parameters as the occurrence of an impulse and the parameters in an empirical noise model are developed for the case of an adaptive system with slowly varying conditions.
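As an illustrative stand-in only: the thesis uses a filtered-Poisson impulse model, but the effect of impulsive interference on a receiver optimized for Gaussian noise can be previewed with a simpler Bernoulli-Gaussian noise model and a Monte-Carlo bit-error-rate estimate (all parameters below are invented):

import numpy as np

rng = np.random.default_rng(0)

def ber_linear_receiver(snr_db, n_bits=100_000, p_impulse=0.01, impulse_gain=100.0):
    """Bit error rate of a linear (sign) receiver for antipodal signalling.
    Impulses occur with probability p_impulse and carry impulse_gain times
    the background noise variance."""
    bits = rng.choice([-1.0, 1.0], size=n_bits)
    sigma = 10 ** (-snr_db / 20)                  # background noise std (signal amplitude 1)
    noise = rng.normal(0.0, sigma, n_bits)
    hits = rng.random(n_bits) < p_impulse         # impulse occurrences
    noise[hits] += rng.normal(0.0, sigma * np.sqrt(impulse_gain), hits.sum())
    return np.mean(np.sign(bits + noise) != bits)

print(ber_linear_receiver(10.0))   # BER dominated by the impulsive component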
Abstract:
Simplification of texts has traditionally been carried out by replacing words and structures with appropriate semantic equivalents in the learner's interlanguage, omitting whichever items prove intractable, and thereby bringing the language of the original within the scope of the learner's transitional linguistic competence. This kind of simplification focuses mainly on the formal features of language. The simplifier can, on the other hand, concentrate on making explicit the propositional content and its presentation in the original in order to bring what is communicated in the original within the scope of the learner's transitional communicative competence. In this case, simplification focuses on the communicative function of the language. Up to now, however, approaches to the problem of simplification have been mainly concerned with the first kind, using the simplifier's intuition as to what constitutes difficulty for the learner. There appear to be few objective principles underlying this process. The main aim of this study is to investigate the effect of simplification on the communicative aspects of narrative texts, which includes the manner in which narrative units at higher levels of organisation are structured and presented, as well as the temporal and logical relationships between lower-level structures such as sentences and clauses. The intention is to establish an objective approach to the problem of simplification, based on a set of principled procedures which could be used as a guideline in the simplification of material for foreign students at an advanced level.
Abstract:
This thesis describes a project which has investigated the evaluation of information systems. The work took place in, and is related to, a specific organisational context, that of the National Health Service (NHS). It aims to increase understanding of the evaluation which takes place in the service and the way in which this is affected by the NHS environment. It also investigates the issues which surround some important types of evaluation and their use in this context. The first stage of the project was a postal survey in which respondents were asked to describe the evaluation which took place in their authorities and to give their opinions about it. This was used to give an overview of the practice of IS evaluation in the NHS and to identify its uses and the problems experienced. Three important types of evaluation were then examined in more detail by means of action research studies. One of these dealt with the selection and purchase of a large hospital information system. The study took the form of an evaluation of the procurement process, and examined the methods used and the influence of organisational factors. The other studies are concerned with post-implementation evaluation, and examine the choice of an evaluation approach as well as its application. One was an evaluation of a community health system which had been operational for some time but was of doubtful value, and suffered from a number of problems. The situation was explored by means of a study of the costs and benefits of the system. The remaining study was the initial review of a system which was used in the administration of a Breast Screening Service. The service itself was also newly operational and the relationship between the service and the system was of interest.
Abstract:
The thesis is concerned with cross-cultural distance learning in two countries: Great Britain and France. Taking the example of in-house sales training, it argues that it is possible to develop courses for use in two or more countries of differing culture and language. Two courses were developed by the researcher. Both were essentially print-based distance-learning courses designed to help salespeople achieve a better understanding of their customers. One used a quantitative, the other a qualitative approach. One considered the concept of the return on investment and the other, for which a video support was also developed, considered the analysis of a customer's needs. Part 1 of the thesis considers differences in the training context between France and Britain, followed by a review of the learning process with reference to distance learning. Part 2 looks at the choice of training medium, course design and evaluation, and sets out the methodology adopted, including problems encountered in this type of fieldwork. Part 3 analyses the data and draws conclusions from the findings, before offering a series of guidelines for those concerned with the development of cross-cultural in-house training courses. The results of the field tests on the two courses were analysed in relation to the socio-cultural, educational and experiential background of the learners as well as their preferred learning styles. The thesis argues that it is possible to develop effective in-house sales training courses to be used in two cultures and identifies key considerations which need to be taken into account when carrying out this type of work.
Abstract:
This research is concerned with the application of operational research techniques in the development of a long-term waste management policy by an English waste disposal authority. The main aspects which have been considered are the estimation of future waste production and the assessment of the effects of proposed systems. Only household and commercial wastes have been dealt with in detail, though suggestions are made for the extension of the effect assessment to cover industrial and other wastes. Similarly, the only effects considered in detail have been costs, but possible extensions are discussed. An important feature of the study is that it was conducted in close collaboration with a waste disposal authority, and so pays more attention to the actual needs of the authority than is usual in such research. A critical examination of previous waste forecasting work leads to the use of simple trend extrapolation methods, with some consideration of seasonal effects. The possibility of relating waste production to other social and economic indicators is discussed. It is concluded that, at present, large uncertainties in predictions are inevitable; waste management systems must therefore be designed to cope with this uncertainty. Linear programming is used to assess the overall costs of proposals. Two alternative linear programming formulations of this problem are used and discussed. The first is a straightforward approach, which has been implemented as an interactive computer program. The second is more sophisticated and represents the behaviour of incineration plants more realistically. Careful attention is paid to the choice of appropriate data and the interpretation of the results. Recommendations are made on methods for immediate use, on the choice of data to be collected for future plans, and on the most useful lines for further research and development.
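The thesis's own LP formulations (including the more realistic incineration plant model) are not reproduced in the abstract; the basic waste-allocation cost structure can be sketched with invented data as follows:

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: cost per tonne of sending waste from 2 collection
# areas to 3 disposal facilities, facility capacities, and area arisings.
cost = np.array([[4.0, 6.5, 9.0],
                 [7.0, 3.5, 5.0]])
capacity = np.array([120.0, 80.0, 200.0])   # tonnes/week per facility
arisings = np.array([150.0, 130.0])         # tonnes/week per area

n_areas, n_fac = cost.shape
c = cost.ravel()                             # decision variables x[i, j], flattened

# Each area's waste must all be disposed of (equality constraints).
A_eq = np.zeros((n_areas, n_areas * n_fac))
for i in range(n_areas):
    A_eq[i, i * n_fac:(i + 1) * n_fac] = 1.0

# Facility capacities must not be exceeded (inequality constraints).
A_ub = np.zeros((n_fac, n_areas * n_fac))
for j in range(n_fac):
    A_ub[j, j::n_fac] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=arisings)
print(res.fun, res.x.reshape(n_areas, n_fac))   # minimum cost and allocation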
Abstract:
Following Andersen's (1986, 1991) study of untutored anglophone learners of Spanish, aspectual features have been at the centre of hypotheses on the development of past verbal morphology in language acquisition. The Primacy of Aspect Hypothesis (PAH) claims that the association of any verb category (Aktionsart) with any aspect (perfective or imperfective) constitutes the endpoint of acquisition. However, its predictions rely on the observation of a limited number of untutored learners at the early stages of their acquisition, and have yet to be confirmed in other settings. The aim of the present thesis is to evaluate the explanatory power of the PAH in respect of the acquisition of French past tenses, an aspect of the language which constitutes a serious stumbling block for foreign learners, even those at the highest levels of proficiency (Coppieters 1987). The present research applies the PAH to the production of 61 anglophone 'advanced learners' (as defined in Bartning 1997) in a tutored environment. In so doing, it tests concurrent explanations, including the influence of the input, the influence of chunking, and the hypothesis of cyclic development. Finally, it discusses the cotextual and contextual factors that still provoke what Andersen (1991) terms "non-native glitches" at the final stage, as predicted by the PAH. The first part of the thesis provides the theoretical background to the corpus analysis. It opens with a diachronic presentation of the French past tense system, focusing on present areas of competition and developments that emphasize the complexity of the system to be acquired. The concepts of time, grammatical aspect and lexical aspect (Aktionsart) are introduced and discussed in the second chapter, and a distinctive formal representation of the French past tenses is offered in the third chapter. The second part of the thesis is devoted to a corpus analysis. The data gathering procedures and the choice of tasks (oral and written film narratives based on Modern Times, cloze tests and acceptability judgement tests) are described and justified in the research methodology chapter. The research design was shaped by previous studies and consequently allows comparison with these. The second chapter is devoted to the narrative analysis and the third to the grammatical tasks. This section closes with a summary of discoveries and a comparison with previous results. The conclusion addresses the initial research questions in the light of both theory and practice. It shows that the PAH fails to account for the complex phenomenon of past tense development in the acquisitional settings under study, as it adopts a local (the verb phrase) and linear (steady progression towards native usage) approach. It is thus suggested that past tense acquisition rather follows a pendular development as learners reformulate their learning hypotheses and become increasingly able to shift from local to global cues, and so to integrate the influence of cotext and context in their tense choice.
Abstract:
This thesis reviews the main methodological developments in public sector investment appraisal and finds growing evidence that appraisal techniques are not fulfilling their earlier promise. It is suggested that an important reason for this failure lies in the inability of these techniques to handle uncertainty except in a highly circumscribed fashion. It is argued that a more fruitful approach is to strive for flexibility. Investment projects should be formulated with a view to making them responsive to a wide range of possible future events, rather than embodying a solution which is optimal for one configuration of circumstances only. The distinction drawn in economics between the short and the long run is used to examine the nature of flexibility. The concept of long-run flexibility is applied to the pre-investment range of choice open to the decisionmaker. It is demonstrated that flexibility is reduced at a very early stage of decisionmaking by the conventional system of appraisal, which evaluates only a small number of options. The pre-appraisal filtering process is considered further in relation to decisionmaking models. It is argued that for public sector projects the narrowing down of options is best understood in relation to an amended mixed scanning model which places importance on the process by which the 'national interest' is determined. Short-run flexibility deals with operational characteristics, the degree to which particular projects may respond to changing demands when the basic investment is already in place. The tension between flexibility and cost is noted. A short case study on the choice of electricity generating plant is presented. The thesis concludes with a brief examination of the approaches used by successive British governments to public sector investment, particularly in relation to the nationalised industries.
Abstract:
In this work we propose the hypothesis that replacing the current system of representing the chemical entities known as amino acids using Latin letters with one of several possible alternative symbolic representations will bring significant benefits to the human construction, modification, and analysis of multiple protein sequence alignments. We propose ways in which this might be done without prescribing the choice of actual scripts used. Specifically, we propose and explore three ways to encode amino acid texts using novel symbolic alphabets free from precedents. Primary orthographic encoding is the direct substitution of a new alphabet for the standard, Latin-based amino acid code. Secondary encoding imposes static residue groupings onto the orthography of the alphabet by manipulating the shape and/or orientation of amino acid symbols. Tertiary encoding renders each residue as a composite symbol, each such symbol thus representing several alternative amino acid groupings simultaneously. We also propose that the use of a new group-focussed alphabet will free the colouring of amino acid residues, often used as a tool to facilitate the representation or construction of multiple alignments, for other purposes, possibly to indicate dynamic properties of an alignment such as position-wise residue conservation.
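Primary orthographic encoding, as defined above, is a one-to-one substitution; a minimal sketch in Python, using Greek letters purely as arbitrary placeholder glyphs (the paper deliberately does not prescribe the actual script):

# Direct substitution of a novel symbol for each standard one-letter
# amino acid code; the replacement alphabet is a placeholder.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
NEW_GLYPHS  = "αβγδεζηθικλμνξοπρστυ"   # any 20 precedent-free symbols

ENCODE = str.maketrans(AMINO_ACIDS, NEW_GLYPHS)

def encode_primary(sequence: str) -> str:
    """Re-letter an amino acid sequence in the new alphabet."""
    return sequence.upper().translate(ENCODE)

print(encode_primary("MKTAYIAKQR"))   # e.g. "μκτβυθβκοσ"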
Abstract:
Higher education institutions are increasingly using social software tools to support teaching and learning. Despite the fact that social software is often used in a social context, these applications can significantly contribute to the educational experience of a student. However, as the social software domain comprises a considerable diversity of tools, the respective tools can be expected to differ in the way they can contribute to teaching and learning. In this review on the educational use of social software, we systematically analyze and compare the diverse social software tools and identify their contributions to teaching and learning. By integrating established learning theory and the extant literature on the individual social software applications we seek to contribute to a theoretical foundation for social software use and the choice of tools. Case vignettes from several UK higher education institutions are used to illustrate the different applications of social software tools in teaching and learning.
Abstract:
Spray-dried materials are being used increasingly in industries such as food, detergent and pharmaceutical manufacture. Spray-dried sodium carbonate is an important product that has a great propensity to cake; its moisture-sorption properties are very different from those of the crystalline and amorphous species, with a great affinity for atmospheric moisture. This work demonstrates how the noncontact surface analysis of individual particles using atomic force microscopy can highlight the possible mechanisms of unwanted agglomeration. The nondestructive nature of this method allows cycling of localised humidity in situ and repeated scanning of the same particle area. The resulting topography and phase scans showed that humidity cycling caused changes in the distribution of material phases that were not solely dependent on topographical changes. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
The deliberate addition of Gaussian noise to cochlear implant signals has previously been proposed to enhance the time coding of signals by the cochlear nerve. Potentially, the addition of an inaudible level of noise could also have secondary benefits: it could lower the threshold to the information-bearing signal, and by desynchronization of nerve discharges, it could increase the level at which the information-bearing signal becomes uncomfortable. Both these effects would lead to an increased dynamic range, which might be expected to enhance speech comprehension and make the choice of cochlear implant compression parameters less critical (as with a wider dynamic range, small changes in the parameters would have less effect on loudness). The hypothesized secondary effects were investigated with eight users of the Clarion cochlear implant; the stimulation was analogue and monopolar. For presentations in noise, noise at 95% of the threshold level was applied simultaneously and independently to all the electrodes. The noise was found in two-alternative forced-choice (2AFC) experiments to decrease the threshold to sinusoidal stimuli (100 Hz, 1 kHz, 5 kHz) by about 2.0 dB and increase the dynamic range by 0.7 dB. Furthermore, in 2AFC loudness balance experiments, noise was found to decrease the loudness of moderate to intense stimuli. This suggests that loudness is partially coded by the degree of phase-locking of cochlear nerve fibers. The overall gain in dynamic range was modest, and more complex noise strategies, for example, using inhibition between the noise sources, may be required to get a clinically useful benefit. © 2006 Association for Research in Otolaryngology.
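As a hedged sketch of the presentation-in-noise condition described above: independent Gaussian noise at 95% of each electrode's threshold is added to every channel. Interpreting "level" as RMS amplitude is our assumption, and the array shapes and names are invented:

import numpy as np

rng = np.random.default_rng(0)

def add_subthreshold_noise(signals, thresholds, level=0.95):
    """Add independent Gaussian noise at `level` times each electrode's
    threshold amplitude.  signals: (electrodes, samples) array;
    thresholds: per-electrode threshold amplitudes."""
    sigma = level * np.asarray(thresholds)[:, None]     # per-channel noise RMS
    return signals + rng.normal(0.0, 1.0, signals.shape) * sigma

# Usage: 8 analogue monopolar channels, per-electrode thresholds of 0.1.
stim = np.zeros((8, 1000))
noisy = add_subthreshold_noise(stim, np.full(8, 0.1))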