43 results for Analysis of teaching process
Abstract:
The objective of the thesis was to analyse several process configurations for the production of electricity from biomass. Process simulation models using AspenPlus, aimed at calculating the industrial performance of power plant concepts, were built, tested, and used for analysis. The criteria used in the analysis were performance and cost. All of the advanced systems appear to have higher efficiencies than the commercial reference, the Rankine cycle. However, advanced systems typically have a higher cost of electricity (COE) than the Rankine power plant. High efficiencies do not reduce fuel costs enough to compensate for the high capital costs of advanced concepts. The successful reduction of capital costs would appear to be the key to the introduction of the new systems. Capital costs account for a considerable, often dominant, part of the cost of electricity in these concepts. All of the systems have higher specific investment costs than the conventional industrial alternative, i.e. the Rankine power plant. Combined heat and power production (CHP) is currently the only industrial area of application in which bio-power costs can be reduced enough to make them competitive. Based on the results of this work, AspenPlus is an appropriate simulation platform. However, the usefulness of the models could be improved if a number of unit operations were modelled in greater detail. The dryer, gasifier, fast pyrolysis, gas engine and gas turbine models could be improved.
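As a rough illustration of the cost trade-off described here (not the thesis's actual costing model; the function names and all figures below are hypothetical), the cost of electricity can be built up from an annualised capital charge, fixed operating costs and fuel costs, showing how a higher-efficiency concept can still end up with a higher COE when its capital cost dominates:

```python
def annuity_factor(rate, years):
    """Capital recovery factor used to annualise the investment."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def cost_of_electricity(capex, o_and_m, fuel_price, efficiency,
                        annual_mwh_e, rate=0.08, years=20):
    """Levelised cost of electricity per MWh (illustrative sketch only).

    capex        : total investment cost (currency)
    o_and_m      : fixed annual operating cost (currency/year)
    fuel_price   : fuel cost per MWh of fuel energy
    efficiency   : net electrical efficiency (MWh_e / MWh_fuel)
    annual_mwh_e : net electricity produced per year (MWh_e)
    """
    capital_charge = capex * annuity_factor(rate, years)   # currency/year
    fuel_cost = fuel_price * annual_mwh_e / efficiency     # currency/year
    return (capital_charge + o_and_m + fuel_cost) / annual_mwh_e

# Invented numbers: the advanced concept saves fuel through its higher
# efficiency, but the larger capital charge still dominates the COE.
rankine = cost_of_electricity(capex=30e6, o_and_m=1.2e6, fuel_price=15.0,
                              efficiency=0.30, annual_mwh_e=150_000)
advanced = cost_of_electricity(capex=60e6, o_and_m=1.5e6, fuel_price=15.0,
                               efficiency=0.42, annual_mwh_e=150_000)
print(f"Rankine COE:  {rankine:.1f} per MWh")
print(f"Advanced COE: {advanced:.1f} per MWh")
```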
Abstract:
A study has been made, using High Pressure Liquid Chromatography, of the migration of TMQ (a quinoline type) and 6PPD (a paraphenylenediamine type) antidegradants from a tyre sidewall compound into adjacent casing and liner compounds containing no antidegradant. Migration takes place at a rapid rate, even during the vulcanisation of the composite. After 4000 hours of ageing in nitrogen at 100 °C, there is a higher level of antidegradants in the casing than in the sidewall; an equilibrium distribution is not obtained. After 114 days at 50 °C and 95% relative humidity, the level of solvent-extractable 6PPD fell to zero, but subsequent ageing for 2 years in 50 pphm ozone showed no evidence of sidewall cracking. It is suggested that the antidegradant is still active but linked to the polymer chain. An analytical method for the type and amount of sulphenamide accelerators in vulcanised rubber compounds has been developed. During the vulcanisation process the accelerators decay, liberating specific amines, which were solvent extracted, derivatised with 1-chloro-2,4-dinitrobenzene, and the yellow-coloured zwitterion analysed using High Pressure Liquid Chromatography. The decay of the accelerator and sulphur during the vulcanisation process has been studied. It has been demonstrated that sulphur crosslinking with a styrene-butadiene polymer is a first-order reaction, after an initial period during which the accelerator content falls to zero. Variations in sulphur to accelerator ratios gave consistent rate constants for the crosslinking, except for a sulphur level of less than 1% by weight and a ratio of accelerator to sulphur of 1:1.3. The retention time of the reaction product between sulphur and accelerator from an HPLC column changes with cure time, showing that the precursor to crosslinking is an ever-changing material. One of these reaction products has been analysed.
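The first-order kinetics claim can be checked in the usual way by fitting the logarithm of the residual free sulphur against cure time; the sketch below uses invented data and is only meant to show the method, not the thesis's measurements:

```python
import numpy as np

# Hypothetical cure data: time (min) and residual free sulphur (% by weight),
# sampled after the initial period in which the accelerator has decayed.
time_min = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 40.0])
free_sulphur = np.array([1.80, 1.35, 1.02, 0.76, 0.57, 0.32])

# For a first-order reaction S(t) = S0 * exp(-k t), ln S is linear in t,
# so a straight-line fit of ln S against time gives the rate constant k.
slope, intercept = np.polyfit(time_min, np.log(free_sulphur), 1)
rate_constant = -slope           # k, per minute
s0 = np.exp(intercept)           # extrapolated initial free sulphur

print(f"first-order rate constant k = {rate_constant:.3f} 1/min")
print(f"extrapolated S0 = {s0:.2f} % by weight")
```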
Abstract:
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
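A minimal sketch of this kind of inference is given below. It is not the enhanced MCMC scheme or the sparse sequential Bayes algorithm of the thesis: it uses an invented, sign-blind forward model (so the posterior is multimodal, loosely mimicking the direction ambiguity of scatterometer retrievals), a squared-exponential Gaussian process prior, and a plain random-walk Metropolis sampler. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- toy setting (illustrative only, not the scatterometer set-up) ----
n = 30
x_grid = np.linspace(0.0, 1.0, n)

def gp_cov(x, length_scale=0.2, variance=1.0):
    """Squared-exponential covariance matrix for a zero-mean GP prior."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

K = gp_cov(x_grid) + 1e-8 * np.eye(n)
K_inv = np.linalg.inv(K)
chol = np.linalg.cholesky(K)

def forward(f):
    """Non-linear observation model; squaring loses the sign, so the
    posterior over f is multimodal."""
    return f ** 2

truth = chol @ rng.standard_normal(n)
sigma_obs = 0.05
y = forward(truth) + sigma_obs * rng.standard_normal(n)

def log_posterior(f):
    log_prior = -0.5 * f @ K_inv @ f
    log_lik = -0.5 * np.sum((y - forward(f)) ** 2) / sigma_obs ** 2
    return log_prior + log_lik

# ---- plain random-walk Metropolis over the latent GP function ----
f_cur = np.zeros(n)
lp_cur = log_posterior(f_cur)
samples = []
for step in range(20_000):
    f_prop = f_cur + 0.05 * (chol @ rng.standard_normal(n))
    lp_prop = log_posterior(f_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        f_cur, lp_cur = f_prop, lp_prop
    if step % 20 == 0:
        samples.append(f_cur.copy())

samples = np.array(samples)
print("posterior mean of |f| at mid-point:", np.abs(samples[:, n // 2]).mean())
```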
Abstract:
Simplification of texts has traditionally been carried out by replacing words and structures with appropriate semantic equivalents in the learner's interlanguage, omitting whichever items prove intractable, and thereby bringing the language of the original within the scope of the learner's transitional linguistic competence. This kind of simplification focuses mainly on the formal features of language. The simplifier can, on the other hand, concentrate on making explicit the propositional content and its presentation in the original, in order to bring what is communicated in the original within the scope of the learner's transitional communicative competence. In this case, simplification focuses on the communicative function of the language. Up to now, however, approaches to the problem of simplification have been mainly concerned with the first kind, using the simplifier's intuition as to what constitutes difficulty for the learner. There appear to be few objective principles underlying this process. The main aim of this study is to investigate the effect of simplification on the communicative aspects of narrative texts. These include the manner in which narrative units at higher levels of organisation are structured and presented, and also the temporal and logical relationships between lower-level structures such as sentences and clauses. The intention is to establish an objective approach to the problem of simplification, based on a set of principled procedures which could be used as a guideline in the simplification of material for foreign students at an advanced level.
Abstract:
It is widely accepted that the Thatcher years and their immediate aftermath were associated with substantive social and organizational change. The privatisation programme, 'the rolling back of the State', prosecuted by the successive Conservative Governments from 1979 to 1997, was a central pillar of Governmental policy. This thesis seeks to engage with privatisation through the study of CoastElectric, a newly privatised Regional Electricity Company. The thesis contributes to the extant understanding of the dynamics of organizational change in four major ways. Firstly, the study of CoastElectric addresses senior management decision making within the organization: in particular, it attempts to make sense of 'why' particular decisions were made. The theoretical backdrop to this concern draws on the concepts of normalization, cultural capital and corporate fashion. The argument presented in this thesis is that the decision-making broadly corresponded with what could be considered to be at the vanguard of managerialist thought. However, a detailed analysis suggested that at different junctures in CoastElectric's history there were differences in the approach to decision making that warranted further analysis. The most notable finding was that the relative levels of new managerialist cultural capital possessed by the decision-making elite had an important bearing upon whether a decision was formulated endogenously or exogenously, with the assistance of cultural intermediaries such as management consultants. The thesis demonstrates the importance of the broader discourse of new managerialism in shaping what is considered to be a 'commonsensical, rational' strategy. The second concern of this thesis is the process of organizational change. The study of CoastElectric attempts to provide a rich account of the dynamics of organizational change. This is realized, first, by examining the pre-existing context of the organization and, second, by analyzing the power politics of change interventions. The master concepts utilised in this endeavour are: dividing practices; the establishment of violent hierarchies between competing discourses; symbolic violence; critical turning points; recursiveness; creative destruction; legitimation strategies; and the reconstitution of subjects in the workplace.
Abstract:
The purpose of this study is to develop econometric models to better understand the economic factors affecting inbound tourist flows from each of six origin countries that contribute to Hong Kong's international tourism demand. To this end, we test alternative cointegration and error correction approaches to examine the economic determinants of tourist flows to Hong Kong, and to produce accurate econometric forecasts of inbound tourism demand. Our empirical findings show that permanent income is the most significant determinant of tourism demand in all models. The variables of own price, weighted substitute prices, trade volume, the share price index (as an indicator of changes in wealth in origin countries), and a dummy variable representing the Beijing incident (1989) are also found to be important determinants for some origin countries. The average long-run income and own-price elasticities were 2.66 and -1.02, respectively. It was hypothesised that permanent income is a better explanatory variable of long-haul tourism demand than current income. A novel approach (a grid search process) has been used to empirically derive the weights to be attached to the lagged income variable for estimating permanent income. The results indicate that permanent income, estimated with empirically determined and relatively small weighting factors, was capable of producing better results than the current income variable in explaining long-haul tourism demand. This finding suggests that the use of current income in previous empirical tourism demand studies may have produced inaccurate results. The share price index, as a measure of wealth, was also found to be significant in two models. Studies of tourism demand rarely include wealth as an explanatory variable in forecasting long-haul tourism demand; however, finding a satisfactory proxy for wealth common to different countries is problematic. This study indicates that error correction models (ECMs) based on the Engle-Granger (1987) approach produce more accurate forecasts than ECMs based on the Pesaran and Shin (1998) and Johansen (1988, 1991, 1995) approaches for all of the long-haul markets and Japan. Overall, ECMs produce better forecasts than the OLS, ARIMA and naïve models, indicating the superiority of a cointegration approach for tourism demand forecasting. The results show that permanent income is the most important explanatory variable for tourism demand from all countries, but there are substantial variations between countries, with the long-run elasticity ranging between 1.1 for the U.S. and 5.3 for the U.K. Price is the next most important variable, with long-run elasticities ranging between -0.8 for Japan and -1.3 for Germany, and short-run elasticities ranging between -0.14 for Germany and -0.7 for Taiwan. The fastest growing market is Mainland China. The findings have implications for policies and strategies on investment, marketing promotion and pricing.
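One plausible reading of the grid-search construction of permanent income is sketched below, with invented data; the thesis's exact weighting scheme and regression specification may differ. The idea is simply to search over a weighting factor applied to current versus lagged income and keep the value that best explains demand:

```python
import numpy as np

def permanent_income(income, w):
    """Permanent income as a geometrically weighted sum of current and past
    income (one plausible reading of the grid-search construction; the exact
    scheme used in the thesis may differ)."""
    y = np.empty_like(income, dtype=float)
    y[0] = income[0]
    for t in range(1, len(income)):
        y[t] = w * income[t] + (1.0 - w) * y[t - 1]
    return y

def fit_r2(x, demand):
    """R-squared of a simple log-log regression of demand on income."""
    X = np.column_stack([np.ones_like(x), np.log(x)])
    beta, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
    resid = np.log(demand) - X @ beta
    ss_tot = np.sum((np.log(demand) - np.log(demand).mean()) ** 2)
    return 1.0 - (resid @ resid) / ss_tot

# Hypothetical annual series for one origin country.
income = np.array([100, 104, 109, 112, 118, 121, 128, 133, 140, 146], float)
arrivals = np.array([50, 55, 63, 66, 75, 78, 90, 96, 108, 117], float)

# Grid search over the weighting factor, keeping the value with the best fit.
grid = np.linspace(0.05, 1.0, 20)
best_w = max(grid, key=lambda w: fit_r2(permanent_income(income, w), arrivals))
print("best weighting factor:", round(best_w, 2))
```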
Abstract:
This dissertation studies the process of operations systems design within the context of the manufacturing organization. Using the DRAMA (Design Routine for Adopting Modular Assembly) model as developed by a team from the IDOM Research Unit at Aston University as a starting point, the research employed empirically based fieldwork and a survey to investigate the process of production systems design and implementation within four UK manufacturing industries: electronics assembly, electrical engineering, mechanical engineering and carpet manufacturing. The intention was to validate the basic DRAMA model as a framework for research enquiry within the electronics industry, where the initial IDOM work was conducted, and then to test its generic applicability, further developing the model where appropriate, within the other industries selected. The thesis contains a review of production systems design theory and practice prior to presenting thirteen industrial case studies of production systems design from the four industry sectors. The results and analysis of the postal survey into production systems design are then presented. The strategic decisions of manufacturing and their relationship to production systems design, and the detailed process of production systems design and operation are then discussed. These analyses are used to develop the generic model of production systems design entitled DRAMA II (Decision Rules for Analysing Manufacturing Activities). The model contains three main constituent parts: the basic DRAMA model, the extended DRAMA II model showing the imperatives and relationships within the design process, and a benchmark generic approach for the design and analysis of each component in the design process. DRAMA II is primarily intended for use by researchers as an analytical framework of enquiry, but is also seen as having application for manufacturing practitioners.
Abstract:
Orthodox contingency theory links effective organisational performance to compatible relationships between the environment and organisation strategy and structure, and assumes that organisations have the capacity to adapt as the environment changes. Recent contributions to the literature on organisation theory claim that the key to effective performance is effective adaptation, which in turn requires the simultaneous reconciliation of efficiency and innovation, afforded by a unique environment-organisation configuration. The literature on organisation theory recognises the continuing confusion caused by the fragmented and often conflicting results from cross-sectional studies. Although the case is made for longitudinal studies which comprehensively describe the evolving relationship between the environment and the organisation, there is little to suggest how such studies should be executed in practice. Typically the choice is between the approaches of the historicised case study and the statistical analysis of large populations, which examine the relationship between environment and organisation strategy and/or structure and ignore the product-process relationship. This study combines the historicised case study and the multi-variable and ordinal scale approach of statistical analysis to construct an analytical framework which tracks and exposes the environment-organisation-performance relationship over time. The framework examines changes in the environment, strategy and structure, and uniquely includes an assessment of the organisation's product-process relationship and its contribution to organisational efficiency and innovation. The analytical framework is applied to examine the evolving environment-organisation relationship of two organisations in the same industry over the same twenty-five year period to provide a sector perspective of organisational adaptation. The findings demonstrate the significance of the environment-organisation configuration to the scope and frequency of adaptation, and suggest that the level of sector homogeneity may be linked to the level of product-process standardisation.
Abstract:
A detailed literature survey confirmed cold roll-forming to be a complex and little-understood process. In spite of its growing value, the process remains largely un-automated, with few principles used in the set-up of the rolling mill. This work concentrates on experimental investigations of operating conditions in order to gain a scientific understanding of the process. The operating conditions are: inter-pass distance, roll load, roll speed and horizontal roll alignment. Fifty tests have been carried out under varied operating conditions, measuring section quality and longitudinal straining to give a picture of bending. A channel section was chosen for its simplicity and compatibility with previous work. Quality was measured in terms of vertical bow, twist and cross-sectional geometric accuracy, and a complete method of classifying quality has been devised. The longitudinal strain profile was recorded by means of strain gauges attached to the strip surface at five locations. Parameter control is shown to be important in allowing consistency in section quality. At present, rolling mills are constructed with large tolerances on operating conditions. By reducing the variability in parameters, section consistency is maintained and mill down-time is reduced. Roll load, alignment and differential roll speed are all shown to affect quality, and can be used to control quality. Set-up time is reduced by improving the design of the mill so that parameter values can be measured and set without the need for judgment by eye. Values of parameters can be guided by models of the process, although elements of experience are still unavoidable. Despite increased parameter control, section quality is variable, if only due to variability in strip material properties. Parameters must therefore be changed during rolling. Ideally this can take place by closed-loop feedback control. Future work lies in overcoming the problems connected with this control.
Abstract:
The thesis examines Kuhn's (1962, 1970) concept of paradigm, assesses how it is employed for mapping intellectual terrain in the social sciences, and evaluates its use in research based on multiple theory positions. In so doing it rejects both the thesis of total paradigm 'incommensurability' (Kuhn, 1962) and that of liberal 'translation' (Popper, 1970), in favour of a middle ground through the 'language-game of everyday life' (Wittgenstein, 1953). The thesis ultimately argues for the possibility of being 'trained into' new paradigms, given the premise that 'unorganised experience cannot order perception' (Phillips, 1977). In conducting multiple paradigm research, the analysis uses the Burrell and Morgan (1979) model to examine the work organisation of a large provincial fire service. This analysis accounts for, firstly, a 'functionalist' assessment of work design, demonstrating inter alia the decrease in reported motivation with length of service; secondly, an 'interpretive' portrayal of the daily accomplishment of task routines, highlighting the discretionary and negotiated nature of the day's events; thirdly, a 'radical humanist' analysis of workplace ideology, demonstrating the hegemonic role of officer training practices; and finally, a 'radical structuralist' description of the labour process, focusing on the establishment of a 'normal working day'. Although the argument is made for the possibility of conducting multiple paradigm research, the conclusion stresses the many institutional pressures serving to offset development.
Abstract:
This report describes the practice of teamwork as expressed in case conferences for care of the elderly, and evaluates the effectiveness of case conferences in their contribution to care. The study involved the observation of more than two hundred case conferences in sixteen locations throughout the West Midlands, in which one thousand seven hundred and three participants were involved. A related investigation of service outcomes involved an additional ninety-six patients who were interviewed in their homes. The purpose of the study was to determine whether the practice of teamwork and decision-making in case conferences is a productive and cost-effective method of working. Preliminary exploration revealed the extent to which the team approach is part of the organisational culture, which, it is asserted, serves to perpetuate the mythical value of team working. The study has demonstrated an active subscription to the case conference approach, yet has revealed many weaknesses, not least of which is clear evidence that certain team members are inhibited in their contribution. Further, the decisional process in case conferences has little consequence for care outcomes. Where outcomes are examined there is evidence of service inadequacy. This work presents a challenge to professionals to confront their working practices with honesty and with vision, in the quest for the best and most cost-effective service to patients.
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to show that a much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal to noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
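As a small illustration of the single-channel, dynamical-systems viewpoint (not code from the thesis; the signal and all parameters below are invented), a time-delay embedding reconstructs state vectors from one unaveraged channel, which can then feed further nonlinear analysis:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a single-channel series (Takens-style).

    Returns an array of shape (len(x) - (dim - 1) * tau, dim) whose rows are
    the reconstructed state vectors [x(t), x(t + tau), ..., x(t + (dim-1) tau)].
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau: k * tau + n] for k in range(dim)])

# Surrogate "single-channel MEG" trace: a noisy, amplitude-modulated oscillation.
rng = np.random.default_rng(1)
t = np.arange(5000) * 1e-3
signal = np.sin(2 * np.pi * 10 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.5 * t))
trace = signal + 0.2 * rng.standard_normal(t.size)

states = delay_embed(trace, dim=5, tau=7)
print(states.shape)   # reconstructed state vectors for further nonlinear analysis
```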
Abstract:
A key feature of ‘TESOL Quarterly’, a leading journal in the world of TESOL/applied linguistics, is its ‘Forum’ section, which invites ‘responses and rebuttals’ from readers to any of its articles. These ‘responses or rebuttals’ form the focus of this research. In the interchanges between readers reacting to earlier research articles in TESOL Quarterly and authors responding to the said reaction, I examine the texts for evidence of genre-driven structure, whether shared between both ‘reaction’ and ‘response’ sections, or peculiar to each section, and attempt to determine the precise nature of the intended communicative purpose in particular and the implications for academic debate in general. The intended contribution of this thesis is to provide an analysis of how authors of research articles and their critics pursue their efforts beyond the research article which precipitated these exchanges in order to be recognized by their discourse community as, in the terminology of Swales (1981:51), ‘Primary Knowers’. Awareness of any principled generic process identified in this thesis may be of significance to practitioners in the applied linguistics community in their quest to establish academic reputation and in their pursuit of professional development. These findings may also be of use in triggering productive community discussion as a result of the questions they raise concerning the present nature of academic debate. Looking beyond the construction and status of the texts themselves, I inquire into the kind of ideational and social organization such exchanges keep in place and examine an alternative view of interaction. This study breaks new ground in two major ways. To the best of my knowledge, it is the first exploration of a bipartite, intertextual structure laying claim to genre status. Secondly, in its recourse to the comments of the writers themselves rather than relying exclusively on the evidence of their texts, as is the case with most studies of genre, this thesis offers an expanded opportunity to discuss perhaps the most interesting aspects of genre analysis: the light it throws on social ends and the role of genre in determining the nature of current academic debate as it emerges here.
Abstract:
The finite element process is now used almost routinely as a tool of engineering analysis. From early days, a significant effort has been devoted to developing simple, cost-effective elements which adequately fulfil accuracy requirements. In this thesis we describe the development and application of one of the simplest elements available for the statics and dynamics of axisymmetric shells. A semi-analytic truncated cone stiffness element has been formulated and implemented in a computer code: it has two nodes with five degrees of freedom at each node, circumferential variations in the displacement field are described in terms of trigonometric series, transverse shear is accommodated by means of a penalty function, and rotary inertia is allowed for. The element has been tested in a variety of applications in the statics and dynamics of axisymmetric shells subjected to a variety of boundary conditions. Good results have been obtained for thin and thick shell cases.
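For illustration only, the circumferential trigonometric expansion of such a semi-analytic element's five nodal freedoms typically takes the following symmetric-harmonic form; the thesis's exact interpolation may differ, and the antisymmetric terms are omitted here:

```latex
% Illustrative circumferential Fourier expansion for the five shell freedoms
% (displacements u, v, w and rotations beta_s, beta_theta); n is the harmonic
% number and s the meridional coordinate. Symmetric harmonics only.
\[
\begin{aligned}
u(s,\theta) &= \sum_{n=0}^{N} u_n(s)\cos n\theta, &
v(s,\theta) &= \sum_{n=0}^{N} v_n(s)\sin n\theta, &
w(s,\theta) &= \sum_{n=0}^{N} w_n(s)\cos n\theta,\\
\beta_s(s,\theta) &= \sum_{n=0}^{N} \beta_{s,n}(s)\cos n\theta, &
\beta_\theta(s,\theta) &= \sum_{n=0}^{N} \beta_{\theta,n}(s)\sin n\theta.
\end{aligned}
\]
```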
Abstract:
Fast pyrolysis of biomass produces a liquid bio-oil that can be used for electricity generation. Bio-oil can be stored and transported, so it is possible to decouple the pyrolysis process from the generation process. This allows each process to be separately optimised. It is necessary to have an understanding of the transport costs involved in order to carry out techno-economic assessments of combinations of remote pyrolysis plants and generation plants. Published fixed and variable costs for freight haulage have been used to calculate the transport cost for trucks running between field stores and a pyrolysis plant. It was found that the key parameter for estimating these costs was the number of round trips a day a truck could make, rather than the distance covered. This zone costing approach was used to estimate the transport costs for a range of pyrolysis plant sizes for willow woodchips and baled miscanthus. The possibility of saving transport costs by producing bio-oil near to the field stores and transporting the bio-oil to a central plant was investigated, and it was found that this would only be cost-effective for large generation plants.
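A minimal sketch of the zone-costing idea, with invented haulage figures (none of the numbers or parameter names below come from the thesis), shows why the number of round trips a truck can complete in a day, rather than distance alone, drives the cost per tonne:

```python
def transport_cost_per_tonne(round_trip_km, avg_speed_kmh, load_unload_h,
                             payload_t, fixed_cost_per_day, variable_cost_per_km,
                             working_hours=9.0):
    """Zone-costing style estimate of biomass haulage cost (illustrative only).

    The controlling quantity is how many round trips a truck can complete in a
    working day: the fixed daily cost is spread over fewer loads as soon as an
    extra trip no longer fits into the day.
    """
    trip_time_h = round_trip_km / avg_speed_kmh + load_unload_h
    trips_per_day = max(1, int(working_hours // trip_time_h))
    daily_cost = (fixed_cost_per_day
                  + variable_cost_per_km * round_trip_km * trips_per_day)
    return daily_cost / (payload_t * trips_per_day)

# Two field stores in different "zones": a short and a long round trip.
for km in (30, 90):
    cost = transport_cost_per_tonne(round_trip_km=km, avg_speed_kmh=50.0,
                                    load_unload_h=1.0, payload_t=24.0,
                                    fixed_cost_per_day=350.0,
                                    variable_cost_per_km=0.9)
    print(f"{km:>3} km round trip: {cost:.2f} per tonne")
```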