24 results for Runs

in Aston University Research Archive


Relevance:

10.00%

Publisher:

Abstract:

This is the second edition of our Aston Business School (ABS) Good Practice Guide and the enthusiasm of the contributors appears undiminished. I am again reminded that I work with a group of very committed, dedicated and professional colleagues. Once again this publication is produced to celebrate and promote good teaching across the School and to offer encouragement to those imaginative and innovative staff who continue to wish to challenge students to learn to maximum effect. It is hoped that others will pick up some good ideas from the articles contained in this volume. Contributors to this Guide were not chosen because they are the best teachers in the School, although they are undoubtedly all amongst my colleagues who are exponents of enthusiastic and inspiring approaches to learning. The Quality Unit approached these individuals because they declared on their Annual Module Reflection Forms that they were doing something interesting and worthwhile which they thought others might find useful. Amongst those reading the Guide I am sure that there are many other individuals who are trying to operate similar examples of good practice in their teaching, learning and assessment methods. I hope that this publication will provoke these people into providing comments and articles of their own and that these will form the basis of next year's Guide. It may also provoke some people to try these methods in their own teaching. The themes of the articles this year can be divided into two groups. The first theme is the quest to help students to help themselves to learn via student-run tutorials, surprise tests and mock examinations linked with individual tutorials. The second theme is making learning come to life in exciting practical ways by, for example, hands-on workshops and simulations, storytelling, rhetorical questioning and discussion groups. A common theme is one of enthusiasm, reflection and commitment from the lecturers concerned. None of the approaches discussed in this publication is a low-effort activity on the part of the facilitator, but this effort is regarded as worthwhile as a means of creating greater student engagement. As Biggs (2003)[1] says, in his similarly inspiring way, students learn more the less passive they are in their learning. The articles in this publication bear witness to this and much more. Since last year Aston Business School has launched its Research Centre in Higher Education Learning and Management (HELM), which is another initiative to promote excellent learning and teaching. Even before this centre has become fully operational, at least one of the articles in this publication has seen the light of day in the research arena and at least two others are ripe for dissemination to a wider audience via journal publication. More news of our successes in this activity will appear in next year's edition. May I thank the contributors for taking time out of their busy schedules to write the articles this summer, and Julie Green, who runs the ABS Quality Unit, for putting our diverse approaches into a coherent and publishable form and for chasing us when we have needed it! I would also like to thank Ann Morton and her colleagues in the Centre for Staff Development who have supported this publication. During the last year the Centre has further stimulated the learning and teaching life of the School (and the wider University) via their Learning and Teaching Week and sponsorship of Teaching Quality Enhancement Fund (TQEF) projects. Pedagogic excellence is in better health at Aston than ever before – long may this be the case, because this is what life in HE should be about.

Relevance:

10.00%

Publisher:

Abstract:

A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
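
The setup lends itself to a toy simulation. Below is a minimal Python sketch of a GA over binary-weight perceptrons whose (noisy) fitness is the error on a fresh random batch each generation, as described above; the problem size, population size, batch size, Boltzmann selection and bitwise mutation are all illustrative assumptions, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P, B, GENS = 51, 60, 20, 200   # problem size, population, batch size, generations (assumed)
    BETA = 0.2                        # weak selection strength

    teacher = rng.choice([-1, 1], size=N)      # target rule to be generalised
    pop = rng.choice([-1, 1], size=(P, N))     # population of binary-weight perceptrons

    def error(members, patterns):
        """Fraction of patterns each member classifies differently from the teacher."""
        t = np.sign(patterns @ teacher)
        return (np.sign(patterns @ members.T) != t[:, None]).mean(axis=0)

    for _ in range(GENS):
        batch = rng.choice([-1, 1], size=(B, N))        # fresh patterns every generation
        fit = -error(pop, batch)                        # noisy fitness from the finite batch
        w = np.exp(BETA * B * fit); w /= w.sum()        # Boltzmann selection weights
        parents = pop[rng.choice(P, size=(P, 2), p=w)]
        mask = rng.random((P, N)) < 0.5                 # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop = np.where(rng.random((P, N)) < 1 / N, -pop, pop)   # bitwise mutation

    test = rng.choice([-1, 1], size=(2000, N))   # large independent test set
    print("best generalisation error in final population:", error(pop, test).min())

Averaging the best test error over many such runs gives the kind of quantity the theory predicts.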

Relevance:

10.00%

Publisher:

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
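
As a concrete illustration of the macroscopics involved, the sketch below tracks the first three fitness cumulants and the mean pairwise correlation of a toy GA on an additive (onemax-style) problem, then averages the trajectories over independent runs; all sizes and genetic operators are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    N, P, GENS, RUNS = 64, 50, 40, 30      # illustrative sizes

    def macroscopics(pop):
        """Cumulants of the additive phenotype plus mean pairwise correlation."""
        f = pop.sum(axis=1)                         # alleles contribute additively
        k1, k2 = f.mean(), f.var()
        k3 = ((f - k1) ** 3).mean()                 # third cumulant = third central moment
        q = (pop @ pop.T) / N                       # overlaps for +/-1 alleles
        corr = (q.sum() - P) / (P * (P - 1))        # off-diagonal average
        return k1, k2, k3, corr

    def one_run():
        pop = rng.choice([-1, 1], size=(P, N))
        traj = []
        for _ in range(GENS):
            traj.append(macroscopics(pop))
            w = np.exp(0.1 * pop.sum(axis=1)); w /= w.sum()          # Boltzmann selection
            pop = pop[rng.choice(P, size=P, p=w)]
            pop = np.where(rng.random((P, N)) < 1 / N, -pop, pop)    # mutation
        return np.array(traj)

    avg = np.mean([one_run() for _ in range(RUNS)], axis=0)  # macroscopics average well over runs
    print(avg[-1])   # final mean, variance, third cumulant, mean correlation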

Relevance:

10.00%

Publisher:

Abstract:

The operator hairpin ahead of the replicase gene in RNA bacteriophage MS2 contains overlapping signals for binding the coat protein and ribosomes. Coat protein binding inhibits further translation of the gene and forms the first step in capsid formation. The hairpin sequence was partially randomized to assess the importance of this structural element for the bacteriophage and to monitor alternative solutions that would evolve on passaging of the mutant phages. The evolutionary reconstruction of the operator failed in the majority of mutants. Instead, a poor imitation developed, containing only some of the recognition signals for the coat protein. Three mutants were of particular interest in that they contained double nonsense codons in the lysis reading frame that runs through the operator hairpin. The simultaneous reversion of two stop codons into sense codons has a very low probability of occurring. The phage therefore solved the problem by deleting the nonsense signals and, in fact, the complete operator, except for the initiation codon of the replicase gene. Several revertants were isolated with activities ranging from 1% to 20% of wild type. The operator, long thought to be a critical regulator, now appears to be a dispensable element. In addition, the results indicate how RNA viruses can be forced to step back to an attenuated form.

Relevance:

10.00%

Publisher:

Abstract:

Purpose - This paper provides a deeper examination of the fundamentals of commonly-used techniques - such as coefficient alpha and factor analysis - in order to more strongly link the techniques used by marketing and social researchers to their underlying psychometric and statistical rationale. Design/methodology/approach - A wide-ranging review and synthesis of psychometric and other measurement literature both within and outside the marketing field is used to illuminate and reconsider a number of misconceptions which seem to have evolved in marketing research. Findings - The research finds that marketing scholars have generally concentrated on reporting what are essentially arbitrary figures such as coefficient alpha, without fully understanding what these figures imply. It is argued that, if the link between theory and technique is not clearly understood, use of psychometric measure development tools actually runs the risk of detracting from the validity of the measures rather than enhancing it. Research limitations/implications - The focus on one stage of a particular form of measure development could be seen as rather specialised. The paper also runs the risk of increasing the amount of dogma surrounding measurement, which runs contrary to its own spirit. Practical implications - This paper shows that researchers may need to spend more time interpreting measurement results. Rather than simply referring to precedence, one needs to understand the link between measurement theory and actual technique. Originality/value - This paper presents psychometric measurement and item analysis theory in an easily understandable format, and offers an important set of conceptual tools for researchers in many fields. © Emerald Group Publishing Limited.
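
For reference, the arbitrary figure in question is straightforward to compute, which is part of the paper's point: the number itself says little without its psychometric rationale. A minimal implementation of coefficient alpha, with hypothetical Likert-scale responses:

    import numpy as np

    def cronbach_alpha(items):
        """Coefficient alpha from an (n_respondents, k_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
        return (k / (k - 1)) * (1 - item_var / total_var)

    # hypothetical 5-point Likert data: 6 respondents x 4 items
    scores = [[4, 5, 4, 5], [2, 3, 2, 2], [5, 5, 4, 4],
              [3, 3, 3, 4], [1, 2, 2, 1], [4, 4, 5, 5]]
    print(round(cronbach_alpha(scores), 3))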

Relevance:

10.00%

Publisher:

Abstract:

A fundamental tenet of Leader–Member Exchange (LMX) theory is that leaders develop different quality relationships with their employees; however, little research has investigated the impact of LMX differentiation on employee reactions. The current research investigates whether perceptions of LMX variability (the extent to which LMX relationships are perceived to vary within a team) affect employee job satisfaction and wellbeing beyond the effects of personal LMX quality. As LMX variability runs counter to principles of equality and consistency, which are important for maintaining social harmony in groups, it is hypothesized that perceptions of LMX variability will have a negative effect on employee reactions, via their negative impact on perceived team relations. Two samples of employed individuals were used to investigate the hypothesized relationships. In both samples, an individual's perception of LMX variability in their team was negatively related to employee job satisfaction and wellbeing (above the effects of LMX), and this relationship was mediated by reports of relational team conflict.
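
The hypothesized chain (LMX variability -> relational team conflict -> satisfaction/wellbeing, over and above personal LMX) amounts to a standard mediation test. A sketch on synthetic data, fabricated purely to show the regression steps rather than to reproduce the paper's samples or measures:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 300
    lmx = rng.normal(size=n)                                  # personal LMX quality
    variability = rng.normal(size=n)                          # perceived LMX variability
    conflict = 0.5 * variability + rng.normal(size=n)         # variability feeds conflict
    sat = 0.4 * lmx - 0.3 * conflict + rng.normal(size=n)     # conflict lowers satisfaction
    df = pd.DataFrame(dict(lmx=lmx, variability=variability, conflict=conflict, sat=sat))

    total = smf.ols("sat ~ lmx + variability", df).fit()              # effect beyond LMX
    mediator = smf.ols("conflict ~ lmx + variability", df).fit()      # predictor -> mediator
    direct = smf.ols("sat ~ lmx + variability + conflict", df).fit()  # mediator added
    # mediation: the 'variability' coefficient shrinks once conflict is controlled
    print(total.params["variability"], direct.params["variability"])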

Relevance:

10.00%

Publisher:

Abstract:

Background & Aims: Current models of visceral pain processing derived from metabolic brain imaging techniques fail to differentiate between exogenous (stimulus-dependent) and endogenous (non-stimulus-specific) neural activity. The aim of this study was to determine the spatiotemporal correlates of exogenous neural activity evoked by painful esophageal stimulation. Methods: In 16 healthy subjects (8 men; mean age, 30.2 ± 2.2 years), we recorded magnetoencephalographic responses, originating from 8 brain subregions, to 2 runs of 50 painful esophageal electrical stimuli. Subsequently, 11 subjects (6 men; mean age, 31.2 ± 1.8 years) had esophageal cortical evoked potentials recorded on a separate occasion using similar experimental parameters. Results: Earliest cortical activity (P1) was recorded in parallel in the primary/secondary somatosensory cortex and posterior insula (∼85 ms). Significantly later activity was seen in the anterior insula (∼103 ms) and cingulate cortex (∼106 ms; P = .0001). There was no difference between the P1 latency for magnetoencephalography and cortical evoked potential (P = .16); however, neural activity recorded with cortical evoked potential was longer than with magnetoencephalography (P = .001). No sex differences were seen for psychophysical or neurophysiological measures. Conclusions: This study shows that exogenous cortical neural activity evoked by experimental esophageal pain is processed simultaneously in somatosensory and posterior insula regions. Activity in the anterior insula and cingulate - brain regions that process the affective aspects of esophageal pain - occurs significantly later than in the somatosensory regions, and no sex differences were observed with this experimental paradigm. Cortical evoked potential reflects the summation of cortical activity from these brain regions and has sufficient temporal resolution to separate exogenous and endogenous neural activity. © 2005 by the American Gastroenterological Association.

Relevance:

10.00%

Publisher:

Abstract:

We compare the Q parameter obtained from scalar, semi-analytical and full vector models for realistic transmission systems. One set of systems is operated in the linear regime, while another is using solitons at high peak power. We report in detail on the different results obtained for the same system using different models. Polarisation mode dispersion is also taken into account and a novel method to average Q parameters over several independent simulation runs is described. © 2006 Elsevier B.V. All rights reserved.
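
For context, the Q parameter here is the usual decision-level statistic, and how to combine values from independent runs (for example over polarisation mode dispersion realisations) is precisely the subtlety; the sketch below shows two naive combining choices, not the paper's novel averaging method.

    import numpy as np
    from scipy.special import erfc, erfcinv

    def q_parameter(ones, zeros):
        """Q = (mu1 - mu0) / (sigma1 + sigma0) from received decision-level samples."""
        return (ones.mean() - zeros.mean()) / (ones.std(ddof=1) + zeros.std(ddof=1))

    def q_to_ber(q):
        return 0.5 * erfc(q / np.sqrt(2))

    def ber_to_q(ber):
        return np.sqrt(2) * erfcinv(2 * ber)

    qs = np.array([6.1, 5.7, 6.4, 5.2])          # Q values from several runs (illustrative)
    print("plain mean Q:  ", qs.mean())
    print("BER-averaged Q:", ber_to_q(q_to_ber(qs).mean()))   # dominated by the worst runs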

Relevance:

10.00%

Publisher:

Abstract:

The literature on the potential use of liquid ammonia as a solvent for the extraction of aromatic hydrocarbons from mixtures with paraffins, and the application of reflux, has been reviewed. Reference is made to extractors suited to this application. A pilot scale extraction plant was designed comprising a 5 cm diameter by 125 cm high, 50-stage Rotating Disc Contactor with 2 external settlers. Provision was made for operation with, or without, reflux at a pressure of 10 bar and ambient temperature. The solvent recovery unit consisted of an evaporator, compressor and condenser in a refrigeration cycle. Two systems were selected for study, Cumene-n-Heptane-Ammonia and Toluene-Methylcyclohexane-Ammonia. Equilibrium data for the first system were determined experimentally in a specially-designed equilibrium bomb. A technique was developed to withdraw samples under pressure for analysis by chromatography and titration. The extraction plant was commissioned with a kerosine-water system; detailed operating procedures were developed based on a Hazard and Operability Study. Experimental runs were carried out with both ternary ammonia systems. With the system Toluene-Methylcyclohexane-Ammonia, the extraction plant and the solvent recovery facility operated satisfactorily and safely, in accordance with the operating procedures. Experimental data gave reasonable agreement with theory. Recommendations are made for further work with the plant.

Relevance:

10.00%

Publisher:

Abstract:

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split in two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
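
A minimal sketch of the report's second part, assuming nothing beyond standard tools: fit a Gaussian process to the one-step map of Lorenz 63 and iterate it for sequential prediction. The step size, training design, kernel and scikit-learn emulator are illustrative stand-ins for the report's actual choices.

    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

    def step(s, dt=0.05):
        """One simulator time step: integrate the ODE over dt."""
        return solve_ivp(lorenz63, (0.0, dt), s, rtol=1e-8).y[:, -1]

    rng = np.random.default_rng(3)
    X = rng.uniform([-20, -25, 5], [20, 25, 45], size=(300, 3))  # crude space-filling design
    Y = np.array([step(x) for x in X])

    # one GP per output dimension emulates the single-step map
    gps = [GaussianProcessRegressor(ConstantKernel() * RBF([5.0] * 3),
                                    normalize_y=True).fit(X, Y[:, i]) for i in range(3)]

    s_true = s_em = np.array([1.0, 1.0, 20.0])
    for _ in range(50):                     # sequential prediction: emulator eats its own output
        s_true = step(s_true)
        s_em = np.array([gp.predict(s_em.reshape(1, -1))[0] for gp in gps])
    print("divergence after 50 steps:", np.linalg.norm(s_em - s_true))  # chaos makes this grow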

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work was to design, construct, test and operate a novel circulating fluid bed fast pyrolysis reactor system for production of liquids from biomass. The novelty lies in incorporating an integral char combustor to provide autothermal operation. A reactor design methodology was devised which correlated input parameters to process variables, namely temperature, heat transfer and gas/vapour residence time, for both the char combustor and biomass pyrolyser. From this methodology a CFB reactor was designed with integral char combustion for 10 kg/h biomass throughput. A full-scale cold model of the CFB unit was constructed and tested to derive suitable hydrodynamic relationships and performance constraints. Early difficulties encountered with poor solids circulation and inefficient product recovery were overcome by a series of modifications. A total of 11 runs in pyrolysis mode were carried out with a maximum total liquids yield of 61.50% wt on a moisture-and-ash-free (maf) biomass basis, obtained at 500°C and with a 0.46 s gas/vapour residence time. This could be increased to an anticipated 75% wt on a maf basis through improved vapour recovery by direct quenching. The reactor provides a very high specific throughput of 1.12-1.48 kg/h·m² and the lowest gas-to-feed ratio, 1.3-1.9 kg gas/kg feed, compared with other fast pyrolysis processes based on pneumatic reactors, and has good scale-up potential. These features should provide significant capital cost reduction. Results to date suggest that the process is limited by the extent of char combustion. Future work will address resizing of the char combustor to increase overall system capacity, improvement in solid separation and substantially better liquid recovery. Extended testing will provide better evaluation of steady state operation and provide data for process simulation and reactor modelling.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% wt on a dry wood fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large scale collection unit. The liquids collection system was highly efficient and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurement of the condensable product exiting the unit and confirmed that the collection efficiency was in excess of 99% on a mass basis.
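
To give a flavour of how the first two modules chain together, here is a toy version of the ablation-rate-to-capacity calculation; the measured ablation rate is quoted above, while the wood density and blade contact areas are assumed values for illustration only.

    ABLATION_RATE = 0.63e-3    # m/s, measured for pine at 525 °C and 12.1 m/s (from the text)
    WOOD_DENSITY = 500.0       # kg/m^3, dry pine (assumed)

    def reactor_capacity(contact_area_m2):
        """Throughput sustainable by ablation over the blade/biomass contact area."""
        return ABLATION_RATE * contact_area_m2 * WOOD_DENSITY * 3600.0   # kg/hr

    for area in (0.005, 0.01, 0.03):   # m^2, hypothetical contact areas
        print(f"contact area {area} m^2 -> {reactor_capacity(area):.1f} kg/hr")
    # ~0.03 m^2 would account for the ~30 kg/hr theoretical maximum mentioned above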

Relevance:

10.00%

Publisher:

Abstract:

A total pressure apparatus has been developed to measure vapour-liquid equilibrium data on binary mixtures at atmospheric and sub-atmospheric pressures. The method gives isothermal data which can be obtained rapidly. Only measurements of total pressure are made, as a direct function of the synthetic liquid phase composition, the vapour phase composition being deduced through the Gibbs-Duhem relationship. The need to analyse either of the phases is eliminated, and as such the errors introduced by sampling and analysis are removed. An essential requirement is that the pure components be degassed completely, since any deficiency in degassing would introduce errors into the measured pressures. A similarly essential requirement is that the central apparatus be absolutely leak-tight, as any leakage of air either in or out of the apparatus would introduce erroneous pressure readings. The apparatus was commissioned by measuring the saturated vapour pressures of both degassed water and ethanol as a function of temperature. The pressure-temperature data on degassed water were directly compared with data in the literature, with good agreement. Similarly, pressure-temperature data were measured for ethanol, methanol and cyclohexane and, where possible, a direct comparison made with the literature data. Good agreement between the pure component data of this work and those available in the literature demonstrates firstly that a satisfactory degassing procedure has been achieved and secondly that the measurements of pressure-temperature are consistent for any one component; since this is true for a number of components, the measurements of both temperature and pressure are self-consistent and of sufficient accuracy, with an observed compatibility between the precision/accuracy of the separate means of measuring pressure and temperature. The liquid mixtures studied were ethanol-water, methanol-water and ethanol-cyclohexane. The total pressure was measured as the composition inside the equilibrium cell was varied at a set temperature. This gave P-T-x data sets for each mixture at a range of temperatures. A standard fitting package from the literature was used to reduce the raw data to yield y-values and complete the x-y-P-T data sets. A consistency test could not be applied to the P-T-x data set as no y-values were obtained during the experimental measurements. In general, satisfactory agreement was found between the data of this work and those available in the literature. For some runs discrepancies were observed, and further work is recommended to eliminate the problems identified.
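
The data-reduction step works along the following lines. This is a much-simplified, Barker-type sketch assuming an ideal vapour phase, a two-parameter Margules model and invented isothermal P-x numbers; the thesis itself used a standard fitting package from the literature.

    import numpy as np
    from scipy.optimize import least_squares

    P1SAT, P2SAT = 13.3, 10.1      # kPa, pure-component vapour pressures at T (illustrative)
    x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])             # synthetic liquid compositions
    P_meas = np.array([11.5, 13.0, 13.6, 13.5, 13.4])    # measured total pressures (invented)

    def gammas(x1, a12, a21):
        """Two-parameter Margules activity coefficients."""
        x2 = 1.0 - x1
        g1 = np.exp(x2**2 * (a12 + 2.0 * (a21 - a12) * x1))
        g2 = np.exp(x1**2 * (a21 + 2.0 * (a12 - a21) * x2))
        return g1, g2

    def pressure_residual(params):
        g1, g2 = gammas(x1, *params)
        return x1 * g1 * P1SAT + (1.0 - x1) * g2 * P2SAT - P_meas

    a12, a21 = least_squares(pressure_residual, [0.5, 0.5]).x   # fit model to P-x data only
    g1, _ = gammas(x1, a12, a21)
    y1 = x1 * g1 * P1SAT / P_meas      # vapour compositions recovered without sampling a phase
    print(a12, a21, y1)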

Relevance:

10.00%

Publisher:

Abstract:

A method has been constructed for the solution of a wide range of chemical plant simulation models including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2. Problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can either be approximated by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over finite elements. The number of internal collocation points can vary by finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid-points, thus enabling the user to check the accuracy of the solution between collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations. When there are many differential equations or the upper integration limit should be selected optimally then this approach should be used. The portability of the package has been addressed by converting the package from VAX FORTRAN 77 into IBM PC FORTRAN 77 and into SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
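
On a single element, the simultaneous solution strategy reduces to something like the sketch below: approximate y(t) by a Lagrange polynomial through collocation nodes, impose the ODE residuals as equality constraints, and hand the result to an NLP solver (scipy's SLSQP stands in for GRG2/VF13AD; the test equation dy/dt = -y and the node choice are illustrative).

    import numpy as np
    from scipy.optimize import minimize

    nodes = np.array([0.0, 0.1127, 0.5, 0.8873, 1.0])   # ends plus 3 Gauss points on [0, 1]

    def lagrange_deriv_matrix(t):
        """D[i, j] = derivative of the j-th Lagrange basis polynomial at t[i]."""
        n = len(t)
        D = np.zeros((n, n))
        for j in range(n):
            for i in range(n):
                if i == j:
                    D[i, j] = sum(1.0 / (t[i] - t[k]) for k in range(n) if k != j)
                else:
                    num = np.prod([t[i] - t[k] for k in range(n) if k not in (i, j)])
                    den = np.prod([t[j] - t[k] for k in range(n) if k != j])
                    D[i, j] = num / den
        return D

    D = lagrange_deriv_matrix(nodes)

    def residuals(y):
        """Initial condition y(0) = 1, then dy/dt + y = 0 at the remaining nodes."""
        r = D @ y + y
        return np.concatenate(([y[0] - 1.0], r[1:]))

    sol = minimize(lambda y: 0.0, np.ones(len(nodes)),          # trivial objective: feasibility
                   constraints={"type": "eq", "fun": residuals})
    print(sol.x)               # collocation solution at the nodes
    print(np.exp(-nodes))      # exact solution e^{-t} for comparison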

Relevance:

10.00%

Publisher:

Abstract:

The objectives of this research were to investigate the performance of a rubberwood gasifier and engine with electricity generation and to identify opportunities for the implementation of such a system in Malaysia. The experimental work included the design, fabrication and commissioning of a throated downdraft gasifier in Malaysia. The gasifier was subsequently used to investigate the effect of moisture content, dry wood capacity and particle size of rubberwood on gasifier performance. Additional experiments were also conducted to investigate the influence of two different nozzle numbers and two different throat diameters on tar cracking. A total of 101 runs were completed during the course of the research. From the experimental data, the average mass balance was found to be 92.65%. The average energy balance over the gasifier to hot raw gas was 98.7%, to cold clean gas was 102.4% and over the complete system was 101.9%. The heat loss from the gasifier was estimated to range from 10-26% of the chemical energy of the feedstock. From the downstream operation, the heat loss was estimated to range from 17-37% of the chemical energy of the rubberwood feedstock. The maximum throughput for stable operation was found to be 60-70% of the maximum dry wood capacity. The gasifier was found to have a maximum turndown ratio of 5:1. It is also postulated that the turndown behaviour of the gasifier arises from a `bubble' effect at the gasification zone, and this `bubble theory' hypothesis is explained. For stable power output, the working range of the engine was found to be 5-33.5 kWe. The thermal efficiency and diesel displacement of the engine were found to be 17-18% and 65-70% respectively. The research also showed that rubberwood gasification in Malaysia is feasible if the price of diesel is above MR35/l and the price of wood is below MR120/tonne.