82 results for Models performance
Abstract:
This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split in two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
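To make the one-step emulation and sequential-prediction idea concrete, the following is a minimal sketch in Python, not the MUCM implementation: a zero-mean Gaussian-process regression with a squared-exponential kernel is fitted to simulator runs of a single Euler time step of Lorenz 63, and the emulator is then iterated to give a sequential prediction. The training design, kernel, hyperparameters and the Euler integrator are all illustrative assumptions.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 63 state by one Euler time step (the 'simulator')."""
    dx = sigma * (x[1] - x[0])
    dy = x[0] * (rho - x[2]) - x[1]
    dz = x[0] * x[1] - beta * x[2]
    return x + dt * np.array([dx, dy, dz])

def sq_exp_kernel(A, B, length=5.0, variance=100.0):
    """Squared-exponential covariance between two sets of input points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length**2)

# Training design: random states in a box around the attractor (illustrative).
rng = np.random.default_rng(0)
X = rng.uniform([-20, -25, 0], [20, 25, 50], size=(200, 3))
Y = np.array([lorenz63_step(x) for x in X])          # simulator outputs

K = sq_exp_kernel(X, X) + 1e-6 * np.eye(len(X))      # jitter for stability
K_inv_Y = np.linalg.solve(K, Y)                       # shared by all 3 outputs

def emulate_step(x):
    """GP posterior mean of the next state given the current state."""
    k_star = sq_exp_kernel(x[None, :], X)             # 1 x n cross-covariance
    return (k_star @ K_inv_Y).ravel()

# Sequential prediction: feed the emulator's output back in as the next input.
state_true = np.array([1.0, 1.0, 20.0])
state_emul = state_true.copy()
for t in range(100):
    state_true = lorenz63_step(state_true)
    state_emul = emulate_step(state_emul)
print("true:", state_true, "emulated:", state_emul)
```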
Abstract:
Over recent years much has been learned about the way in which depth cues are combined (e.g. Landy et al., 1995). The majority of this work has used subjective measures, a rating scale or a point of subjective equality, to deduce the relative contributions of different cues to perception. We have adopted a very different approach by using two interval forced-choice (2IFC) performance measures and a signal processing framework. We performed summation experiments for depth cue increment thresholds between pairs of pictorial depth cues in displays depicting slanted planar surfaces made from arrays of circular 'contrast' elements. Summation was found to be ideal when size-gradient was paired with contrast-gradient for a wide range of depth-gradient magnitudes in the null stimulus. For a pairing of size-gradient and linear perspective, substantial summation (> 1.5 dB) was found only when the null stimulus had intermediate depth gradients; when flat or steeply inclined surfaces were depicted, summation was diminished or abolished. Summation was also abolished when one of the target cues was (i) not a depth cue, or (ii) added in conflict. We conclude that vision has a depth mechanism for the constructive combination of pictorial depth cues and suggest two generic models of summation to describe the results. Using similar psychophysical methods, Bradshaw and Rogers (1996) revealed a mechanism for the depth cues of motion parallax and binocular disparity. Whether this is the same or a different mechanism from the one reported here awaits elaboration.
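For reference, one conventional way of writing generic summation models of this kind in the psychophysical literature is a Minkowski (power-summation) rule over the single-cue responses, with summation reported as a threshold ratio in decibels. This is a standard textbook form, not a formula quoted from the abstract, and the two models suggested by the authors may differ in detail.

```latex
% Generic Minkowski combination of two cue signals, with summation expressed
% as a threshold ratio in decibels. m = 1 gives ideal (linear) summation
% within a single mechanism; larger exponents give progressively weaker,
% probability-summation-like behaviour.
\[
  R_{\text{pair}} = \bigl( R_{\text{size}}^{\,m} + R_{\text{contrast}}^{\,m} \bigr)^{1/m},
  \qquad
  \text{summation (dB)} = 20 \log_{10}\!\left(\frac{T_{\text{single}}}{T_{\text{pair}}}\right).
\]
```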
Abstract:
The objective of the thesis was to analyse several process configurations for the production of electricity from biomass. Process simulation models using AspenPlus, aimed at calculating the industrial performance of power plant concepts, were built, tested, and used for analysis. The criteria used in the analysis were performance and cost. All of the advanced systems appear to have higher efficiencies than the commercial reference, the Rankine cycle. However, advanced systems typically have a higher cost of electricity (COE) than the Rankine power plant. High efficiencies do not reduce fuel costs enough to compensate for the high capital costs of advanced concepts. The successful reduction of capital costs would appear to be the key to the introduction of the new systems. Capital costs account for a considerable, often dominant, part of the cost of electricity in these concepts. All of the systems have higher specific investment costs than the conventional industrial alternative, i.e. the Rankine power plant. Combined heat and power production (CHP) is currently the only industrial area of application in which bio-power costs can be reduced enough to make them competitive. Based on the results of this work, AspenPlus is an appropriate simulation platform. However, the usefulness of the models could be improved if a number of unit operations were modelled in greater detail. The dryer, gasifier, fast pyrolysis, gas engine and gas turbine models could be improved.
Abstract:
Current methods for retrieving near-surface winds from scatterometer observations over the ocean surface require a forward sensor model which maps the wind vector to the measured backscatter. This paper develops a hybrid neural network forward model, which retains the physical understanding embodied in CMOD4, but incorporates greater flexibility, allowing a better fit to the observations. By introducing a separate model for the midbeam and using a common model for the fore and aft beams, we show a significant improvement in local wind vector retrieval. The hybrid model also fits the scatterometer observations more closely. The model is trained in a Bayesian framework, accounting for the noise on the wind vector inputs. We show that adding more high wind speed observations in the training set improves wind vector retrieval at high wind speeds without compromising performance at medium or low wind speeds. Copyright 2001 by the American Geophysical Union.
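As a rough illustration of the "hybrid" structure (not the actual CMOD4-based model of the paper), the sketch below combines a schematic harmonic dependence on the wind direction relative to the antenna azimuth with a small neural network that supplies the harmonic coefficients as a function of wind speed and incidence angle. The network sizes, weights, scalings and the harmonic form are illustrative placeholders; training against collocated observations, and the Bayesian treatment of noisy wind-vector inputs, are omitted.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer network with tanh units; returns three coefficients."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)) * 0.1, np.array([-2.0, 0.1, 0.1])  # untrained

def forward_model(speed, direction, incidence, azimuth):
    """Predict backscatter (sigma0, linear units) for one beam."""
    coeffs = mlp(np.array([speed / 25.0, incidence / 60.0]), W1, b1, W2, b2)
    b0, c1, c2 = 10.0 ** coeffs[0], coeffs[1], coeffs[2]
    chi = np.deg2rad(direction - azimuth)      # wind relative to antenna look
    return b0 * (1.0 + c1 * np.cos(chi) + c2 * np.cos(2.0 * chi))

# Example call: a 12 m/s wind blowing 45 degrees from the antenna look direction.
print(forward_model(speed=12.0, direction=45.0, incidence=40.0, azimuth=0.0))
```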
Abstract:
Accurate prediction of shellside pressure drop in a baffled shell-and-tube heat exchanger is very difficult because of the complicated shellside geometry. Ideally, all the shellside fluid should be alternately deflected across the tube bundle as it traverses from inlet to outlet. In practice, up to 60% of the shellside fluid may bypass the tube bundle or leak through the baffles. This short-circuiting of the main flow reduces the efficiency of the exchanger. Of the various shellside methods, it is shown that only the multi-stream methods, which attempt to obtain the shellside flow distribution, predict the pressure drop with any degree of accuracy, the various predictions ranging from -30% to +70% and generally overpredicting. It is shown that the inaccuracies are mainly due to the manner in which baffle leakage is modelled. The present multi-stream methods do not allow for interactions of the various flowstreams, yet three main effects are identified: (a) there is a strong interaction between the main crossflow and the baffle leakage streams, enhancing the crossflow pressure drop; (b) there is a further short-circuit not considered previously, i.e. leakage in the window; and (c) the crossflow does not penetrate as far, on average, as previously supposed. Models are developed for each of these three effects, along with a new windowflow pressure drop model, and it is shown that the effect of baffle leakage in the window is the most significant. These models, developed to allow for the various interactions, lead to an improved multi-stream method, named the "STREAM-INTERACTION" method. The overall method is shown to be consistently more accurate than previous methods, with virtually all the available shellside data being predicted to within ±30% and over 60% being within ±20%. The method is, thus, strongly recommended for use as a design method.
Abstract:
This work is concerned with the nature of liquid flow across industrial sieve trays operating in the spray, mixed, and emulsified flow regimes. In order to overcome the practical difficulties of removing many samples from a commercial tray, the mass transfer process was investigated in an air-water simulator column by heat transfer analogy. The temperature of the warm water was measured by many thermocouples as the water flowed across the single-pass 1.2 m diameter sieve tray. The thermocouples were linked to a minicomputer for the storage of the data. The temperature data were then transferred to a mainframe computer to generate temperature profiles, analogous to concentration profiles. A comprehensive study of the existing tray efficiency models was carried out using computerised numerical solutions. The calculated results were compared with experimental results published by Fractionation Research Inc. (FRI); the existing models did not show agreement with the experimental results, and only the Porter and Lockett model showed a reasonable agreement for certain tray efficiency values. A rectangular active-section tray was constructed and tested to establish the channelling effect and its consequences for circular tray designs. The developed flow patterns showed predominantly flat profiles and some indication of significant liquid flow through the central region of the tray. This confirms that the rectangular tray configuration might not be a satisfactory solution for liquid maldistribution on sieve trays. For a typical industrial tray, the flow of liquid as it crosses the tray from the inlet to the outlet weir could be affected by the mixing of liquid by eddies, momentum and the weir shape, in the axial or the transverse direction or both. Conventional U-shaped profiles were developed when the operating conditions were such that the froth dispersion was in the mixed regime, with good liquid temperature distribution in the spray regime. For the 12.5 mm hole diameter tray the constant temperature profiles were found to be in the axial direction in the spray regime, and in the transverse direction for the 4.5 mm hole tray. It was observed that the extent of the liquid stagnant zones at the sides of the tray depended on the tray hole diameter and was larger for the 4.5 mm hole tray. The liquid hold-up results show a high liquid hold-up in the areas of the tray with low liquid temperatures; this supports the doubts about the assumption of constant point efficiency across an operating tray. Liquid flow over the outlet weir showed more liquid flow at the centre of the tray at high liquid loading, with low liquid flow at both ends of the weir. The calculated results of the point and tray efficiency model showed a general increase in the calculated point and tray efficiencies with an increase in the weir loading: as the flow regime changed from the spray to the mixed regime, the point and tray efficiencies increased from approximately 30 to 80%. Through the mixed flow regime the efficiencies were found to remain fairly constant, and as the operating conditions were changed to maintain an emulsified flow regime there was a decrease in the resulting efficiencies. The results of the estimated coefficient of mixing for the small and large hole diameter trays show that the extent of liquid mixing on an operating tray generally increased with increasing capacity factor, but decreased with increasing weir loads. This demonstrates that, above certain weir loads, the effect of the eddy diffusion mechanism on liquid mixing on an operating tray is negligible.
Abstract:
This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the European Union (EU) over a 21-year period (not all member states of the EU could be included due to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was to provide an understanding of how telecoms in the EU have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research. The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed, alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted a mainly proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of the two. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of both approaches.
Abstract:
This thesis describes the procedure and results from four years of research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique, VERT (Venture Evaluation and Review Technique), was used to model the pre-tender costs of public health, heating, ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data were updated and adjusted using mechanical and electrical pre-tender cost indices and factors for location, selection of contractor, contract sum, height and site conditions. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer-generated output took the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From these data, alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost-significant items were isolated for closer examination. The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
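A minimal sketch of the kind of Monte Carlo cost simulation such a VERT model relies on is shown below: each services element is assigned a cost distribution (constant, uniform, normal or beta over a range), the elements are summed per trial, and the resulting total is summarised as a cumulative frequency distribution. The element names, ranges and distribution choices are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulation trials

def beta_between(lo, hi, a, b, size):
    """Beta-distributed cost rescaled onto the range [lo, hi]."""
    return lo + (hi - lo) * rng.beta(a, b, size)

elements = {                                        # cost per trial, in £k
    "public health":    rng.uniform(80, 120, N),
    "heating/vent/AC":  rng.normal(450, 40, N),
    "fire protection":  beta_between(60, 110, a=2, b=5, size=N),
    "lifts":            np.full(N, 150.0),          # treated as a constant
    "electrical":       rng.normal(300, 30, N),
}

total = sum(elements.values())

# Cumulative frequency summary of the total services cost.
for p in (10, 50, 90):
    print(f"{p}th percentile of total cost: £{np.percentile(total, p):,.0f}k")
print(f"P(total > £1,100k) = {np.mean(total > 1100):.2f}")
```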
Abstract:
High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the development of the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general-purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and to couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet size. The second analysis deals with the full water-cooling numerical model, which can assist in thermal performance optimisation or in determining the best method for heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. Therefore, one of the objectives of this study is to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance/substrate surface roughness.
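To illustrate the Lagrangian (discrete phase) particle tracking idea in the simplest possible form, the sketch below integrates a single droplet's velocity as it relaxes towards the local gas velocity with a Stokes-drag response time. The gas velocity field, droplet properties and time step are placeholders; the full CFD model also couples evaporation, combustion and non-Stokes drag laws, none of which is represented here.

```python
import numpy as np

rho_p = 800.0      # droplet density, kg/m^3 (kerosene-like, illustrative)
d     = 50e-6      # droplet diameter, m
mu_g  = 4e-5       # gas dynamic viscosity, Pa.s (hot gas, illustrative)
tau   = rho_p * d**2 / (18.0 * mu_g)   # Stokes particle response time, s

def gas_velocity(x):
    """Placeholder gas velocity field: axial jet decaying with distance."""
    return np.array([600.0 / (1.0 + 5.0 * x[0]), 0.0])

x = np.zeros(2)                 # droplet position, m
v = np.array([20.0, 0.0])       # droplet injection velocity, m/s
dt = 1e-6

for _ in range(2000):           # integrate 2 ms of flight
    u = gas_velocity(x)
    v += dt * (u - v) / tau     # Stokes drag acceleration
    x += dt * v

print(f"response time tau = {tau*1e3:.2f} ms, final speed = {v[0]:.1f} m/s, "
      f"distance travelled = {x[0]*100:.1f} cm")
```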
Abstract:
Existing theories of semantic cognition propose models of cognitive processing occurring in a conceptual space, where ‘meaning’ is derived from the spatial relationships between concepts’ mapped locations within the space. Information visualisation is a growing area of research within the field of information retrieval, and methods for presenting database contents visually in the form of spatial data management systems (SDMSs) are being developed. This thesis combined these two areas of research to investigate the benefits associated with employing spatial-semantic mapping (documents represented as objects in two- and three-dimensional virtual environments are mapped in proximity to one another according to the semantic similarity of their content) as a tool for improving retrieval performance and navigational efficiency when browsing for information within such systems. Positive effects associated with the quality of document mapping were observed; improved retrieval performance and browsing behaviour were witnessed when mapping was optimal. It was also shown that using a third dimension for virtual environment (VE) presentation provides sufficient additional information regarding the semantic structure of the environment that performance is increased in comparison with using two dimensions for mapping. A model that describes the relationship between retrieval performance and browsing behaviour was proposed on the basis of these findings. Individual differences were not found to have any observable influence on retrieval performance or browsing behaviour when mapping quality was good. The findings from this work have implications both for cognitive modelling of semantic information and for designing and testing information visualisation systems. These implications are discussed in the conclusions of this work.
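One simple way to produce such a spatial-semantic layout, sketched below, is to represent documents as term-count vectors, take cosine dissimilarity as the "semantic distance", and embed the documents in two or three dimensions with classical multidimensional scaling so that similar documents sit close together in the virtual environment. The toy corpus and the choice of classical MDS are illustrative assumptions; the SDMS studied in the thesis may use different representations and mapping algorithms.

```python
import numpy as np

docs = [
    "neural network model training error",
    "network training data error model",
    "heat exchanger pressure drop baffle",
    "shellside pressure drop heat exchanger",
]

vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

norms = np.linalg.norm(counts, axis=1, keepdims=True)
cos_sim = (counts / norms) @ (counts / norms).T
D = 1.0 - cos_sim                               # cosine dissimilarity matrix

def classical_mds(D, dims):
    """Embed a dissimilarity matrix in `dims` dimensions (classical MDS)."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n         # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                 # double-centred squared distances
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dims]       # largest eigenvalues first
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

coords_2d = classical_mds(D, dims=2)            # positions for a 2-D layout
coords_3d = classical_mds(D, dims=3)            # positions for a 3-D VE
print(np.round(coords_2d, 3))
```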
Abstract:
The principles of High Performance Liquid Chromatography (HPLC) and pharmacokinetics were applied to the use of several clinically-important drugs at the East Birmingham Hospital. Amongst these was gentamicin, which was investigated over a two-year period by a multi-disciplinary team. It was found that there was considerable intra- and inter-patient variation that had not previously been reported and the causes and consequences of such variation were considered. A detailed evaluation of available pharmacokinetic techniques was undertaken and 1- and 2-compartment models were optimised with regard to sampling procedures, analytical error and model-error. The implications for control of therapy are discussed and an improved sampling regime is proposed for routine usage. Similar techniques were applied to trimethoprim, assayed by HPLC, in patients with normal renal function and investigations were also commenced into the penetration of drug into peritoneal dialysate. Novel assay techniques were also developed for a range of drugs including 4-aminopyridine, chloramphenicol, metronidazole and a series of penicillins and cephalosporins. Stability studies on cysteamine, reaction-rate studies on creatinine-picrate and structure-activity relationships in HPLC of aminopyridines are also reported.
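A minimal sketch of the kind of one-compartment pharmacokinetic calculation used when individualising gentamicin dosing is given below: the elimination rate constant and apparent volume of distribution are estimated from a measured peak and trough, and the concentration-time curve is then predicted. All numbers are invented for illustration, not patient data, and the 2-compartment model and error analyses discussed in the thesis are not represented.

```python
import numpy as np

dose      = 120.0   # mg, given as a short IV infusion (treated as a bolus here)
t_peak    = 1.0     # h after the dose, first sample
t_trough  = 8.0     # h after the dose, second sample
c_peak    = 7.5     # mg/L, measured
c_trough  = 1.2     # mg/L, measured

# Elimination rate constant from two points on a mono-exponential decline.
k = np.log(c_peak / c_trough) / (t_trough - t_peak)
half_life = np.log(2.0) / k

# Back-extrapolate to t = 0 and estimate the apparent volume of distribution.
c0 = c_peak * np.exp(k * t_peak)
V = dose / c0

print(f"k = {k:.3f} 1/h, half-life = {half_life:.1f} h, V = {V:.1f} L")

def concentration(t):
    """Predicted concentration (mg/L) at time t hours after the dose."""
    return c0 * np.exp(-k * t)

print(f"predicted level 12 h post-dose: {concentration(12.0):.2f} mg/L")
```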
Abstract:
The work presented in this thesis is concerned with the dynamic behaviour of structural joints which are both loaded, and excited, normal to the joint interface. Since the forces on joints are transmitted through their interface, the surface texture of joints was carefully examined. A computerised surface measuring system was developed and computer programs were written. Surface flatness was functionally defined, measured and quantised into a form suitable for the theoretical calculation of the joint stiffness. Dynamic stiffness and damping were measured at various preloads for a range of joints with different surface textures. Dry, clean and lubricated joints were tested and the results indicated an increase in damping for the lubricated joints of between 30 and 100 times. A theoretical model for the computation of the stiffness of dry, clean joints was built. The model is based on the theory that the elastic recovery of joints is due to the recovery of the material behind the loaded asperities. It takes into account, in a quantitative manner, the flatness deviations present on the surfaces of the joint. The theoretical results were found to be in good agreement with those measured experimentally. It was also found that theoretical assessment of the joint stiffness could be carried out using a different model based on the recovery of loaded asperities into a spherical form. Stepwise procedures are given in order to design a joint having a particular stiffness. A theoretical model for the loss factor of dry, clean joints was built. The theoretical results are in reasonable agreement with those experimentally measured. The theoretical models for the stiffness and loss factor were employed to evaluate the second natural frequency of the test rig. The results are in good agreement with the experimentally measured natural frequencies.
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment through to an accurate representation of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models are inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages of design. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design. Field studies are presented to describe a method by which typologies can be analysed, and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings on environmental design.
Abstract:
This thesis proposes that despite many experimental studies of thinking, and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for the direct improvement of those strategies. Five hundred and twenty-three first-, second- and third-year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p.179), by the P/W Visual/Verbal Questionnaire, measuring 'applied imagery' and inner verbalising. Three lines of investigation were pursued: 1. an investigation (a) of hypothetical representational strategy differences between boys and girls, and (b) of the extent to which strategies change with increasing age; 2. the second- and third-year children's use of representational processes, taken separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept; 3. the second- and third-year children categorised into four dual-coding strategy groups, (a) High Visual/High Verbal, (b) Low Visual/High Verbal, (c) High Visual/Low Verbal and (d) Low Visual/Low Verbal, with these groups compared on the same performance measures. The main result indicates that a hierarchy of dual-coding strategy use can be identified that is significantly related (.01, binomial test) to success or failure on the performance measures: the High Visual/High Verbal group registering the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups registering intermediate scores, and the Low Visual/Low Verbal group registering the lowest scores on the performance measures. Subsidiary results indicate that boys' use of visual strategies declines, and of verbal strategies increases, with age; girls' use of recall imagery strategies increases with age. Educational implications of the main result are discussed, the establishment of experimental curricula proposed, and further research suggested.
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
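For context, the sketch below shows the standard global-best PSO update that variants such as the proposed dispersive PSO build on: each particle is pulled towards its own best position and the swarm's best position, and premature convergence occurs when the swarm collapses onto one region too early. The inertia weight and acceleration coefficients are common textbook values, the objective is a toy sphere function, and the dispersive modification itself is not reproduced here.

```python
import numpy as np

def sphere(x):
    """Simple test objective: minimum 0 at the origin."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(3)
n_particles, dim, iters = 30, 10, 200
w, c1, c2 = 0.72, 1.49, 1.49               # inertia and acceleration weights

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val              # update personal bests
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best value after {iters} iterations: {pbest_val.min():.2e}")
```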