888 results for Web modelling methods
Abstract:
This accessible, practice-oriented and compact text provides a hands-on introduction to the principles of market research. Using the market research process as a framework, the authors explain how to collect and describe the necessary data and present the most important and frequently used quantitative analysis techniques, such as ANOVA, regression analysis, factor analysis, and cluster analysis. An explanation is provided of the theoretical choices a market researcher has to make with regard to each technique, as well as how these are translated into actions in IBM SPSS Statistics. This includes a discussion of what the outputs mean and how they should be interpreted from a market research perspective. Each chapter concludes with a case study that illustrates the process based on real-world data. A comprehensive web appendix includes additional analysis techniques, datasets, video files and case studies. Several mobile tags in the text allow readers to quickly browse related web content using a mobile device.
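As a hedged illustration of the techniques this text covers (not taken from the book, which works in IBM SPSS Statistics), the following Python sketch runs a regression analysis and a cluster analysis on synthetic data; the variables `price`, `promo`, `sales` and the attitude scores are invented for the example.

```python
# Minimal sketch (not from the book, which uses IBM SPSS Statistics): regression and
# cluster analysis on synthetic data; variable names are invented for the example.
import numpy as np
import statsmodels.api as sm
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
price = rng.uniform(1, 10, 200)                       # hypothetical predictor
promo = rng.integers(0, 2, 200)                       # hypothetical predictor
sales = 50 - 3 * price + 8 * promo + rng.normal(0, 2, 200)

# Regression analysis: coefficients, t-tests and R^2, as one would read off SPSS output
X = sm.add_constant(np.column_stack([price, promo]))
print(sm.OLS(sales, X).fit().summary())

# Cluster analysis: segment respondents on two hypothetical attitude scores
scores = rng.normal(size=(200, 2))
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(segments))
```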
Abstract:
This work reports the development of a mathematical model and distributed, multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained tracked the experimental results of the plant well. A distributed-computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method directly treats the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be easily solved.
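To make one of the listed operations concrete, here is a minimal numerical sketch of scalar spectral factorization. The thesis works with polynomial matrices over an integral domain; this scalar, floating-point version and its example polynomial are only an illustration, not the thesis's algorithms.

```python
import numpy as np

def spectral_factor(phi):
    """Scalar spectral factorization: given an even polynomial
    phi(s) = P(s) * P(-s) that is nonnegative on the imaginary axis,
    return the coefficients of the stable (Hurwitz) factor P(s).
    Coefficients are ordered highest degree first, as np.roots expects."""
    roots = np.roots(phi)
    stable = roots[roots.real < 0]                  # keep left-half-plane roots
    P = np.real(np.poly(stable))                    # monic polynomial with those roots
    n = len(P) - 1
    P_minus = P * (-1.0) ** np.arange(n, -1, -1)    # coefficients of P(-s)
    gain = phi[0] / np.polymul(P, P_minus)[0]       # fix gain so P(s)P(-s) == phi(s)
    return np.sqrt(gain) * P

# Example: phi(s) = (1 - s^2)(4 - s^2) = s^4 - 5 s^2 + 4
phi = np.array([1.0, 0.0, -5.0, 0.0, 4.0])
print(spectral_factor(phi))                         # ~ [1, 3, 2], i.e. P(s) = (s+1)(s+2)
```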
Abstract:
Financial prediction has attracted a lot of interest because of the financial implications that accurate prediction of financial markets can have. A variety of data-driven modelling approaches have been applied, but their performance has produced mixed results. In this study we apply both parametric (neural networks with active neurons) and nonparametric (analog complexing) self-organising modelling methods to the daily prediction of the exchange rate market. We also propose a combined approach in which the parametric and nonparametric self-organising methods are combined sequentially, exploiting the advantages of the individual methods with the aim of improving their performance. The combined method is found to produce promising results and to outperform the individual methods when tested on two exchange rates: the US Dollar and the Deutsche Mark against the British Pound.
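A toy sketch of the sequential-combination idea, assuming a synthetic exchange-rate series: a nearest-pattern ("analog"-style) forecast is corrected by a small linear model fitted on past one-step forecasts. This stands in for, and does not reproduce, the study's GMDH networks with active neurons and analog complexing.

```python
import numpy as np

def analog_forecast(x, t, window=5, k=3):
    """Average the values that followed the k past windows most similar to the
    window ending just before index t (a crude nonparametric analog forecast)."""
    target = x[t - window:t]
    dists = [np.linalg.norm(x[s - window:s] - target) for s in range(window, t - 1)]
    nxt = np.array([x[s] for s in range(window, t - 1)])
    return float(nxt[np.argsort(dists)[:k]].mean())

rng = np.random.default_rng(1)
rate = np.cumsum(rng.normal(0, 0.005, 600)) + 1.6     # synthetic daily exchange rate

# "Parametric" step: a linear model that corrects the analog forecast using the
# previous observation, fitted over a window of past one-step-ahead forecasts.
train = range(300, 599)
A = np.array([[analog_forecast(rate, t), rate[t - 1], 1.0] for t in train])
y = rate[list(train)]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

t = 599
features = np.array([analog_forecast(rate, t), rate[t - 1], 1.0])
print("analog:", features[0], "combined:", features @ coef, "actual:", rate[t])
```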
Abstract:
Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant-speed drag tests were performed using commercial rigid disks and two different air-bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural network modelling was used to substantiate the results. Topographical AFM scans of disk and slider were exported numerically to file and explored extensively. Techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans; this allowed contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affect contacts and hence friction, stiction and wear.
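A minimal sketch of the contact-detection idea, assuming a synthetic height map in place of a real AFM scan: points whose asperity height exceeds the nominal clearance are counted as contacts. The method actually deduced in the thesis is not reproduced here.

```python
import numpy as np

def contact_fraction(height_map, clearance):
    """Fraction of surface points whose asperity height (relative to the mean
    plane) exceeds the nominal clearance: a crude contact-detection rule."""
    asperities = height_map - height_map.mean()
    return np.mean(asperities > clearance)

rng = np.random.default_rng(2)
disk = rng.normal(0.0, 2.0e-9, (256, 256))      # synthetic 256x256 "scan", heights in metres
for h in (2e-9, 5e-9, 10e-9):
    print(f"clearance {h * 1e9:.0f} nm -> contact fraction {contact_fraction(disk, h):.4f}")
```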
Abstract:
The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method described uses the machine geometry and the magnetic characteristics of the iron types used in the machine. From this information the permeance of each iron component of the machine is calculated and, by using the equivalent magnetic circuit of the machine, the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor in addition to the static torque.
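A minimal sketch, not the thesis's permeance-based model: for an unsaturated variable-reluctance phase the static torque can be taken from the co-energy as T(theta) = 0.5 * i^2 * dL/dtheta; the inductance profile and numbers below are invented.

```python
import numpy as np

def static_torque(theta, current, L0=0.02, L1=0.008, rotor_teeth=50):
    """Static torque (N*m) for a sinusoidal inductance variation
    L(theta) = L0 + L1*cos(Nr*theta), ignoring magnetic saturation."""
    dL_dtheta = -L1 * rotor_teeth * np.sin(rotor_teeth * theta)
    return 0.5 * current ** 2 * dL_dtheta

theta = np.linspace(0, 2 * np.pi / 50, 100)          # one rotor tooth pitch
print("peak static torque:", static_torque(theta, current=2.0).max(), "N*m")
```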
Abstract:
The research carried out in this thesis was mainly concerned with the effects of large induction motors and their transient performance in power systems. Computer packages using the three-phase co-ordinate frame of reference were developed to simulate induction motor transient performance. A technique using matrix algebra was developed to allow extension of the three-phase co-ordinate method to analyse asymmetrical and symmetrical faults on both sides of the three-phase delta-star transformer which is usually required when connecting large induction motors to the supply system. System simulation, applying these two techniques, was used to study the transient stability of a power system. The response of a typical system, loaded with a group of large induction motors, two three-phase delta-star transformers, a synchronous generator and an infinite system, was analysed. The computer software developed to study this system has the advantage that different types of fault at different locations can be studied by simple changes in input data. The research also involved investigating the possibility of using different integration routines such as Runge-Kutta-Gill, Runge-Kutta-Fehlberg and the Predictor-Corrector methods. The investigation enabled a reduction in computation time, which is necessary when solving the induction motor equations expressed in terms of the three-phase variables. The outcome of this investigation was utilised in analysing an introductory model (containing only minimal control action) of an isolated system having a significant induction motor load compared to the size of the generator energising the system.
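An illustrative comparison of integration routines of the kind the thesis investigates, applied here to a simple test equation rather than the three-phase induction motor model: a fixed-step classical Runge-Kutta integrator against SciPy's adaptive RK45 (an embedded Runge-Kutta pair, comparable in spirit to Runge-Kutta-Fehlberg).

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # A stiff-ish test equation standing in for the motor equations
    return -50.0 * (y - np.cos(t))

def rk4(f, y0, t0, t1, n):
    """Classical fixed-step fourth-order Runge-Kutta."""
    t, y, h = t0, y0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

print("fixed-step RK4:", rk4(f, 0.0, 0.0, 1.0, 200))
print("adaptive RK45 :", solve_ivp(f, (0.0, 1.0), [0.0], method="RK45", rtol=1e-8).y[0, -1])
```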
Abstract:
The purpose of the work described here has been to seek methods of narrowing the present gap between currently realised heat pump performance and the theoretical limit. The single most important prerequisite to this objective is the identification and quantitative assessment of the various non-idealities and degradative phenomena responsible for the present shortfall. The use of availability analysis has been introduced as a diagnostic tool and applied to a few very simple, highly idealised Rankine cycle optimisation problems. From this work, it has been demonstrated that the scope for improvement through optimisation is small in comparison with the extensive potential for improvement by reducing the compressor's losses. A fully instrumented heat pump was assembled and extensively tested. This furnished performance data and led to an improved understanding of the system's behaviour. From a very simple analysis of the resulting compressor performance data, confirmation of the compressor's low efficiency was obtained. In addition, in order to obtain experimental data concerning specific details of the heat pump's operation, several novel experiments were performed. The experimental work was concluded with a set of tests which attempted to obtain definitive performance data for a small set of discrete operating conditions. These tests included an investigation of the effect of two compressor modifications. The resulting performance data were analysed by a sophisticated calculation which used the measurements to quantify each degradative phenomenon occurring in the compressor, and so indicate where the greatest potential for improvement lies. Finally, in the light of everything that was learnt, specific technical suggestions have been made to reduce the losses associated with both the refrigerant circuit and the compressor.
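A one-line illustration of the availability-analysis idea applied to the compressor (the numbers are hypothetical, not from the thesis): for an adiabatic compressor the rate of lost work is the ambient temperature times the rate of entropy generation, the Gouy-Stodola relation.

```python
def compressor_lost_work(T0, s_in, s_out, m_dot):
    """Rate of availability destruction (W) for an adiabatic compressor:
    m_dot * T0 * (s_out - s_in), with entropies in J/(kg K)."""
    return m_dot * T0 * (s_out - s_in)

# Hypothetical numbers: 0.02 kg/s of refrigerant, entropy rise 35 J/(kg K), T0 = 288 K
print(compressor_lost_work(T0=288.0, s_in=1700.0, s_out=1735.0, m_dot=0.02), "W")
```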
Abstract:
Experimental investigations and computer modelling studies have been made on the refrigerant-water counterflow condenser section of a small air-to-water heat pump. The main objective of the investigation was a comparative study between the computer modelling predictions and the experimental observations for a range of operating conditions, but other characteristics of a counterflow heat exchanger are also discussed. The counterflow condenser consisted of 15 metres of a thermally coupled pair of copper pipes, one containing the R12 working fluid and the other water flowing in the opposite direction. This condenser was mounted horizontally and folded into 0.5 metre straight sections. Thermocouples were inserted in both pipes at one metre intervals, and transducers for pressure and flow measurement were also included. Data acquisition, storage and analysis were carried out by a micro-computer suitably interfaced with the transducers and thermocouples. Many sets of readings were taken under a variety of conditions, with air temperature ranging from 18 to 26 degrees Celsius, water inlet temperature from 13.5 to 21.7 degrees, R12 inlet temperature from 61.2 to 81.7 degrees and water mass flow rate from 6.7 to 32.9 grammes per second. A Fortran computer model of the condenser (originally prepared by Carrington [1]) has been modified to match the information available from the experimental work. This program uses iterative segmental integration over the desuperheating, mixed-phase and subcooled regions for the R12 working fluid, the water always being in the liquid phase. Methods of estimating the inlet and exit fluid conditions from the available experimental data have been developed for application to the model. Temperature profiles and other parameters have been predicted and compared with experimental values for the condenser over a range of evaporator conditions, and show that the model gives a satisfactory prediction of the physical behaviour of a simple counterflow heat exchanger in both single-phase and two-phase regions.
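A much-simplified segmental sketch for a counterflow exchanger with single-phase streams on both sides (the thesis's model also handles desuperheating and two-phase R12 and iterates on refrigerant properties); the UA value and heat-capacity rates below are invented.

```python
import numpy as np

def counterflow_profiles(Th_in, Tc_in, UA, Ch, Cc, n=150):
    """March segment by segment from the hot-inlet end of a counterflow exchanger.
    Outlet temperatures are first fixed from the effectiveness-NTU relations."""
    Cmin, Cmax = min(Ch, Cc), max(Ch, Cc)
    Cr, NTU = Cmin / Cmax, UA / Cmin
    eff = (1 - np.exp(-NTU * (1 - Cr))) / (1 - Cr * np.exp(-NTU * (1 - Cr)))
    q_tot = eff * Cmin * (Th_in - Tc_in)
    Tc_out = Tc_in + q_tot / Cc            # cold stream leaves at the hot-inlet end
    Th, Tc = [Th_in], [Tc_out]
    for _ in range(n):
        q = (UA / n) * (Th[-1] - Tc[-1])   # heat exchanged in one segment
        Th.append(Th[-1] - q / Ch)
        Tc.append(Tc[-1] - q / Cc)         # marching against the cold-flow direction
    return np.array(Th), np.array(Tc)

Th, Tc = counterflow_profiles(Th_in=75.0, Tc_in=15.0, UA=120.0, Ch=30.0, Cc=60.0)
print(f"hot outlet {Th[-1]:.1f} C, cold outlet {Tc[0]:.1f} C")
```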
Abstract:
This thesis presents a theoretical investigation of the application of advanced modulation formats in high-speed fibre lightwave systems. The first part of this work focuses on numerical optimisation of dense wavelength division multiplexing (DWDM) system design. We employ advanced spectral-domain filtering techniques and carrier pulse reshaping. We then apply these optimisation methods to investigate the spectral- and temporal-domain characteristics of advanced modulation formats in fibre-optic telecommunication systems. Next we investigate numerical methods used in detecting and measuring the system performance of advanced modulation formats. We then numerically study the combination of return-to-zero differential phase-shift keying (RZ-DPSK) with advanced photonic devices. Finally we analyse the dispersion management of Nx40 Gbit/s RZ-DPSK transmission applied to a commercial terrestrial lightwave system.
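As a hedged illustration of the kind of numerical engine such system studies rest on (the abstract does not state which propagation method the thesis uses), here is a bare-bones split-step Fourier propagation of a single pulse through dispersive, nonlinear fibre; the fibre parameters are typical textbook values, not the thesis's.

```python
import numpy as np

def ssfm(field, dt, dz, nz, beta2=-21.7e-27, gamma=1.3e-3, alpha=0.0):
    """Split-step Fourier propagation of a complex field envelope over nz steps of dz (m)."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)
    lin = np.exp((1j * beta2 / 2 * w ** 2 - alpha / 2) * dz)     # dispersion + loss per step
    for _ in range(nz):
        field = np.fft.ifft(np.fft.fft(field) * lin)             # linear step
        field = field * np.exp(1j * gamma * np.abs(field) ** 2 * dz)  # Kerr nonlinearity
    return field

t = np.linspace(-50e-12, 50e-12, 1024, endpoint=False)
pulse = np.sqrt(1e-3) * np.exp(-t ** 2 / (2 * (5e-12) ** 2))     # 5 ps Gaussian, ~1 mW peak
out = ssfm(pulse, dt=t[1] - t[0], dz=100.0, nz=50)               # 5 km in 100 m steps
print("input peak %.3e W, output peak %.3e W" % (np.abs(pulse).max() ** 2, np.abs(out).max() ** 2))
```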
Abstract:
This is a study of heat transfer in a lift-off furnace employed in the batch annealing of a stack of coils of steel strip. The objective of the project is to investigate the various factors which govern the furnace design and the heat transfer resistances, so as to reduce the time of the annealing cycle and hence minimise the operating costs. The work involved mathematical modelling of the patterns of gas flow and the modes of heat transfer. These models are: heat conduction and its conjugate effects in the steel coils; convective heat transfer in the plates separating the coils in the stack and in other parts of the furnace; and radiative and convective heat transfer in the furnace, using the long furnace model. An important part of the project is the development of numerical methods and computations to solve the transient models. A limited number of temperature measurements was available from experiments on a test coil in an industrial furnace. The mathematical model agreed well with these data. The model has been used to show the following characteristics of annealing furnaces, and to suggest further developments which would lead to significant savings:
- The location of the limiting temperature in a coil is nearer to the hollow core than to the outer periphery.
- Thermal expansion of the steel tends to open the coils, reduces their thermal conductivity in the radial direction, and hence prolongs the annealing cycle. Increasing the tension in the coils and/or heating from the core would overcome this heat transfer resistance.
- The shape and dimensions of the convective channels in the plates have a significant effect on heat convection in the stack. An optimal channel design is shown to have a width-to-height ratio equal to 9.
- Increasing the cooling rate, by using a fluidised bed instead of the normal shell-and-tube exchanger, would shorten the cooling time by about 15% but increase the temperature differential in the stack.
- For a specific charge weight, a stack of different-sized coils will have a shorter annealing cycle than one of equally-sized coils, provided that production constraints allow the stacking order to be optimal.
- Recycling hot flue gases to the firing zone of the furnace would decrease the thermal efficiency by up to 30% but decrease the heating time by about 26%.
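A minimal sketch of a transient conduction calculation of the kind described above, assuming a crude one-dimensional radial model of a single coil with invented material and furnace parameters; it simply reports where the slowest-heating (limiting) radius sits.

```python
import numpy as np

def anneal_coil(r_in=0.25, r_out=0.90, k=2.0, rho_cp=3.6e6, T_furnace=700.0,
                T0=20.0, h=40.0, nr=60, dt=20.0, t_end=20 * 3600.0):
    """Explicit finite-difference transient radial conduction in an annular coil.
    k is the (reduced) radial conductivity; surfaces are heated convectively."""
    r = np.linspace(r_in, r_out, nr)
    dr = r[1] - r[0]
    alpha = k / rho_cp
    T = np.full(nr, T0)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        # interior nodes: cylindrical conduction, dT/dt = alpha * (T'' + T'/r)
        Tn[1:-1] = T[1:-1] + dt * alpha * (
            (T[2:] - 2 * T[1:-1] + T[:-2]) / dr ** 2
            + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))
        # surface nodes treated as half-cells with convective heating (core gas and furnace gas)
        Tn[0] = T[0] + dt * (2 * alpha * (T[1] - T[0]) / dr ** 2
                             + 2 * h * (T_furnace - T[0]) / (rho_cp * dr))
        Tn[-1] = T[-1] + dt * (2 * alpha * (T[-2] - T[-1]) / dr ** 2
                               + 2 * h * (T_furnace - T[-1]) / (rho_cp * dr))
        T = Tn
    return r, T

r, T = anneal_coil()
print("slowest-heating radius: %.2f m (core 0.25 m, rim 0.90 m); T there = %.0f C"
      % (r[np.argmin(T)], T.min()))
```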
Abstract:
It is generally assumed when using Bayesian inference methods for neural networks that the input data contain no noise. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise, provided that a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling it jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated on both a synthetic noisy sine-wave problem and a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea-surface wind vectors.
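A small numerical sketch of the Laplace-approximation result described above: in the small-noise limit the usual error bar gains an extra term g' Sigma_x g, where g is the gradient of the model output with respect to the input and Sigma_x is the input-noise covariance. The "network" and the numbers below are stand-ins, not the paper's model.

```python
import numpy as np

def predictive_variance(f, x, var_usual, sigma_x, eps=1e-5):
    """Usual predictive variance plus the input-noise term grad^T Sigma_x grad,
    with the gradient estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(x.size)])
    return var_usual + grad @ sigma_x @ grad

f = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2           # stand-in "network"
sigma_x = np.diag([0.05 ** 2, 0.02 ** 2])               # input-noise covariance
print(predictive_variance(f, [0.8, -0.3], var_usual=0.01, sigma_x=sigma_x))
```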
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. The requirements were (i) to use open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) to use a suitable environment for statistical modelling and computation, and (iii) to produce an open-source solution. The system couples the 52-North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
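This is not the INTAMAP back-end (which runs geostatistical models in R and reports uncertainty via UncertML); purely as an illustration of automatic interpolation of measured point data, here is a tiny inverse-distance-weighting sketch with invented observation locations.

```python
import numpy as np

def idw(obs_xy, obs_val, grid_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of point observations onto grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ obs_val) / w.sum(axis=1)

rng = np.random.default_rng(3)
obs_xy = rng.uniform(0, 10, (50, 2))                    # 50 measurement locations
obs_val = np.sin(obs_xy[:, 0]) + 0.1 * rng.normal(size=50)
grid_xy = np.array([[2.5, 7.0], [8.0, 1.0]])            # prediction points
print(idw(obs_xy, obs_val, grid_xy))
```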
Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert-systems approach. The thesis then proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique is semantically sound, i.e. whether the results obtained will be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
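A minimal sketch of the semantic-checking idea: variable metadata (here, measurement scale) is stored alongside the data, and a request to apply a technique is screened against that technique's requirements. The scales, techniques and rules below are illustrative, not the prototype's actual knowledge base.

```python
# Illustrative only: variable metadata and technique requirements are invented.
VARIABLES = {
    "region":  {"scale": "nominal"},
    "income":  {"scale": "interval"},
    "ranking": {"scale": "ordinal"},
}

REQUIREMENTS = {
    "pearson_correlation":  {"scale": {"interval"}},              # measure of association
    "spearman_correlation": {"scale": {"ordinal", "interval"}},   # measure of association
    "t_test":               {"scale": {"interval"}},              # test of location
    "chi_square":           {"scale": {"nominal", "ordinal", "interval"}},
}

def semantically_valid(technique, *variables):
    """Return True only if every variable's scale satisfies the technique's requirements."""
    allowed = REQUIREMENTS[technique]["scale"]
    return all(VARIABLES[v]["scale"] in allowed for v in variables)

print(semantically_valid("pearson_correlation", "income", "ranking"))   # False
print(semantically_valid("spearman_correlation", "income", "ranking"))  # True
```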
Abstract:
Many of the applications of geometric modelling are concerned with the computation of well-defined properties of the model. The applications which have received less attention are those which address questions to which there is no unique answer. This thesis describes such an application: the automatic production of a dimensioned engineering drawing. One distinctive feature of this operation is the requirement for sophisticated decision-making algorithms at each stage in the processing of the geometric model. Hence, the thesis is focussed upon the design, development and implementation of such algorithms. Various techniques for geometric modelling are briefly examined and details are then given of the modelling package that was developed for this project. The principles of orthographic projection and dimensioning are treated, and some published work on the theory of dimensioning is examined. A new theoretical approach to dimensioning is presented and discussed. The existing body of knowledge on decision-making is sampled, and the author then shows how methods which were originally developed for management decisions may be adapted to serve the purposes of this project. The remainder of the thesis is devoted to reports on the development of decision-making algorithms for orthographic view selection, sectioning and crosshatching, the preparation of orthographic views with essential hidden detail, and two approaches to the actual insertion of dimension lines and text. The thesis concludes that the theories of decision-making can be applied to work of this kind. It may be possible to generate computer solutions that are closer to the optimum than some man-made dimensioning schemes. Further work on important details is required before a commercially acceptable package could be produced.
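A hedged sketch of adapting a management-style weighted-criteria decision rule to orthographic view selection, in the spirit of what the abstract describes; the criteria, weights and candidate scores below are entirely invented.

```python
# Illustrative weighted-criteria scoring for candidate orthographic views.
CRITERIA = {
    "visible_features": 0.5,       # reward views that show many features directly
    "dimensioning_space": 0.3,     # reward room to place dimension lines and text
    "hidden_detail_needed": -0.2,  # penalise views that rely on hidden detail
}

candidate_views = {
    "front": {"visible_features": 14, "dimensioning_space": 8, "hidden_detail_needed": 2},
    "top":   {"visible_features": 10, "dimensioning_space": 9, "hidden_detail_needed": 1},
    "side":  {"visible_features": 7,  "dimensioning_space": 6, "hidden_detail_needed": 4},
}

def score(view):
    return sum(CRITERIA[c] * v for c, v in candidate_views[view].items())

best = max(candidate_views, key=score)
print({v: round(score(v), 2) for v in candidate_views}, "->", best)
```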
Abstract:
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic and prices therefore follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can try to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to dealing with data characterised in part by both paradigms. This thesis investigates these two approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach includes an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches could be combined to achieve a more realistic model for financial time series.
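On the stochastic side, a short sketch of estimating the unconditional return distribution both parametrically (a normal fit) and nonparametrically (a Gaussian kernel density estimate); the return series is synthetic, not market data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=2000) * 0.01      # synthetic fat-tailed daily returns

mu, sigma = returns.mean(), returns.std(ddof=1)       # parametric (normal) fit
kde = stats.gaussian_kde(returns)                      # nonparametric fit

x = -0.04                                              # a 4% daily loss
print("normal density:", stats.norm.pdf(x, mu, sigma))
print("KDE density   :", kde(x)[0])                    # fat tails usually give a larger value
```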