Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software that is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented, and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert-systems approach. The thesis then proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
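As a hedged illustration of the kind of semantic check described above, the sketch below tags each variable with a measurement scale and rejects a technique whose scale requirements are not met. The technique table, scale ordering and function names are hypothetical, not the prototype's actual design:

```python
from dataclasses import dataclass

@dataclass
class Variable:
    name: str
    scale: str  # 'nominal', 'ordinal', 'interval' or 'ratio'

# hypothetical requirement table: the weakest measurement scale
# each technique can meaningfully be applied to
REQUIRED_SCALE = {
    "pearson_correlation": "interval",
    "spearman_correlation": "ordinal",
    "t_test": "interval",
}
SCALE_ORDER = ["nominal", "ordinal", "interval", "ratio"]

def semantically_valid(technique: str, variables: list[Variable]) -> bool:
    """Check that every variable meets the technique's scale requirement."""
    needed = SCALE_ORDER.index(REQUIRED_SCALE[technique])
    return all(SCALE_ORDER.index(v.scale) >= needed for v in variables)

# a Pearson correlation requested on an ordinal variable is flagged as unsound
print(semantically_valid("pearson_correlation",
                         [Variable("grade", "ordinal"), Variable("age", "ratio")]))
```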
Abstract:
The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation. This technique provides the only method of accurately replicating the highly complex and dynamic interrelationships inherent within manufacturing facilities and realistically predicting system behaviour. Owing to the unique capabilities of computer simulation, its application should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient, in terms of data requirements and of both speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design. An underlying assumption of the proposal is that evaluation is quick, easy and allows models to expand in line with design developments. However, current approaches to computer simulation are ill suited to supporting such hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase, and a correspondingly high expenditure, resulting in very little, and rather inappropriate, use of the technique. Simulation, when used, is generally only applied to check or verify a final design proposal. Rarely is the full potential of computer simulation utilized to aid, support or complement the manufacturing system design procedure. To implement the proposed modelling procedure, therefore, the concept of a generic simulator was adopted, as such systems require no specialist expertise, instead facilitating quick and easy model creation, execution and modification through simple data inputs. Previously, generic simulators have tended to be too restricted, lacking the necessary flexibility to be generally applicable to manufacturing systems. Development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
Abstract:
A study of heat pump thermodynamic characteristics has been made in the laboratory on a specially designed and instrumented air-to-water heat pump system. The design, using refrigerant R12, was based on the requirement to produce domestic hot water at a temperature of about 50 °C, and the system was assembled in the laboratory. All the experimental data were fed to a microcomputer and stored on disk automatically from appropriate transducers via amplifiers and 16-channel analogue-to-digital converters. The measurements taken were R12 pressures and temperatures, water and R12 mass flow rates, air speed, fan and compressor input powers, water and air inlet and outlet temperatures, and wet and dry bulb temperatures. The time interval between observations could be varied. The results showed, as expected, that the COP was higher at higher air inlet temperatures and at lower hot water output temperatures. The optimum air speed was found to be the speed at which the fan input power was about 4% of the condenser heat output. It was also found that hot water can be produced at a temperature higher than the R12 condensing temperature corresponding to the condensing pressure. This was achieved by designing the condenser to take advantage of discharge superheat and by further heating the water using heat recovered from the compressor. Of the input power to the compressor, typically about 85% was transferred to the refrigerant, 50% by the compression work and 35% through heating of the refrigerant by the cylinder wall; the remaining 15% was rejected to the cooling medium. The evaporator effectiveness was found to be about 75% and sensitive to the air speed. Using the data collected, a steady-state computer model was developed. For given input conditions (air inlet temperature, air speed, degree of suction superheat, and water inlet and outlet temperatures), the model is capable of predicting the refrigerant cycle, compressor efficiency, evaporator effectiveness, condenser water flow rate and system COP.
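The energy split reported above lends itself to a small worked example. The sketch below applies the quoted percentages (85% to the refrigerant, split 50%/35%, with 15% rejected) and the 4% fan-power rule of thumb to illustrative input values; the wattages are assumptions, not measurements from the study:

```python
def system_cop(condenser_heat_w: float, compressor_in_w: float, fan_in_w: float) -> float:
    """System COP: useful condenser heat output over total electrical input."""
    return condenser_heat_w / (compressor_in_w + fan_in_w)

compressor_in = 1000.0                    # W, assumed example value
to_refrigerant = 0.85 * compressor_in     # 850 W reaches the refrigerant
compression_work = 0.50 * compressor_in   # 500 W as compression work
cylinder_wall = 0.35 * compressor_in      # 350 W via cylinder-wall heating
rejected = 0.15 * compressor_in           # 150 W lost to the cooling medium

condenser_heat = 3000.0                   # W, assumed example value
fan_in = 0.04 * condenser_heat            # fan power near the reported optimum
print(f"COP = {system_cop(condenser_heat, compressor_in, fan_in):.2f}")  # ~2.68
```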
Abstract:
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic, and therefore prices follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can attempt to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to dealing with data characterised in part by both paradigms. This thesis investigates both approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach here includes an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches can be combined to achieve a more realistic model for financial time series.
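As a sketch of the parametric versus non-parametric estimation contrasted above, the following compares a Gaussian fitted by moments with a kernel density estimate on a synthetic heavy-tailed return series; the data and parameters are stand-ins, not the thesis's datasets:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in return series: heavy-tailed, as daily log-returns often are
returns = stats.t.rvs(df=4, size=2000, random_state=rng) * 0.01

# parametric estimate: a single Gaussian fitted by moments
mu, sigma = returns.mean(), returns.std(ddof=1)

# non-parametric estimate: Gaussian kernel density
kde = stats.gaussian_kde(returns)

x = np.linspace(returns.min(), returns.max(), 400)
tails = np.abs(x - mu) > 3 * sigma
excess = kde(x)[tails].mean() - stats.norm.pdf(x, mu, sigma)[tails].mean()
print(f"extra tail density under the KDE: {excess:.3g}")  # positive for fat tails
```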
Abstract:
This thesis presents the results from an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
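One standard dynamical-systems tool implied by this approach is time-delay embedding, which reconstructs a state space from a single observed channel. A minimal sketch, with an assumed embedding dimension and lag, on a synthetic signal standing in for one unaveraged MEG channel:

```python
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Time-delay (Takens) embedding: map a scalar series to vectors
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] to reconstruct a state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# synthetic signal standing in for one unaveraged MEG channel
rng = np.random.default_rng(1)
t = np.arange(10_000) * 1e-3
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

Y = delay_embed(x, dim=5, tau=7)  # dimension and lag are assumed, not fitted
print(Y.shape)                    # (9972, 5)
```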
Abstract:
A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening-Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
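To give a flavour of such spectral-domain processing, the sketch below applies the flat-earth approximation of Stokes' relation, delta_g(k) = gamma * |k| * N(k), to convert a gridded geoid patch into gravity anomalies via the FFT. This is a simplified stand-in for the full Stokes and Vening-Meinesz transformations used in the thesis, and the grid values are synthetic:

```python
import numpy as np

def geoid_to_gravity(geoid: np.ndarray, dx: float, gamma: float = 9.81) -> np.ndarray:
    """Flat-earth spectral form of Stokes' relation: in the wavenumber domain
    delta_g(k) = gamma * |k| * N(k), with |k| the angular wavenumber.
    geoid: 2-D grid of geoid heights (m); dx: grid spacing (m); returns mGal."""
    ny, nx = geoid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    dg = np.fft.ifft2(gamma * k * np.fft.fft2(geoid)).real  # m/s^2
    return dg * 1e5                                         # 1 m/s^2 = 1e5 mGal

# synthetic 64 x 64 geoid patch on an assumed 5 km grid
rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1) * 0.01
print(f"anomaly std: {geoid_to_gravity(patch, dx=5000.0).std():.2f} mGal")
```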
River basin surveillance using remotely sensed data: a water resources information management system
Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point-source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive microcomputer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data and historical records. For the system to have any real potential, the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from microcomputers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.
Abstract:
Previous research has indicated that schematic eyes incorporating aspheric surfaces but lacking a gradient index are unable to model ocular spherical aberration and peripheral astigmatism simultaneously. This limits their use as wide-angle schematic eyes. This thesis challenges this assumption by investigating the flexibility of schematic eyes comprising aspheric optical surfaces and homogeneous optical media. The full variation of ocular component dimensions found in human eyes was established from the literature, and schematic eye parameter variants were limited to these dimensions. The levels of spherical aberration and peripheral astigmatism modelled by these schematic eyes were compared to the range of measured levels, also established from the literature. To simplify comparison of modelled and measured data, single-value parameters were introduced: the spherical aberration function (SAF) and the peripheral astigmatism function (PAF). Some ocular component variations produced a wide range of aberrations without exceeding the limits of human ocular components. The effect of ocular component variations on coma was also investigated, but no comparison could be made as no empirical data exist. It was demonstrated that by combined manipulation of a number of parameters in the schematic eyes it was possible to model all levels of ocular spherical aberration and peripheral astigmatism. However, the unique parameters of a human eye could not be obtained in this way, as a number of models could be used to produce the same spherical aberration and peripheral astigmatism while giving very different coma levels. It was concluded that these schematic eyes are flexible enough to model the monochromatic aberrations tested, the absence of a gradient index being compensated for by altering the asphericity of one or more surfaces.
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment to an accurate representation of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation, and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models are inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages of design. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design. Field studies are presented to describe a method by which typologies can be analysed, and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings on environmental design.
Abstract:
We present experimental studies and numerical modeling, based on a combination of the Bidirectional Beam Propagation Method and Finite Element Modeling, that completely describe the wavelength spectra of point-by-point femtosecond laser inscribed fiber Bragg gratings, showing excellent agreement with experiment. We have investigated the dependence of different spectral parameters, such as insertion loss and all dominant cladding and ghost modes, and their shape relative to the position of the fiber Bragg grating in the core of the fiber. Our model is validated by comparing model predictions with experimental data and allows for predictive modeling of the gratings. We expand our analysis to more complicated structures, where we introduce symmetry breaking; this highlights the importance of centered gratings and how maintaining symmetry contributes to the overall spectral quality of the inscribed Bragg gratings. Finally, the numerical modeling is applied to superstructure gratings, and a comparison with experimental results reveals a capability for dealing with complex grating structures that can be designed with particular wavelength characteristics.
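For orientation, the resonance condition underlying such spectra is the standard Bragg relation lambda_B = 2 * n_eff * Lambda, with cladding-mode resonances at shorter wavelengths. The index and period values below are assumptions for illustration, not the inscribed gratings of the study:

```python
# Bragg resonance of a uniform grating: lambda_B = 2 * n_eff * Lambda
n_eff = 1.447        # effective index of the core mode (assumed)
period = 535.5e-9    # grating period in metres (assumed)

print(f"Bragg wavelength: {2 * n_eff * period * 1e9:.1f} nm")   # ~1549.7 nm

# cladding-mode resonances sit at shorter wavelengths:
# lambda_i = (n_eff + n_clad_i) * Lambda, with n_clad_i < n_eff
for n_clad in (1.444, 1.440, 1.435):  # assumed cladding-mode indices
    print(f"cladding resonance: {(n_eff + n_clad) * period * 1e9:.1f} nm")
```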
Abstract:
This thesis reports the results of DEM (Discrete Element Method) simulations of rotating drums operated in a number of different flow regimes; DEM simulations of drum granulation have also been conducted. The aim was to demonstrate that a realistic simulation is possible and to further understanding of the particle motion and granulation processes in a rotating drum. The simulation model has shown good qualitative and quantitative agreement with other published experimental results. A two-dimensional bed of 5000 disc particles, with properties similar to glass, has been simulated in the rolling mode (Froude number 0.0076) with a fractional drum fill of approximately 30%. Particle velocity fields in the cascading layer, the bed cross-section, and at the drum wall have shown good agreement with experimental PEPT data. Particle avalanches in the cascading layer have been shown to be consistent with single layers of particles cascading down the free surface towards the drum wall. Particle slip at the drum wall has been shown to depend on angular position, and ranged from 20% at the toe and shoulder to less than 1% at the mid-point. Three-dimensional DEM simulations of a moderately cascading bed of 50,000 spherical elastic particles (Froude number 0.83) with a fractional fill of approximately 30% have also been performed. The drum axis was inclined at 5° to the horizontal, with periodic boundaries at the ends of the drum. The mean period of bed circulation was found to be 0.28 s. A liquid binder was added to the system using a spray model based on the concept of a wet surface energy. Granule formation and breakage processes have been demonstrated in the system.
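The Froude numbers quoted above follow from the standard rotating-drum definition Fr = omega^2 * R / g. A minimal sketch with assumed drum parameters; the regime boundaries are indicative only, since published limits vary with fill and friction:

```python
import numpy as np

def froude(rpm: float, radius_m: float, g: float = 9.81) -> float:
    """Rotating-drum Froude number Fr = omega^2 * R / g."""
    omega = 2 * np.pi * rpm / 60.0
    return omega ** 2 * radius_m / g

def regime(fr: float) -> str:
    # indicative boundaries only; published limits vary with fill and friction
    if fr < 1e-2:
        return "rolling"
    if fr < 1.0:
        return "cascading/cataracting"
    return "centrifuging"

for rpm in (5, 30, 120):                  # assumed speeds; R = 0.25 m assumed
    fr = froude(rpm, radius_m=0.25)
    print(f"{rpm:>3} rpm: Fr = {fr:.4f} ({regime(fr)})")
```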
Abstract:
This thesis investigates the modelling of drying processes for the promotion of market-led Demand Side Management (DSM) as applied to the UK Public Electricity Suppliers. A review of DSM in the electricity supply industry is provided, together with a discussion of the relevant drivers supporting market-led DSM and energy services (ES). The potential opportunities for ES in a fully deregulated energy market are outlined. It is suggested that targeted industrial-sector energy efficiency schemes offer significant opportunity for long-term customer and supplier benefit. At the process level, industrial drying is highlighted as offering significant scope for the application of energy services. Drying is an energy-intensive process used widely throughout industry. The results of an energy survey suggest that 17.7 per cent of total UK industrial energy use derives from drying processes. Comparison with published work indicates that energy use for drying shows an increasing trend against a background of reducing overall industrial energy use. Airless drying is highlighted as offering potential energy-saving and production benefits to industry. To this end, a comprehensive review of the novel airless drying technology and its background theory is made. The advantages and disadvantages of airless operation are defined, the limited market penetration of airless drying is identified, and the key opportunities for energy saving are set out. Little literature has been found which details the modelling of energy use for airless drying. A review of drying theory and previous modelling work is made in an attempt to model energy consumption for drying processes. The history of drying models is presented, as well as a discussion of the different approaches taken and their relative merits. The viability of deriving energy use from empirical drying data is examined. Adaptive neuro-fuzzy inference systems (ANFIS) are successfully applied to the modelling of drying rates for three drying technologies, namely convective air, heat pump and airless drying. The ANFIS systems are then integrated into a novel energy services model for the prediction of relative drying times, energy cost and atmospheric carbon dioxide emission levels. The author believes that this work constitutes the first use of fuzzy systems for the modelling of drying performance as an energy services approach to DSM. To gain an insight into the 'real world' use of energy for drying, this thesis presents a unique first-order energy audit of every ceramic sanitaryware manufacturing site in the UK. Previously unknown patterns of energy use are highlighted, and supplementary comments on the timing and use of drying systems are made. The limitations of such large-scope energy surveys are discussed.
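To make the ANFIS idea concrete, the sketch below evaluates a zero-order Takagi-Sugeno fuzzy model of drying rate with hand-written Gaussian memberships and rule consequents; it shows the kind of rule structure an ANFIS would tune, and every number in it is invented for illustration rather than taken from the fitted systems:

```python
import numpy as np

def gauss(x: float, centre: float, width: float) -> float:
    """Gaussian membership function."""
    return float(np.exp(-0.5 * ((x - centre) / width) ** 2))

def drying_rate(temp_c: float, moisture_frac: float) -> float:
    """Weighted-average defuzzification over three hand-written rules."""
    rules = [
        # (firing strength, consequent drying rate in kg/h) - all invented
        (gauss(temp_c, 40, 10) * gauss(moisture_frac, 0.8, 0.2), 0.5),
        (gauss(temp_c, 70, 10) * gauss(moisture_frac, 0.5, 0.2), 1.8),
        (gauss(temp_c, 90, 10) * gauss(moisture_frac, 0.2, 0.2), 0.9),
    ]
    w = np.array([r[0] for r in rules])
    y = np.array([r[1] for r in rules])
    return float((w * y).sum() / w.sum())

print(f"predicted drying rate: {drying_rate(65, 0.6):.2f} kg/h")
```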
Abstract:
Drying is an important unit operation in the process industries. Results have suggested that the proportion of total industrial energy used for drying increased from 12% in 1978 to 18% in 1990. A literature survey of previous studies regarding overall drying energy consumption has demonstrated that there is little continuity of methods, and energy trends could not be established. In the ceramics, timber and paper industrial sectors, specific energy consumption and energy trends have been investigated by auditing drying equipment. Ceramic products examined have included tableware, tiles, sanitaryware, electrical ceramics, plasterboard, refractories, bricks and abrasives. Data from industry have shown that drying energy has not varied significantly in the ceramics sector over the last decade, representing about 31% of the total energy consumed. Information from the timber industry has established that radical changes have occurred over the last 20 years, both in terms of equipment and energy utilisation. The energy efficiency of hardwood drying has improved by 15% since the 1970s, although no significant savings have been realised for softwood. A survey estimating the energy efficiency and operating characteristics of 192 paper dryer sections has been conducted. Drying energy was found to have increased to nearly 60% of the total energy used in the early 1980s, but has fallen over the last decade, representing 23% of the total in 1993. These results demonstrate that effective energy-saving measures, such as improved pressing and heat recovery, have been successfully implemented since the 1970s. Artificial neural networks have been successfully applied to model the process characteristics of microwave and convective drying of paper-coated gypsum coving. Parameters modelled have included product moisture loss, core gypsum temperature and quality factors relating to paper burning and bubbling defects. Evaluation of thermal and dielectric properties has highlighted gypsum's heat-sensitive characteristics in convective and electromagnetic regimes. Modelling of experimental data has shown that the networks were capable of simulating drying process characteristics to a high degree of accuracy: product weight and temperature were predicted to within 0.5% and 5 °C of the target data respectively. Furthermore, it was demonstrated that the underlying properties of the data could be predicted even in the presence of a high level of input noise.
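As a sketch of the network-regression approach, the following fits a small multilayer perceptron to synthetic drying data and reports held-out errors; the inputs, response surfaces and architecture are assumptions, not the thesis's experiments:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# synthetic process data: [air temperature degC, product moisture fraction]
X = rng.uniform([20, 0.1], [90, 1.0], size=(500, 2))
weight_loss = 0.02 * X[:, 0] * X[:, 1]      # invented response surfaces
core_temp = 25 + 0.6 * X[:, 0]
Y = np.column_stack([weight_loss, core_temp])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X[:400], Y[:400])                   # train on the first 400 samples
err = np.abs(net.predict(X[400:]) - Y[400:]).mean(axis=0)
print("mean absolute error (weight loss, core temperature):", err)
```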
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot directly be employed when dealing with missing data, and they struggle to capture global non-linear structures in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialising the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Furthermore, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
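One plausible reading of the arbitrary-projection initialisation is sketched below: the data are regressed onto a given 2-D projection (e.g. Isomap coordinates), the latent grid is mapped through that regression, and the GTM weight matrix is solved by least squares. All names and shapes here are hypothetical, and the thesis's actual formulation may differ:

```python
import numpy as np

def init_gtm_weights(X, proj, latent_grid, Phi):
    """Initialise the GTM weight matrix W from an arbitrary 2-D projection.

    X: (N, D) data; proj: (N, 2) projection of X (e.g. Isomap coordinates);
    latent_grid: (K, 2) latent points; Phi: (K, M) basis-function activations.
    Regress X on proj, map the latent grid through that regression, then
    solve Phi @ W ~= mapped grid for W by least squares."""
    g = (latent_grid - latent_grid.mean(0)) / latent_grid.std(0)
    p = (proj - proj.mean(0)) / proj.std(0)
    A, *_ = np.linalg.lstsq(np.c_[p, np.ones(len(p))], X, rcond=None)
    targets = np.c_[g, np.ones(len(g))] @ A                # (K, D)
    W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)      # (M, D)
    return W

# tiny demo with random data and a noisy linear "projection"
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
proj = X[:, :2] + 0.1 * rng.standard_normal((100, 2))
gx, gy = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
grid = np.c_[gx.ravel(), gy.ravel()]                       # 100 latent points
centres = grid[::10]                                       # 10 RBF centres
Phi = np.exp(-((grid[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / 0.18)
print(init_gtm_weights(X, proj, grid, Phi).shape)          # (10, 5)
```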
Abstract:
This thesis presents an approach to cutting dynamics during turning based upon the mechanism of deformation of work material around the tool nose known as "ploughing". Starting from the shearing process in the cutting zone and accounting for ploughing, new mathematical models relating turning force components to cutting conditions, tool geometry and tool vibration are developed. These models are developed separately for steady-state and for oscillatory turning with new and worn tools. Experimental results are used to determine mathematical functions expressing the parameters introduced by the steady-state model in the case of a new tool. The form of these functions is of general validity, though their coefficients depend on work and tool materials. Good agreement is achieved between experimental and predicted forces. On the one hand, the model is extended to include different work materials by introducing a hardness factor, and it provides good predictions when compared to present and published experimental results. On the other hand, the extension of the ploughing model to turning with a worn edge showed the ability of the model to predict machining forces during steady-state turning with the worn flank of the tool. In the development of the dynamic models, the dynamic turning force equations define the cutting process as a system for which vibration of the tool tip in the feed direction is the input and measured forces are the output. The model takes into account the shear-plane oscillation and the variation of the cutting configuration in response to tool motion. Theoretical expressions for the turning forces are obtained for new and worn cutting edges. The dynamic analysis revealed the interaction between the cutting mechanism and the machine tool structure. The effect of the machine tool and tool post is accounted for by using experimental data on the transfer function of the tool post system. Steady-state coefficients are corrected to include the changes in the cutting configuration with tool vibration and are used in the dynamic model. A series of oscillatory cutting tests at various conditions and various levels of tool flank wear was carried out, and experimental results are compared with model-predicted forces. Good agreement between predictions and experiments was achieved over a wide range of cutting conditions. This research bridges the gap between the analysis of vibration and the analysis of turning forces in turning. It offers an explicit expression for the dynamic turning force generated during machining and highlights the relationships between tool wear, tool vibration and turning force. Spectral analysis of tool acceleration and turning force components led to the definition of an "Inertance Power Ratio" as a flank wear monitoring factor. A formulation of an on-line flank wear monitoring methodology is presented, showing how the results of the present model can be applied to practical in-process tool wear monitoring in turning operations.
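The "Inertance Power Ratio" is the thesis's own construct; as a hedged stand-in, the sketch below takes inertance in its usual sense (acceleration over force) and forms the ratio of band power in the tool-acceleration spectrum to band power in the cutting-force spectrum, using Welch estimates on synthetic signals. The band, signals and definition details are assumptions:

```python
import numpy as np
from scipy import signal

def inertance_power_ratio(accel, force, fs, band=(50.0, 500.0)):
    """Band power of tool acceleration over band power of cutting force,
    from Welch spectral estimates (an assumed reading of the thesis's IPR)."""
    f, p_a = signal.welch(accel, fs=fs, nperseg=1024)
    _, p_f = signal.welch(force, fs=fs, nperseg=1024)
    m = (f >= band[0]) & (f <= band[1])
    return p_a[m].sum() / p_f[m].sum()

# synthetic signals standing in for measured force (N) and acceleration (m/s^2)
rng = np.random.default_rng(0)
fs = 5000.0
t = np.arange(50_000) / fs
force = 200 + 5 * np.sin(2 * np.pi * 120 * t) + rng.standard_normal(t.size)
accel = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.5 * rng.standard_normal(t.size)
print(f"IPR = {inertance_power_ratio(accel, force, fs):.4f}")
```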