840 results for hybrid prediction method
Abstract:
In the analysis and prediction of many real-world time series, the assumption of stationarity is not valid. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We introduce a new model which combines dynamic switching (controlled by a hidden Markov model) with a non-linear dynamical system. We show how to train this hybrid model in a maximum likelihood framework and evaluate its performance on both synthetic and financial data.
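As a rough illustration of the regime-switching idea (not the authors' exact model, which couples the hidden Markov chain to a non-linear dynamical system), the sketch below simulates a Markov-switching AR(1) series and evaluates its likelihood with the forward algorithm; all names and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 2                                    # number of hidden regimes
A = np.array([[0.98, 0.02],              # regime transition matrix
              [0.03, 0.97]])
phi = np.array([0.9, -0.4])              # per-regime AR(1) coefficients
sigma = np.array([0.5, 1.5])             # per-regime noise scales

def simulate(T):
    """Generate a series whose dynamics switch with a hidden Markov chain."""
    s, y = 0, [0.0]
    for _ in range(T - 1):
        s = rng.choice(K, p=A[s])
        y.append(phi[s] * y[-1] + sigma[s] * rng.normal())
    return np.array(y)

def log_likelihood(y):
    """Forward algorithm: the emission density at each step depends on the regime."""
    alpha = np.full(K, 1.0 / K)
    ll = 0.0
    for t in range(1, len(y)):
        resid = y[t] - phi * y[t - 1]    # one-step residual under each regime
        emis = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        alpha = (alpha @ A) * emis
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c                       # normalise to avoid underflow
    return ll

y = simulate(500)
print("log-likelihood of simulated series:", round(log_likelihood(y), 2))
```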
Abstract:
The distribution of finished products from depots to customers is a practical and challenging problem in logistics management. Better routing and scheduling decisions can result in a higher level of customer satisfaction, because more customers can be served in a shorter time. The distribution problem is generally formulated as the vehicle routing problem (VRP). The VRP, however, carries the rigid assumption of a single depot; in cases where a logistics company has more than one depot, it is not suitable. To remove this limitation, this paper focuses on the VRP with multiple depots, or multi-depot VRP (MDVRP). The MDVRP is NP-hard, meaning that no efficient algorithm for solving it to optimality is known. To deal with the problem efficiently, two hybrid genetic algorithms (HGAs) are developed in this paper. The major difference between the HGAs lies in initialization: HGA1 generates its initial solutions randomly, whereas HGA2 incorporates the Clarke and Wright savings method and the nearest-neighbour heuristic into the initialization procedure. A computational study is carried out to compare the algorithms across different problem sizes. The results show that the performance of HGA2 is superior to that of HGA1 in terms of total delivery time.
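The initialization contrast between the two HGAs can be sketched in a few lines. The following is a hypothetical stand-in for HGA2's constructive start: customers are assigned to their nearest depot and each depot's route is built with the nearest-neighbour heuristic (the Clarke and Wright savings step is omitted for brevity); coordinates and instance size are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
depots = rng.uniform(0, 100, size=(3, 2))      # depot coordinates
customers = rng.uniform(0, 100, size=(20, 2))  # customer coordinates

def dist(a, b):
    return float(np.hypot(*(a - b)))

# Cluster customers by nearest depot.
assign = {d: [] for d in range(len(depots))}
for c, pos in enumerate(customers):
    d = min(range(len(depots)), key=lambda i: dist(depots[i], pos))
    assign[d].append(c)

# Nearest-neighbour tour per depot: repeatedly visit the closest unvisited customer.
def nn_route(depot_xy, cust_ids):
    route, here, todo = [], depot_xy, set(cust_ids)
    while todo:
        nxt = min(todo, key=lambda c: dist(here, customers[c]))
        route.append(nxt)
        here = customers[nxt]
        todo.remove(nxt)
    return route

for d in assign:
    print(f"depot {d}: route {nn_route(depots[d], assign[d])}")
```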
Abstract:
Jackson (2005) developed a hybrid model of personality and learning, known as the Learning Styles Profiler (LSP), which was designed to span the biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can best be understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies of part-time workers undertaking tertiary education (N=137 and N=58), established models of approach and avoidance from each of the three research foci were compared with Jackson's hybrid model in their prediction of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation mediated the effect of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes, including dysfunctional ones.
Abstract:
The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined, and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method uses the machine geometry and the magnetic characteristics of the iron types used in the machine. From this information the permeance of each iron component of the machine is calculated, and by using the equivalent magnetic circuit of the machine the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor beyond the static torque.
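A toy sketch of the permeance-based torque calculation, under invented geometry: series iron and air-gap permeances are combined through an equivalent magnetic circuit, and the static torque follows from T = ½F²·dP/dθ for a singly excited variable-reluctance device. This illustrates the general approach, not the thesis's model.

```python
import numpy as np

mu0 = 4e-7 * np.pi
F = 400.0                                   # magnetomotive force (ampere-turns)

def airgap_permeance(theta):
    """Toy overlap model: permeance varies sinusoidally with rotor angle."""
    area = 1e-4 * (1.2 + np.cos(theta))     # effective overlap area (m^2)
    gap = 0.5e-3                            # air-gap length (m)
    return mu0 * area / gap

iron_permeances = [2e-6, 3e-6]              # fixed series iron-path permeances

def total_permeance(theta):
    # Series magnetic circuit: reluctances add, so permeances combine inversely.
    rel = sum(1.0 / p for p in iron_permeances) + 1.0 / airgap_permeance(theta)
    return 1.0 / rel

def torque(theta, dtheta=1e-5):
    # T = (1/2) F^2 dP/dtheta, with dP/dtheta taken numerically.
    dP = (total_permeance(theta + dtheta) - total_permeance(theta - dtheta)) / (2 * dtheta)
    return 0.5 * F**2 * dP

angles = np.linspace(0, np.pi, 7)
print([round(torque(t), 4) for t in angles])
```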
Abstract:
A rapid method for the analysis of biomass feedstocks was established to identify the quality of the pyrolysis products likely to impact on bio-oil production. A total of 15 Lolium and Festuca grasses known to exhibit a range of Klason lignin contents were analysed by pyroprobe-GC/MS (Py-GC/MS) to determine the composition of the thermal degradation products of lignin. The identification of key marker compounds, which are the derivatives of the three major lignin subunits (G, H, and S), allowed pyroprobe-GC/MS to be statistically correlated with the Klason lignin content of the biomass using the partial least-squares method to produce a calibration model. Data from this multivariate modelling procedure were then applied to identify likely "key marker" ions representative of the lignin subunits from the mass spectral data. The combined total abundance of the identified key markers for the lignin subunits exhibited a linear relationship with the Klason lignin content. In addition, the effect of alkali metal concentration on optimum pyrolysis characteristics was examined. Washing of the grass samples removed approximately 70% of the metals and changed the characteristics of the thermal degradation process and products. Overall, the data indicate that both the organic and inorganic composition of the biofuel impact on the pyrolysis process and that pyroprobe-GC/MS is a suitable analytical technique to assess lignin composition. © 2007 Elsevier B.V. All rights reserved.
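A minimal sketch of the calibration step, assuming scikit-learn's PLSRegression and synthetic stand-in data (the study's actual inputs were Py-GC/MS marker-ion abundances for the 15 grasses):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_ions = 15, 40                   # e.g. 15 grasses, 40 candidate ions
X = rng.lognormal(size=(n_samples, n_ions))  # marker-ion abundances (synthetic)
true_w = np.zeros(n_ions); true_w[:5] = 1.0  # a few ions carry the lignin signal
y = X @ true_w + rng.normal(scale=0.5, size=n_samples)   # Klason lignin (%)

# Partial least-squares calibration, assessed by cross-validation.
pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print("cross-validated R^2:", round(r2, 3))

# Loadings on the first component point to candidate "key marker" ions.
pls.fit(X, y)
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:5]
print("highest-weighted ion indices:", top)
```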
Abstract:
The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces are of the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even though some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis, capable of determining the plasma-wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as ℓi and qψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used to classify the input patterns into various classes; here the plasma input patterns have been divided between disrupting and safe patterns, making it possible to assign a disruption probability to every plasma input pattern. The second method determines the novelty of an input pattern by calculating the probability density distribution of successful plasma patterns that have been run at JET. The density distribution is represented as a mixture distribution, and its parameters are determined using the Expectation-Maximisation method. If the dataset used to determine the distribution parameters covers the machine operational space sufficiently well, then the patterns flagged as novel can be regarded as patterns belonging to a disrupting plasma. Together with these methods, a network has been designed to predict the vertical forces that a disruption can cause, in order to avoid running plasma configurations that are too dangerous. This network can be run before the pulse, using the pre-programmed plasma configuration, or on-line, becoming a tool for stopping dangerous plasma configurations. All these methods have been implemented in real time on a dual Pentium Pro based machine. The Disruption Prediction and Prevention System has shown that internal plasma parameters can be determined on-line with good accuracy. The disruption detection algorithms also showed promising results, considering that JET is an experimental machine where new plasma configurations are constantly tested in an attempt to improve its performance.
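A compact sketch of the second (novelty-detection) method described above, assuming scikit-learn's GaussianMixture as the EM-fitted mixture density and synthetic stand-ins for the plasma parameters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic "successful pulse" patterns, e.g. (internal inductance, safety factor).
safe = rng.normal(loc=[1.0, 3.5], scale=[0.1, 0.4], size=(2000, 2))

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(safe)                                # EM estimation of the mixture

# Novelty threshold: e.g. the 1st percentile of log-density on the safe set.
threshold = np.percentile(gmm.score_samples(safe), 1)

new_patterns = np.array([[1.02, 3.4],        # looks like a safe pulse
                         [1.80, 1.2]])       # far outside the operational space
flags = gmm.score_samples(new_patterns) < threshold
print("novel (possible disruption):", flags)
```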
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by locally inverting a forward model (which maps wind vectors to scatterometer observations) and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes as its prediction the maximum a posteriori (MAP) wind field retrieved from the posterior distribution. For the third technique, Markov Chain Monte Carlo (MCMC) methods were employed to estimate the mass associated with significant modes of the posterior distribution, and predictions were based on the mode with the greatest mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem; the general methods proved unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73.
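The mode-mass comparison in the third technique can be illustrated with a toy Metropolis random-walk sampler on a bimodal target, standing in for the two wind-direction modes left by the scatterometer's 180-degree ambiguity; the density, proposal scale and iteration counts below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta):
    # Two modes roughly 180 degrees apart (angles in radians).
    return np.logaddexp(-0.5 * ((theta - 1.0) / 0.3) ** 2,
                        -0.5 * ((theta - 1.0 + np.pi) / 0.3) ** 2)

samples, theta = [], 0.0
for _ in range(20000):
    prop = theta + rng.normal(scale=0.8)       # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                           # accept
    samples.append(theta)

samples = np.array(samples[5000:])             # discard burn-in
mass_a = np.mean(np.abs(samples - 1.0) < 1.0)  # mass near the first mode
# The prediction would be the mode carrying the greater posterior mass.
print(f"mass near mode A: {mass_a:.2f}, near mode B: {1 - mass_a:.2f}")
```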
Abstract:
In vitro studies of drug absorption processes are undertaken to assess drug candidate or formulation suitability, to investigate mechanisms, and ultimately to develop predictive models. This study included each of these approaches, with the aim of developing novel in vitro methods for inclusion in a drug absorption model. Two model analgesic drugs, ibuprofen and paracetamol, were selected. The study focused on three main areas: the interaction of the model drugs with co-administered antacids, the elucidation of the mechanisms responsible for the increased absorption rate observed in a novel paracetamol formulation, and the development of novel ibuprofen tablet formulations containing alkalising excipients as dissolution promoters. Several novel dissolution methods were developed. A method to study the interaction of drug/excipient mixtures in powder form was successfully used to select suitable dissolution-enhancing excipients. A method to study intrinsic dissolution rate using paddle apparatus was developed and used to study dissolution mechanisms. Methods to simulate stomach and intestine environments in terms of media composition and volume and drug/antacid doses were developed. Antacid addition greatly increased the dissolution of ibuprofen in the stomach model. Novel methods to measure drug permeability through rat stomach and intestine were developed, using sac methodology. The methods allowed direct comparison of the apparent permeability values obtained. Tissue stability, reproducibility and integrity were observed, with selectivity between paracellular and transcellular markers and between hydrophilic and lipophilic compounds within a homologous series of beta-blockers.
Abstract:
This thesis reports the development of a reliable method for predicting the response of large electric machines to electromagnetically induced vibration. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas: (1) the development and use of dynamic substructuring methods; (2) the development of special elements to represent individual machine components; (3) laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels; (4) experiments on machines on the factory test-bed to provide data for correlation with predictions; and (5) reasoning about the effect of various design features. The limiting factor in producing good vibration models for machines is the time required for an analysis. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. Three appendices to the main volume present methods developed by the author to accelerate analyses. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model. The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components; this is used to draw some conclusions about the probable effects of various design features.
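A toy illustration of the substructure-assembly idea (spring-mass chains rather than machine components, and without the thesis's acceleration methods): two substructures are coupled at a shared interface degree of freedom, and the composite natural frequencies follow from the assembled generalised eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

def chain(n, k=1e6, m=2.0):
    """Stiffness and mass matrices of an n-DOF spring-mass chain."""
    K = np.zeros((n, n)); M = np.eye(n) * m
    for i in range(n - 1):
        K[i:i + 2, i:i + 2] += k * np.array([[1, -1], [-1, 1]])
    K[0, 0] += k                               # grounded at the first DOF
    return K, M

Ka, Ma = chain(3)
Kb, Mb = chain(3)

# Assemble: the last DOF of substructure A coincides with the first DOF of B.
n = 3 + 3 - 1
K = np.zeros((n, n)); M = np.zeros((n, n))
K[:3, :3] += Ka;  M[:3, :3] += Ma
K[2:, 2:] += Kb;  M[2:, 2:] += Mb

w2, _ = eigh(K, M)                             # generalised eigenproblem
print("natural frequencies (Hz):", np.sqrt(w2) / (2 * np.pi))
```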
Abstract:
This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided among (a) the testing of the research hypotheses by matching each household's waste against its demographic/socio-economic characteristics, (b) the development of statistical models capable of predicting the waste arisings from an individual household and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility. The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
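A minimal sketch of the calibrate-then-predict idea, with invented socio-economic variables and coefficients standing in for the survey's actual questionnaire items:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 1277                                        # survey size from the abstract
household_size = rng.integers(1, 7, size=n)
income_band = rng.integers(1, 6, size=n)
has_garden = rng.integers(0, 2, size=n)

X = np.column_stack([household_size, income_band, has_garden])
# Synthetic "measured" arisings used only to calibrate the model.
kg_per_week = 3.0 + 2.5 * household_size + 0.8 * income_band \
              + 1.2 * has_garden + rng.normal(scale=1.5, size=n)

model = LinearRegression().fit(X, kg_per_week)

# Area-based estimate: apply the model to average census-style characteristics.
area_profile = np.array([[2.4, 3.1, 0.6]])      # e.g. ward-level means
print("predicted arisings (kg/household/week):", model.predict(area_profile))
```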
Abstract:
This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilize two magnetic materials, one of which is assumed to be lossless while the other has controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation flux requirement. Subsequent evaluation of the cost of hardware and capitalized losses of the proposed inrush control indicates that a cost reduction of approximately 50% is achieved in comparison with the inrush control hardware of the Sellindge converter station. Further mathematical studies are carried out to prove the adequacy of the proposed inrush control characteristics for controlling voltage and current transients during both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that the integrity of the thyristors is maintained.
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By re-defining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
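A rough illustration of re-defining a classic metric for Prolog (not the thesis's exact counting rules): the sketch below computes a McCabe-style complexity from clause counts per predicate plus explicit disjunctions and if-then-elses, on invented sample clauses.

```python
import re
from collections import Counter

source = """
max(X, Y, X) :- X >= Y.
max(_, Y, Y).
classify(N, neg) :- N < 0.
classify(0, zero).
classify(N, pos) :- N > 0.
abs_val(X, A) :- ( X < 0 -> A is -X ; A is X ).
"""

# Clause heads: a lowercase functor at the start of a line.
heads = re.findall(r"^([a-z]\w*)\s*\(", source, flags=re.MULTILINE)
clauses = Counter(heads)                        # clauses per predicate

# Explicit decision points: disjunction ';' and if-then '->'.
branches = source.count(";") + source.count("->")

# Each predicate contributes (clauses - 1) decision points, plus 1 baseline path.
complexity = 1 + sum(c - 1 for c in clauses.values()) + branches
print("clauses per predicate:", dict(clauses))
print("McCabe-style complexity:", complexity)
```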
Abstract:
We propose a hybrid generative/discriminative framework for semantic parsing which combines the hidden vector state (HVS) model and hidden Markov support vector machines (HM-SVMs). The HVS model is an extension of the basic discrete Markov model in which context is encoded as a stack-oriented state vector. HM-SVMs combine the advantages of hidden Markov models and support vector machines. By employing a modified K-means clustering method, a small set of the most representative sentences can be automatically selected from an un-annotated corpus. These sentences, together with their abstract annotations, are used to train an HVS model which can subsequently be applied to the whole corpus to generate semantic parsing results. The most confident semantic parsing results are selected to generate a fully-annotated corpus, which is used to train the HM-SVMs. The proposed framework has been tested on the DARPA Communicator data. Experimental results show an improvement over the baseline HVS parser when the hybrid framework is used. Compared with HM-SVMs trained from the fully-annotated corpus, the hybrid framework gave comparable performance with only a small set of lightly annotated sentences. © 2008. Licensed under the Creative Commons.
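A minimal sketch of the representative-sentence selection step, using plain K-means over TF-IDF vectors as a stand-in for the paper's modified K-means variant; the travel-domain sentences are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

sentences = [
    "i want to fly from boston to denver",
    "show me flights from boston to denver tomorrow",
    "what is the cheapest fare to atlanta",
    "list fares from dallas to atlanta",
    "book a hotel in san francisco for two nights",
    "i need a hotel near the airport in san francisco",
]

X = TfidfVectorizer().fit_transform(sentences)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# The sentence nearest each centroid is chosen for abstract annotation.
idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
for i in sorted(set(idx)):
    print("representative:", sentences[i])
```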
Abstract:
This thesis examined solar thermal collectors for use in alternative hybrid solar-biomass power plant applications in Gujarat, India. Following a preliminary review, the cost-effective selection and design of the solar thermal field were identified as critical factors underlying the success of hybrid plants. Consequently, the existing solar thermal technologies were reviewed and ranked for use in India by means of a multi-criteria decision-making method, the Analytical Hierarchy Process (AHP). Informed by the outcome of the AHP, the thesis went on to pursue the Linear Fresnel Reflector (LFR), the design of which was optimised with the help of ray-tracing. To further enhance collector performance, LFR concepts incorporating novel mirror spacing and drive mechanisms were evaluated. Subsequently, a new variant, termed the Elevation Linear Fresnel Reflector (ELFR), was designed, constructed and tested at Aston University, UK, thereby allowing theoretical models of the performance of a solar thermal field to be verified. Based on the resulting characteristics of the LFR, and data gathered for the other hybrid system components, models of hybrid LFR- and ELFR-biomass power plants were developed and analysed in TRNSYS®. The techno-economic and environmental consequences of varying the size of the solar field in relation to the total plant capacity were modelled for a series of case studies to evaluate different applications: tri-generation (electricity, ice and heat), electricity-only generation, and process heat. The case studies also encompassed varying site locations, capacities, operational conditions and financial situations. For a hybrid tri-generation plant in Gujarat, it was recommended to use an LFR solar thermal field of 14,000 m2 aperture with a 3 tonne biomass boiler, generating 815 MWh per annum of electricity for nearby villages and 12,450 tonnes of ice per annum for local fisheries and food industries. However, at the expense of a 0.3 ¢/kWh increase in levelised energy costs, the ELFR increased the savings of biomass (100 t/a) and land (9 ha/a). For solar thermal applications in areas with high land cost, the ELFR reduced levelised energy costs. It was determined that off-grid hybrid plants for tri-generation were the most feasible application in India. Although biomass-only plants were found to be more economically viable, it was concluded that hybrid systems will soon become cost-competitive and can considerably improve current energy security and biomass supply chain issues in India.
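A short sketch of the AHP ranking mechanics used in the technology selection: priority weights are taken from the principal eigenvector of a reciprocal pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The judgements below are invented, not those elicited in the thesis.

```python
import numpy as np

techs = ["parabolic trough", "linear Fresnel", "power tower", "dish"]

# A[i, j] = how strongly tech i is preferred to tech j (Saaty's 1-9 scale).
A = np.array([[1,   1/2, 3,   2],
              [2,   1,   4,   3],
              [1/3, 1/4, 1,   1/2],
              [1/2, 1/3, 2,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights

n = len(A)
CI = (eigvals[k].real - n) / (n - 1)           # consistency index
CR = CI / 0.90                                 # Saaty's random index for n = 4
for t, wi in sorted(zip(techs, w), key=lambda p: -p[1]):
    print(f"{t:18s} {wi:.3f}")
print("consistency ratio:", round(CR, 3))
```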
Abstract:
Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by tasks administered across the two types of stimuli. For example, phonological tasks typically tap explicit awareness (e.g., phoneme deletion), while auditory tasks usually measure implicit awareness (e.g., frequency discrimination). The stronger predictive power of speech tasks may therefore be due to their higher processing demands rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. 800 beginning readers at the onset of literacy tuition (mean age 4 years and 7 months) were assessed on these tasks as well as word reading and letter-knowledge in the first part of a three time-point longitudinal study. Results: Time 1 results reveal significantly stronger associations between letter-sound knowledge and the speech tasks than the non-speech tasks. Performance was better for phoneme than tone stimuli, and worse for deletion than for isolation and repetition across all stimuli. Conclusions: The results are consistent with phonological accounts of reading and suggest that the level of processing required by a task is less important than the stimulus type in predicting the earliest stage of reading.