47 results for genetic algorithm-kernel partial least squares


Relevance:

100.00%

Publisher:

Abstract:

This paper presents an approach to the optimal design of a fully regenerative dynamic dynamometer using genetic algorithms. The proposed dynamometer system includes an energy storage mechanism that adaptively absorbs the energy variations accompanying dynamometer transients. This minimises the power electronics capacity required at the mains supply grid, which needs only to compensate for losses. The overall dynamometer is a complex dynamic system, and its design is a multi-objective problem that calls for advanced optimisation techniques such as genetic algorithms. A case study covering the design and simulation of the dynamometer system indicates that the genetic algorithm based approach is able to locate the best available solution with respect to system performance and computational cost.
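As a rough illustration of the kind of search involved, the sketch below implements a minimal real-valued genetic algorithm (tournament selection, blend crossover, Gaussian mutation) minimising a placeholder weighted-sum objective; the design variables, bounds, and objective function are invented stand-ins, not the paper's dynamometer model.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder weighted-sum design objective; the real dynamometer
    # performance/cost model would replace this function.
    power_rating, storage_size = x
    return 0.7 * power_rating**2 + 0.3 * (storage_size - 1.0)**2

def ga_minimize(f, bounds, pop_size=40, generations=100, mut_sigma=0.1):
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    best_x, best_f = None, np.inf
    for _ in range(generations):
        fitness = np.apply_along_axis(f, 1, pop)
        if fitness.min() < best_f:                      # keep the best design seen so far
            best_f, best_x = fitness.min(), pop[fitness.argmin()].copy()
        # Tournament selection: the better of two randomly chosen individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fitness[i] < fitness[j], i, j)]
        # Blend crossover between consecutive parents.
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back into the design bounds.
        children += rng.normal(0.0, mut_sigma, children.shape)
        pop = np.clip(children, lo, hi)
    return best_x, best_f

best_x, best_f = ga_minimize(objective, bounds=[(0.0, 5.0), (0.0, 5.0)])
print(best_x, best_f)
```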

Relevance:

100.00%

Publisher:

Abstract:

T cells recognize peptide epitopes bound to major histocompatibility complex molecules. Human T-cell epitopes have diagnostic and therapeutic applications in autoimmune diseases. However, their accurate definition within an autoantigen by T-cell bioassay, usually proliferation, involves many costly peptides and a large amount of blood. We have therefore developed a strategy to predict T-cell epitopes and applied it to tyrosine phosphatase IA-2, an autoantigen in IDDM, and to HLA-DR4(*0401). First, the binding of synthetic overlapping peptides encompassing IA-2 to purified DR4 was measured directly. Secondly, a large set of HLA-DR4 binding data was analysed by alignment using a genetic algorithm and used to train an artificial neural network to predict binding affinity. This bioinformatic prediction method was then validated experimentally and used to predict DR4-binding peptides in IA-2. The binding set encompassed 85% of experimentally determined T-cell epitopes. Both the experimental and bioinformatic methods had high negative predictive values, 92% and 95%, indicating that this strategy of combining experimental results with computer modelling should lead to a significant reduction in the amount of blood and the number of peptides required to define T-cell epitopes in humans.
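For reference, the negative predictive values quoted above are simply the fraction of predicted non-binders that are truly non-epitopes; a toy calculation with invented counts:

```python
# Hypothetical confusion-matrix counts for a binding/epitope prediction.
true_negatives = 55    # predicted non-binder, confirmed non-epitope
false_negatives = 3    # predicted non-binder, but actually an epitope

npv = true_negatives / (true_negatives + false_negatives)
print(f"negative predictive value = {npv:.0%}")   # ~95% with these invented counts
```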

Relevance:

100.00%

Publisher:

Abstract:

The problem of extracting pore size distributions from characterization data is solved here, with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with the isotherm data fitted by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solution is also developed, which modifies any original solution containing negative values. The technique yields stable and converged solutions and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice between a relative and an absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
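A minimal sketch of the regularized, non-negativity-constrained least-squares step described above, assuming a discretized kernel matrix K from the adsorption integral and measured isotherm data q; the Tikhonov-style augmentation and scipy's nnls solver below are stand-ins for the collocation/regularization machinery in RIDFEC, and the kernel and data are invented.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls(K, q, lam=1e-3):
    # Solve  min_f ||K f - q||^2 + lam * ||f||^2  subject to f >= 0
    # by stacking the regularization term under the kernel and calling NNLS.
    n = K.shape[1]
    K_aug = np.vstack([K, np.sqrt(lam) * np.eye(n)])
    q_aug = np.concatenate([q, np.zeros(n)])
    f, _ = nnls(K_aug, q_aug)
    return f

# Illustrative use with an invented kernel: each column is the local isotherm
# of one pore size; the "true" distribution and noise level are assumptions.
rng = np.random.default_rng(1)
pressures = np.linspace(0.01, 0.99, 50)
pore_sizes = np.linspace(1.0, 10.0, 20)                     # nm, hypothetical grid
K = 1.0 / (1.0 + np.exp(-20.0 * (pressures[:, None] - pore_sizes[None, :] / 10.0)))
true_f = np.exp(-(pore_sizes - 4.0) ** 2)
q = K @ true_f + 0.01 * rng.normal(size=pressures.size)

f_est = regularized_nnls(K, q)
```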

Relevance:

100.00%

Publisher:

Abstract:

Residence time distribution studies of gas flow through a rotating drum bioreactor for solid-state fermentation were performed using carbon monoxide as a tracer gas. The exit concentration as a function of time differed considerably from the profiles expected for plug flow, plug flow with axial dispersion, and continuous stirred tank reactor (CSTR) models. The data were then fitted by least-squares analysis to mathematical models describing a central plug flow region surrounded by either one dead region (a three-parameter model) or two dead regions (a five-parameter model). Model parameters were the dispersion coefficient in the central plug flow region, the volumes of the dead regions, and the exchange rates between the different regions. The superficial velocity of the gas through the reactor has a large effect on the parameter values: increased superficial velocity tends to decrease dead region volumes, interregion transfer rates, and axial dispersion. The significant deviation of the gas residence time distribution within small-scale reactors from CSTR, plug flow, and plug flow with axial dispersion behaviour can lead to underestimation of mass and heat transfer coefficients, and hence has implications for reactor design and scale-up. (C) 2001 John Wiley & Sons, Inc.
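The sketch below shows the general shape of such a least-squares fit, using the ideal CSTR residence-time distribution E(t) = exp(-t/τ)/τ as the model being fitted to invented tracer data; the paper's three- and five-parameter dead-region models would replace this single-parameter expression.

```python
import numpy as np
from scipy.optimize import curve_fit

def cstr_rtd(t, tau):
    # Ideal CSTR residence-time distribution: E(t) = exp(-t / tau) / tau.
    return np.exp(-t / tau) / tau

# Invented tracer-response data (time in seconds, normalized exit concentration).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 300.0, 60)
e_measured = cstr_rtd(t, 75.0) + 5e-4 * rng.normal(size=t.size)

# Least-squares estimate of the mean residence time tau.
(tau_hat,), _ = curve_fit(cstr_rtd, t, e_measured, p0=[50.0])
print(f"fitted tau = {tau_hat:.1f} s")
```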

Relevance:

100.00%

Publisher:

Abstract:

The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) arises when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) arises when a sequence of measurements is taken on an individual tree over time and the variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure arises when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits of incorporating stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

On the basis of a spatially distributed sediment budget across a large basin, the costs of achieving given sediment reduction targets in rivers were estimated. A range of investment prioritization scenarios was tested to identify the most cost-effective strategy for controlling suspended sediment loads. The scenarios were based on successively introducing more information from the sediment budget. The effect of the spatial heterogeneity of contributing sediment sources on the cost-effectiveness of prioritization was also investigated. Cost-effectiveness was shown to increase with the sequential introduction of sediment budget terms. The lowest-cost solution was achieved by including spatial information linking sediment sources to the downstream target location. This solution produced cost curves similar to those derived using a genetic algorithm formulation. Appropriate investment prioritization can offer large cost savings, because costs can vary severalfold depending on which type of erosion source or sediment delivery mechanism is targeted. Target setting that considers only erosion source rates can result in spending more money than random management intervention to achieve downstream targets. Coherent spatial patterns of contributing sediment emerge from the budget model and its many inputs, and the heterogeneity in these patterns can be summarized in a succinct form. This summary was shown to be consistent with the cost difference between local and regional prioritization for three of the four test catchments; to explain the effect in the fourth catchment, the detail of individual sediment sources needed to be taken into account.

Relevance:

100.00%

Publisher:

Abstract:

The role of physiological understanding in improving the efficiency of breeding programs is examined, largely from the perspective of conventional breeding programs. The impact of physiological research on breeding programs to date, and the nature of that research, was assessed from (i) responses to a questionnaire distributed to plant breeders and physiologists, and (ii) a survey of literature abstracts. Ways to better utilise physiological understanding for improving breeding programs are suggested, together with possible constraints to delivering beneficial outcomes. Responses to the questionnaire indicated a general view that the contribution of crop physiology to date has been modest, although most of those surveyed expected the contribution to be larger over the next 20 years. Some constraints to progress perceived by breeders and physiologists were highlighted. The survey of literature abstracts indicated that, from a plant breeding perspective, much physiological research progresses no further than making suggestions about possible approaches to selection. There was limited evidence in the literature of objective comparison of such suggestions with existing methodology, or of their development and application within active breeding programs. It is argued in this paper that developing outputs from physiological research for breeding requires a good understanding of the breeding program(s) being serviced and the factors affecting their performance. Simple quantitative genetic models, or at least the ideas they represent, should be considered in conducting physiological research and in envisaging and evaluating outputs. The key steps of a generalised breeding program are outlined, and the potential pathways for physiological understanding to impact on these steps are discussed. Impact on breeding programs may arise through (i) better choice of environments in which to conduct selection trials, (ii) identification of selection criteria and traits for focused introgression programs, and (iii) identification of traits for indirect selection criteria as an adjunct to criteria already used. While many breeders and physiologists apparently recognise that physiological understanding may have a major role in the first area, there appears to be relatively little research activity targeting this issue, and a corresponding bias, arguably unjustified, toward examining traits for indirect selection. Furthermore, research on traits aimed at crop improvement is often deficient because key genetic parameters, such as genetic variation in relevant breeding populations and genetic (as opposed to phenotypic) correlations with yield or other characters of economic importance, are not properly considered. Some areas requiring special attention for successfully interfacing physiological research with breeding are discussed. These include (i) the need to work with relevant genetic populations, (ii) close integration of the physiological research with an active breeding program, and (iii) the dangers of a pre-defined or narrow focus in the physiological research.

Relevance:

100.00%

Publisher:

Abstract:

The small-sample performance of Granger causality tests under different model dimensions, degrees of cointegration, directions of causality, and levels of system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all three test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
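As a point of reference, a standard Granger causality test on a bivariate system can be run with statsmodels in a few lines; the simulated series below are assumptions, and this unrestricted-VAR, OLS-based test is only a rough analogue of the MWALD/LR procedures compared in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulate a bivariate system in which x Granger-causes y (invented coefficients).
rng = np.random.default_rng(3)
n = 200
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

# statsmodels tests whether the second column helps predict the first.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
```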

Relevance:

100.00%

Publisher:

Abstract:

When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
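A minimal sketch of the augmentation idea described above: a linear constraint on the coefficients (here a fixed adding-up constraint, though its value could change each period) is appended to the observation equation as an extra noise-free "observation", and an ordinary Kalman filter is run on the augmented system. The random-walk coefficient model and the numbers below are illustrative assumptions, not the paper's demand-analysis example.

```python
import numpy as np

def constrained_kalman_filter(y, X, r_row, q, sigma_eps=0.5, sigma_eta=0.05):
    # Kalman filter for a random-walk coefficient model y_t = x_t' b_t + e_t,
    # with the linear constraint r_row @ b_t = q appended to the observation
    # equation as an extra observation with zero measurement variance.
    k = X.shape[1]
    b = np.zeros(k)                       # state estimate
    P = 10.0 * np.eye(k)                  # state covariance (vague prior)
    Q = (sigma_eta ** 2) * np.eye(k)      # state innovation covariance
    states = []
    for t in range(len(y)):
        P = P + Q                                    # prediction (random-walk transition)
        Z = np.vstack([X[t], r_row])                 # augmented 2 x k design matrix
        obs = np.array([y[t], q])                    # data point plus constraint value
        H = np.diag([sigma_eps ** 2, 0.0])           # constraint row is noise-free
        S = Z @ P @ Z.T + H
        K = P @ Z.T @ np.linalg.inv(S)
        b = b + K @ (obs - Z @ b)
        P = (np.eye(k) - K @ Z) @ P
        states.append(b.copy())
    return np.array(states)

# Illustrative data: two coefficients constrained to sum to one (adding-up constraint).
rng = np.random.default_rng(4)
T = 100
X = rng.normal(size=(T, 2))
y = X @ np.array([0.3, 0.7]) + 0.5 * rng.normal(size=T)
filtered = constrained_kalman_filter(y, X, r_row=np.array([1.0, 1.0]), q=1.0)
print(filtered[-1])    # final coefficient estimates sum to (approximately) one
```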

Relevance:

100.00%

Publisher:

Abstract:

This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
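A sketch of a probit-based betting rule of the kind described: fit a probit model to a binary outcome (say, the home team covering the spread) and bet only where the predicted probability is comfortably above one half. The covariates, simulated data, and 0.6 threshold below are illustrative assumptions, not the article's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
spread = rng.normal(0.0, 6.0, n)            # hypothetical point spread
recent_form = rng.normal(0.0, 1.0, n)       # hypothetical recent-form variable
latent = 0.05 * spread + 0.3 * recent_form + rng.normal(size=n)
covers = (latent > 0).astype(int)           # 1 if the home team covers the spread

X = sm.add_constant(np.column_stack([spread, recent_form]))
probit_fit = sm.Probit(covers, X).fit(disp=False)

# Bet only where the model is relatively confident, e.g. P(cover) > 0.6.
p_hat = probit_fit.predict(X)
bets = p_hat > 0.6
print(f"placing {bets.sum()} bets out of {n} games")
```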

Relevance:

100.00%

Publisher:

Abstract:

Numerical optimisation methods are increasingly being applied to agricultural systems models to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared; both the literature and our own studies identify evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-valued genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations are listed to assist practitioners applying evolutionary algorithms to agricultural systems problems. (C) 2001 Elsevier Science Ltd. All rights reserved.
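For continuous management variables, an off-the-shelf evolutionary routine can stand in for a hand-rolled genetic algorithm; the sketch below uses scipy's differential evolution (a related evolutionary algorithm, not the paper's real-valued GA) on an invented two-variable profit function, where a whole-property simulation model would normally supply the objective.

```python
import numpy as np
from scipy.optimize import differential_evolution

def negative_profit(x):
    # Toy stand-in for a property simulation: profit as a function of stocking
    # rate and supplement level, with diminishing returns and a cost term.
    stocking_rate, supplement = x
    revenue = 120.0 * stocking_rate * (1.0 - np.exp(-0.8 * supplement))
    cost = 15.0 * stocking_rate ** 2 + 40.0 * supplement
    return -(revenue - cost)   # minimise the negative of profit

result = differential_evolution(
    negative_profit,
    bounds=[(0.1, 5.0), (0.0, 3.0)],   # hypothetical bounds on the two decisions
    seed=6,
)
print(result.x, -result.fun)
```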

Relevance:

100.00%

Publisher:

Abstract:

The microwave and thermal cure processes for the epoxy-amine systems N,N,N',N'-tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) with diaminodiphenyl sulfone (DDS) and with diaminodiphenyl methane (DDM) have been investigated. The DDS system was studied at a single cure temperature of 433 K and a single stoichiometry of 27 wt%, while the DDM system was studied at two stoichiometries, 19 and 32 wt%, and a range of temperatures between 373 and 413 K. The best values of the kinetic rate parameters for the consumption of amines have been determined by a least squares curve fit to a model for epoxy-amine cure. The activation energies of the rate parameters for the MY721/DDM system were determined, as was the overall activation energy for the cure reaction, which was found to be 62 kJ mol⁻¹. No evidence was found for any specific effect of the microwave radiation on the rate parameters, and both systems were found to be characterized by a negative substitution effect. Copyright (C) 2001 John Wiley & Sons, Ltd.
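The overall activation energy quoted above is the kind of quantity that falls out of an Arrhenius fit of rate constants obtained at several cure temperatures; a sketch with invented rate constants (chosen only to give a value of roughly that magnitude):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical fitted rate constants at three cure temperatures (values invented).
T = np.array([373.0, 393.0, 413.0])            # K
k = np.array([2.1e-4, 5.8e-4, 1.5e-3])         # s^-1

# Arrhenius relation: ln k = ln A - Ea / (R T), i.e. a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")
```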

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, obtained using both two stage least squares (2SLS) and ordinary least squares (OLS) estimation, reveal the existence of feedback effects between the two economic blocs. The results also show that during periods of slack oil prices, the GCC's income from its overseas investments helped to finance its imports from the EU.
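A bare-bones numpy sketch of the two-stage least squares idea used in such simultaneous-equation models: regress the endogenous regressor on an instrument, then regress the outcome on the fitted values. The variable names (oil price as instrument, GCC income, EU exports) and the simulated data are placeholders, not the paper's actual specification.

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z):
    # 2SLS for a single endogenous regressor: stage 1 projects x_endog on the
    # instrument z; stage 2 regresses y on the fitted values (plus a constant).
    n = len(y)
    Z1 = np.column_stack([np.ones(n), z])
    gamma, *_ = np.linalg.lstsq(Z1, x_endog, rcond=None)
    x_hat = Z1 @ gamma
    X2 = np.column_stack([np.ones(n), x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta

# Illustrative data with a simple feedback structure (coefficients invented).
rng = np.random.default_rng(7)
n = 300
oil_price = rng.normal(size=n)                                   # instrument
gcc_income = 1.0 + 0.8 * oil_price + 0.5 * rng.normal(size=n)    # endogenous regressor
eu_exports = 2.0 + 0.6 * gcc_income + 0.5 * rng.normal(size=n)   # outcome

print(two_stage_least_squares(eu_exports, gcc_income, oil_price))
```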

Relevance:

100.00%

Publisher:

Abstract:

The bulk free radical copolymerization of 2-hydroxyethyl methacrylate (HEMA) with N-vinyl-2-pyrrolidone (VP) was carried out to low conversions at 50 °C, using benzoyl peroxide (BPO) as initiator. The compositions of the copolymers were determined using ¹³C NMR spectroscopy, and the conversion of monomer to polymer was monitored using FT-NIR spectroscopy. From model fits to the composition data, a statistical F-test revealed that the penultimate model describes the copolymerization better than the terminal model. Reactivity ratios were calculated using non-linear least squares (NLLS) analysis; r(H) = 8.18 and r(V) = 0.097 were found to be the best-fit values for the terminal model, and r(HH) = 12.0, r(VH) = 2.20, r(VV) = 0.12 and r(HV) = 0.03 for the penultimate model. Predictions were made for changes in composition as a function of conversion based upon the terminal and penultimate models.
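A sketch of the terminal-model reactivity-ratio fit described above, applying non-linear least squares to the Mayo-Lewis copolymer composition equation; the feed and copolymer composition points below are invented, not the HEMA/VP data set.

```python
import numpy as np
from scipy.optimize import curve_fit

def mayo_lewis(f1, r1, r2):
    # Terminal-model (Mayo-Lewis) instantaneous copolymer composition F1
    # as a function of the monomer-1 feed fraction f1.
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2
    return num / den

# Hypothetical low-conversion data: feed fraction f1 vs copolymer fraction F1.
rng = np.random.default_rng(8)
f1 = np.linspace(0.1, 0.9, 9)
F1_obs = mayo_lewis(f1, 8.0, 0.1) + 0.01 * rng.normal(size=f1.size)

# Non-linear least-squares estimates of the two reactivity ratios.
(r1_hat, r2_hat), _ = curve_fit(mayo_lewis, f1, F1_obs, p0=[1.0, 1.0])
print(f"r1 = {r1_hat:.2f}, r2 = {r2_hat:.2f}")
```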

Relevance:

100.00%

Publisher:

Abstract:

The microwave and thermal cure processes for the epoxy-amine systems of the epoxy resin diglycidyl ether of bisphenol A (DGEBA) with 4,4'-diaminodiphenyl sulphone (DDS) and with 4,4'-diaminodiphenyl methane (DDM) have been investigated for 1:1 stoichiometries using fiber-optic FT-NIR spectroscopy. The DGEBA used was in the form of Ciba-Geigy GY260 resin. The DDM system was studied at a single cure temperature of 373 K and a single stoichiometry of 20.94 wt%, while the DDS system was studied at a stoichiometry of 24.9 wt% and a range of temperatures between 393 and 443 K. The best values of the kinetic rate parameters for the consumption of amines have been determined by a least squares curve fit to a model for epoxy-amine cure. The activation energies for the polymerization of the DGEBA/DDS system were determined for both cure processes and found to be 66 and 69 kJ mol⁻¹ for the microwave and thermal cure processes, respectively. No evidence was found for any specific effect of the microwave radiation on the rate parameters, and both systems were found to be characterized by a negative substitution effect. Copyright (C) 2002 John Wiley & Sons, Ltd.