135 results for optimisation algorithms
Abstract:
Meadowsweet was extracted in water at a range of temperatures (60–100 °C), and the total phenols, tannins, quercetin, salicylic acid content and colour were analysed. The extraction of total phenols followed pseudo first-order kinetics; the rate constant (k) increased from 0.09 ± 0.02 min−1 to 0.44 ± 0.09 min−1 as the temperature increased from 60 to 100 °C. An increase in temperature from 60 to 100 °C increased the concentration of total phenols extracted from 39 ± 2 to 63 ± 3 mg g−1 gallic acid equivalents, although it did not significantly affect the proportions of the tannin and non-tannin fractions. The extraction of quercetin and salicylic acid from meadowsweet also followed pseudo first-order kinetics, with the rate constants of both compounds increasing with temperature up to 90 °C. Therefore, aqueous extraction of meadowsweet at or above 90 °C for 15 min yields extracts high in phenols, which may be added to beverages.
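The pseudo first-order model used in this abstract can be sketched numerically. The rate constants and plateau concentrations below are the values quoted above; the functional form C(t) = C∞(1 − e^(−kt)) is the standard pseudo first-order expression, and the function name is illustrative, not the authors' code:

```python
import math

def extracted(c_inf, k, t):
    """Pseudo first-order extraction: C(t) = C_inf * (1 - exp(-k * t))."""
    return c_inf * (1.0 - math.exp(-k * t))

# Rate constants and plateau concentrations quoted in the abstract:
# 60 °C:  k = 0.09 min^-1, C_inf = 39 mg/g GAE -> ~28.9 mg/g after 15 min
# 100 °C: k = 0.44 min^-1, C_inf = 63 mg/g GAE -> ~62.9 mg/g after 15 min
c_60 = extracted(39.0, 0.09, 15.0)
c_100 = extracted(63.0, 0.44, 15.0)
```

At 100 °C the extraction is essentially complete within 15 min (the exponential term is near zero), which is why the abstract recommends extraction at or above 90 °C.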
Abstract:
The total phenols, apigenin 7-glucoside, turbidity and colour of extracts from dried chamomile flowers were studied with a view to developing chamomile extracts with potential anti-inflammatory properties for incorporation into beverages. The extraction of all constituents followed pseudo first-order kinetics. In general, the rate constant (k) increased as the temperature increased from 57 to 100 °C. The turbidity only increased significantly between 90 and 100 °C. Therefore, aqueous chamomile extracts had maximum total phenol concentration and minimum turbidity when extracted at 90 °C for 20 min. The effect of drying conditions on chamomile extracted under these conditions was then determined. A significant reduction in phenol concentration, from 19.7 ± 0.5 mg/g GAE in fresh chamomile to 13 ± 1 mg/g GAE, was found only in the plant material oven-dried at 80 °C (p ⩽ 0.05). The greatest colour change was between fresh chamomile and chamomile oven-dried at 80 °C, followed by the air-dried samples. There was no significant difference in colour between freeze-dried material and material oven-dried at 40 °C.
Abstract:
This study examines the numerical accuracy, computational cost, and memory requirements of self-consistent field theory (SCFT) calculations when the diffusion equations are solved with various pseudo-spectral methods and the mean field equations are iterated with Anderson mixing. The different methods are tested on the triply-periodic gyroid and spherical phases of a diblock-copolymer melt over a range of intermediate segregations. Anderson mixing is found to be somewhat less effective with the pseudo-spectral methods than when combined with the full-spectral method, but it nevertheless functions admirably well provided that a large number of histories is used. Of the different pseudo-spectral algorithms, the 4th-order one of Ranjan, Qin and Morse performs best, although not quite as efficiently as the full-spectral method.
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.
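The Pareto front of controller designs mentioned above rests on a dominance relation between objective vectors. The sketch below is a generic non-dominated filter under a minimisation convention, with hypothetical objective values; it is not the MOGA/LSDP implementation from the paper:

```python
def dominates(a, b):
    """True if objective vector a is no worse than b in every objective
    and strictly better in at least one (minimisation convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (response time, noise power gain) pairs for four controllers:
designs = [(1.0, 3.0), (2.0, 1.0), (2.0, 3.0), (3.0, 2.0)]
front = pareto_front(designs)   # (2.0, 3.0) and (3.0, 2.0) are dominated
```

In a MOGA, a filter like this (or a ranking based on it) drives selection pressure toward the Pareto front at each generation.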
Abstract:
In estimating the inputs to the Modern Portfolio Theory (MPT) portfolio optimisation problem, it is usual to use equally weighted historic data. Equal weighting of the data, however, does not take account of the current state of the market. Consequently, this approach is unlikely to perform well in any subsequent period, since the data still reflect market conditions that are no longer valid. Some return-weighting scheme that gives greater weight to the most recent data would thus seem desirable. This study therefore uses returns data that are weighted to give greater weight to the most recent observations, to see whether such a weighting scheme offers improved ex-ante performance over that based on unweighted data.
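One simple instance of such a return-weighting scheme is an exponentially weighted mean. The function name and the decay factor below are illustrative assumptions, not values from the study:

```python
def ew_mean(returns, decay=0.94):
    """Exponentially weighted mean of a return series (oldest first).
    The most recent observation gets weight 1, the one before it `decay`,
    and so on; decay=0.94 is an illustrative default, not from the study."""
    n = len(returns)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, returns)) / sum(weights)

# With decay < 1, a recent shock moves the estimate much more than an old one:
history = [0.01, 0.01, 0.01, -0.05]      # most recent return is -5%
ew = ew_mean(history, decay=0.5)         # -0.022, vs the equal-weighted -0.005
```

Setting `decay=1` recovers the equal-weighted estimate, so the scheme nests the conventional approach as a special case.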
Abstract:
This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assist (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.
Abstract:
Some points of the paper by N.K. Nichols (see ibid., vol.AC-31, p.643-5, 1986), concerning the robust pole assignment of linear multi-input systems, are clarified. It is stressed that the minimization of the condition number of the closed-loop eigenvector matrix does not necessarily lead to robustness of the pole assignment. It is shown why the computational method, which Nichols claims is robust, is in fact numerically unstable with respect to the determination of the gain matrix. In reply, Nichols presents arguments to support the choice of the conditioning of the closed-loop poles as a measure of robustness and to show that the methods of J. Kautsky, N. K. Nichols and P. Van Dooren (1985) are stable in the sense that they produce accurate solutions to well-conditioned problems.
Abstract:
A number of computationally reliable direct methods for pole assignment by feedback have recently been developed. These direct procedures do not necessarily produce robust solutions to the problem, however, in the sense that the assigned poles are insensitive to perturbations in the closed-loop system. This difficulty is illustrated here with results from a recent algorithm presented in this TRANSACTIONS and its causes are examined. A measure of robustness is described, and techniques for testing and improving robustness are indicated.
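The sensitivity issue described above can be seen in a toy 2 × 2 example: when the closed-loop matrix is nearly defective (ill-conditioned eigenvector matrix), a tiny perturbation moves the poles by a large amount. This is an illustrative sketch under assumed matrices, not the algorithm discussed in the paper:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the
    characteristic polynomial (assumes real eigenvalues)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr + disc) / 2.0, (tr - disc) / 2.0

eps = 1e-6
# Well-conditioned eigenvector matrix: the poles barely move.
well = eig2(1.0, 0.0, eps, 2.0)      # still (2.0, 1.0)
# Nearly defective closed-loop matrix: the same tiny perturbation in
# the (2,1) entry shifts the poles from (1, 1) to roughly (2, 0).
ill = eig2(1.0, 1e6, eps, 1.0)
```

The O(1) shift from an O(1e-6) perturbation is exactly the non-robustness a conditioning-based measure is designed to detect.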
Abstract:
The solution of the pole assignment problem by feedback in singular systems is parameterized and conditions are given which guarantee the regularity and maximal degree of the closed loop pencil. A robustness measure is defined, and numerical procedures are described for selecting the free parameters in the feedback to give optimal robustness.
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states in X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either some external measured magnetic field or some internal current. However, we cannot measure the current directly, since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method cannot remain stable as the number of measurement points becomes large.
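Fisher's linear discriminant computes a projection direction w ∝ S_w⁻¹(m₁ − m₂), so the within-class scatter matrix S_w must be inverted, and that inversion is where the ill-posedness enters. The two-dimensional sketch below uses invented toy data (not the fuel-cell measurements) and a Tikhonov-style `ridge` parameter as one illustrative form of the regularization the abstract calls for:

```python
def fisher_direction(class_a, class_b, ridge=0.0):
    """Fisher's linear discriminant in 2-D: w ∝ (S_w + ridge*I)^-1 (m_a - m_b).
    `ridge` is an illustrative Tikhonov-style regulariser for near-singular S_w."""
    def mean(pts):
        n = len(pts)
        return (sum(x for x, y in pts) / n, sum(y for x, y in pts) / n)

    def scatter(pts, m):
        sxx = sum((x - m[0]) ** 2 for x, y in pts)
        sxy = sum((x - m[0]) * (y - m[1]) for x, y in pts)
        syy = sum((y - m[1]) ** 2 for x, y in pts)
        return sxx, sxy, syy

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sxx, sxy, syy = sa[0] + sb[0] + ridge, sa[1] + sb[1], sa[2] + sb[2] + ridge
    det = sxx * syy - sxy * sxy   # near-zero det makes w blow up without ridge
    dx, dy = ma[0] - mb[0], ma[1] - mb[1]
    return ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)

# Toy data: two well-separated clusters with identical within-class scatter.
w = fisher_direction([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                     [(3.0, 0.0), (4.0, 0.0), (3.0, 1.0)])  # ~(-3.0, -1.5)
```

When the classes become nearly collinear, `det` approaches zero and the unregularized direction is dominated by noise, mirroring the instability the abstract proves for large numbers of measurement points.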
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots), would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
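Of the estimation approaches named above, the Metropolis algorithm is the simplest to sketch. The sampler below targets a generic log-posterior over a single scalar parameter; the function names, step settings, and the standard-normal stand-in target are illustrative assumptions, not the REFLEX participants' code:

```python
import math
import random

def metropolis(log_post, x0, steps=5000, prop_sd=0.5, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D parameter.
    A proposal x' is accepted with probability
    min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    x, samples = x0, []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, prop_sd)
        if math.log(rng.random()) < log_post(xp) - log_post(x):
            x = xp                     # accept the proposed parameter value
        samples.append(x)
    return samples

# Use a standard normal "posterior" as a stand-in for a real parameter
# posterior; the chain's samples then approximate N(0, 1), and quantiles
# of the chain give the kind of 90% confidence intervals reported above.
chain = metropolis(lambda x: -0.5 * x * x, 0.0)
```

In an MDF setting, `log_post` would combine the C-model's misfit to the NEE and LAI observations with the parameter priors; the chain then yields both point estimates and the confidence intervals the study compares across algorithms.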
Abstract:
A focused library of potential hydrogelators, each containing two substituted aromatic residues separated by a urea or thiourea linkage, has been synthesised and characterised. Six of these novel compounds are highly efficient hydrogelators, forming gels in aqueous solution at low concentrations (0.03–0.60 wt %). Gels were formed through a pH-switching methodology, by acidification of a basic solution (pH 14 to ≈4) either by addition of HCl or via the slow hydrolysis of glucono-δ-lactone. Frequently, gelation was accompanied by a dramatic switch in the absorption spectra of the gelators, resulting in a significant change in colour, typically from a vibrant orange to pale yellow. Each of the gels was capable of sequestering significant quantities of the aromatic cationic dye methylene blue from aqueous solution (up to 1.02 g of dye per gram of dry gelator). Cryo-transmission electron microscopy of two of the gels revealed an extensive network of high-aspect-ratio fibres. The structure of the fibres altered dramatically upon addition of 20 wt % of the dye, resulting in aggregation and significant shortening of the fibrils. This study demonstrates the feasibility of these novel gels finding application as inexpensive and effective water purification platforms.