137 results for Meta-heuristics algorithms
Abstract:
This paper presents a meta-analysis of the economic and agronomic performance of genetically modified (GM) crops worldwide. Bayesian, classical and non-parametric approaches were used to evaluate the performance of GM crops versus their conventional counterparts. The two main GM crop traits (herbicide-tolerant (HT) and insect-resistant (Bt)) and three of the main GM crops produced worldwide (Bt cotton, HT soybean and Bt maize) were analysed in terms of yield, production cost and gross margin. The analysis covers developing and developed countries, six world regions, and all countries combined. The statistical analyses indicate that GM crops perform better than their conventional counterparts in both agronomic and economic (gross margin) terms. With regard to countries' level of development, GM crops tend to perform better in developing than in developed countries, with Bt cotton being the most profitable crop grown.
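As a rough illustration of the "classical" strand of such an analysis, the sketch below pools per-study mean differences with a DerSimonian-Laird random-effects model; the yield-difference figures are invented placeholders, not data from the paper.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)    # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se_re, tau2

# Hypothetical per-study GM-minus-conventional yield differences (t/ha) and variances
pooled, se, tau2 = dersimonian_laird([0.4, 0.7, 0.2, 0.9], [0.05, 0.08, 0.03, 0.10])
print(f"pooled difference: {pooled:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")
```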
Abstract:
This study examines the numerical accuracy, computational cost, and memory requirements of self-consistent field theory (SCFT) calculations when the diffusion equations are solved with various pseudo-spectral methods and the mean-field equations are iterated with Anderson mixing. The different methods are tested on the triply periodic gyroid and spherical phases of a diblock-copolymer melt over a range of intermediate segregations. Anderson mixing is found to be somewhat less effective with the pseudo-spectral methods than with the full-spectral method, but it nevertheless performs well provided that a large number of histories is used. Of the different pseudo-spectral algorithms, the fourth-order one of Ranjan, Qin and Morse performs best, although not quite as efficiently as the full-spectral method.
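For readers unfamiliar with the mixing step, the following is a minimal sketch of Anderson mixing for a generic fixed-point problem w = f(w), keeping a short history of fields and residuals as in SCFT practice; the map f, the mixing fraction and the history length are illustrative stand-ins rather than the paper's implementation.

```python
import numpy as np

def anderson_mixing(f, w0, n_hist=5, beta=0.1, tol=1e-8, max_iter=500):
    """Anderson-accelerated fixed-point iteration for w = f(w).

    Keeps up to n_hist previous (field, residual) pairs and extrapolates so as to
    minimise the norm of the mixed residual; beta is the simple-mixing fraction
    applied on top of the extrapolation.
    """
    w = np.asarray(w0, dtype=float)
    W, R = [], []                              # histories of iterates and residuals
    for it in range(max_iter):
        r = f(w) - w                           # residual of the fixed-point map
        if np.linalg.norm(r) < tol:
            return w, it
        W.append(w.copy()); R.append(r.copy())
        if len(W) > n_hist + 1:
            W.pop(0); R.pop(0)
        m = len(W) - 1
        if m == 0:
            w = w + beta * r                   # plain (Picard) mixing on the first step
        else:
            # least-squares coefficients for the residual differences
            dR = np.column_stack([R[-1] - R[i] for i in range(m)])
            gamma, *_ = np.linalg.lstsq(dR, R[-1], rcond=None)
            dW = np.column_stack([W[-1] - W[i] for i in range(m)])
            w = W[-1] - dW @ gamma + beta * (R[-1] - dR @ gamma)
    return w, max_iter

# Tiny usage example on a contraction mapping (stand-in for the SCFT field update)
sol, iters = anderson_mixing(lambda w: np.cos(w), np.zeros(3))
print(iters, sol)  # converges to the fixed point of cos(w) (~0.739) in a few iterations
```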
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs based on a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify the range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization of the controller weights (MOGA). A clonal selection algorithm is used to direct the search of the GA further towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
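As context for the multi-objective GA step, the sketch below shows the core operation such an optimiser repeats every generation: extracting the non-dominated (Pareto) set from a population scored on several objectives. The objective columns and the population values are hypothetical illustrations of Pareto dominance, not the controller-design code from the paper.

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of non-dominated rows (all objectives are minimised)."""
    obj = np.asarray(objectives, dtype=float)
    n = obj.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not non_dominated[i]:
            continue
        # candidate i is dominated if some row is no worse everywhere and better somewhere
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if np.any(dominates_i):
            non_dominated[i] = False
    return non_dominated

# Hypothetical population: columns = (response time, noise power gain, 1/robustness margin)
pop = np.array([[0.8, 2.1, 0.5],
                [0.5, 3.0, 0.4],
                [0.9, 2.2, 0.6],
                [0.5, 3.0, 0.7]])
print(pareto_front(pop))  # [ True  True False False]
```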
Abstract:
This paper provides evidence regarding the risk-adjusted performance of 19 UK real estate funds over the period 1991-2001. Using Jensen's alpha, the results are generally favourable towards the hypothesis that real estate fund managers showed superior risk-adjusted performance over this period. However, using three widely known parametric statistical procedures to jointly test for timing and selection ability, the results are less conclusive. The paper then utilises the meta-analysis technique to further examine the regression results in an attempt to estimate the proportion of variation in results attributable to sampling error. The meta-analysis results reveal strong evidence, across all models, that the variation in findings is real and cannot be attributed to sampling error. Thus, the meta-analysis results provide strong evidence that on average the sample of real estate funds analysed in this study delivered significant risk-adjusted performance over this period. The meta-analysis for the three timing and selection models strongly indicates that this outperformance of the benchmark resulted from superior selection ability, while the evidence for the ability of real estate fund managers to time the market is at best weak. Thus, although real estate fund managers are unable to outperform a passive buy-and-hold strategy through timing, they are able to improve their risk-adjusted performance through selection ability.
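For reference, Jensen's alpha is simply the intercept of a regression of the fund's excess returns on the benchmark's excess returns. A minimal sketch with made-up return series (not the funds analysed in the paper) is shown below.

```python
import numpy as np

def jensens_alpha(fund_returns, benchmark_returns, risk_free):
    """Jensen's alpha: intercept of excess fund returns regressed on excess benchmark returns."""
    rf = np.asarray(risk_free, dtype=float)
    y = np.asarray(fund_returns, dtype=float) - rf        # fund excess returns
    x = np.asarray(benchmark_returns, dtype=float) - rf   # benchmark excess returns
    X = np.column_stack([np.ones_like(x), x])             # add intercept column
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha, beta

# Hypothetical quarterly fund, benchmark and risk-free returns
fund = [0.031, 0.012, -0.004, 0.025, 0.018]
bench = [0.022, 0.010, -0.010, 0.020, 0.015]
alpha, beta = jensens_alpha(fund, bench, [0.005] * 5)
print(f"alpha = {alpha:.4f}, beta = {beta:.2f}")
```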
Abstract:
Some points of the paper by N.K. Nichols (see ibid., vol. AC-31, pp. 643-5, 1986), concerning the robust pole assignment of linear multi-input systems, are clarified. It is stressed that minimization of the condition number of the closed-loop eigenvector matrix does not necessarily lead to robustness of the pole assignment. It is shown why the computational method, which Nichols claims is robust, is in fact numerically unstable with respect to the determination of the gain matrix. In reply, Nichols presents arguments to support the choice of the conditioning of the closed-loop poles as a measure of robustness and to show that the methods of J. Kautsky, N.K. Nichols and P. Van Dooren (1985) are stable in the sense that they produce accurate solutions to well-conditioned problems.
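The quantities at issue can be inspected in a few lines: SciPy's place_poles offers a Kautsky-Nichols-Van Dooren style placement ("KNV0"), and the conditioning of the closed-loop eigenvector matrix can be computed directly. The small system below is an arbitrary illustration, not one from the correspondence.

```python
import numpy as np
from scipy.signal import place_poles

# A small, arbitrary two-input system (not from the correspondence)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
desired = np.array([-2.0, -4.0, -6.0])

res = place_poles(A, B, desired, method="KNV0")   # Kautsky-Nichols-Van Dooren variant
K = res.gain_matrix                               # state-feedback gain
Acl = A - B @ K                                   # closed-loop matrix
eigvals, V = np.linalg.eig(Acl)                   # closed-loop spectrum and eigenvectors

# cond(V) is the robustness measure debated in the exchange: it bounds how far the
# assigned poles can move under perturbations, but a modest cond(V) alone does not
# guarantee that the computed gain matrix K itself has been determined accurately.
print("placed poles:", np.sort(eigvals.real))
print("cond(V) =", np.linalg.cond(V))
```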
Abstract:
A number of computationally reliable direct methods for pole assignment by feedback have recently been developed. These direct procedures do not, however, necessarily produce robust solutions to the problem, in the sense of solutions whose assigned poles are insensitive to perturbations in the closed-loop system. This difficulty is illustrated here with results from a recent algorithm presented in this TRANSACTIONS, and its causes are examined. A measure of robustness is described, and techniques for testing and improving robustness are indicated.
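A standard robustness measure in this setting, and plausibly the one meant here, is the condition number of each assigned eigenvalue: with x_j and y_j the right and left eigenvectors of the closed-loop matrix, the sensitivity of the assigned pole λ_j to perturbations is governed by

```latex
c_j \;=\; \frac{\lVert y_j \rVert_2 \,\lVert x_j \rVert_2}{\lvert\, y_j^{H} x_j \,\rvert},
\qquad j = 1,\dots,n,
```

and a robust assignment chooses the feedback so that these condition numbers (equivalently, the conditioning of the eigenvector matrix) are as small as possible.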
Abstract:
The solution of the pole assignment problem by feedback in singular systems is parameterized, and conditions are given which guarantee the regularity and maximal degree of the closed-loop pencil. A robustness measure is defined, and numerical procedures are described for selecting the free parameters in the feedback to give optimal robustness.
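For orientation, in a singular (descriptor) system E x' = A x + B u with state feedback u = F x, the closed-loop pencil referred to is

```latex
P(\lambda) \;=\; \lambda E - (A + BF),
\qquad \det P(\lambda) \not\equiv 0 \quad (\text{regularity}),
```

and the "maximal degree" condition is typically the requirement that the degree of det P(λ) equal the rank of E, which rules out impulsive closed-loop modes.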
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states x ∈ X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either some external measured magnetic field or some internal current. However, we cannot measure the current directly, since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable as the number of measurement points becomes large.
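For concreteness, a minimal two-class Fisher linear discriminant is sketched below; the ridge term added to the within-class scatter matrix is the kind of regularisation the abstract argues becomes necessary when the features come from a smoothing (compact) operator. The synthetic data and parameter values are illustrative only, not the fuel-cell measurements of the paper.

```python
import numpy as np

def fisher_discriminant(X0, X1, reg=1e-3):
    """Two-class Fisher linear discriminant with a ridge-regularised scatter matrix.

    X0, X1 : arrays of shape (n_samples, n_features) for the two classes.
    reg    : Tikhonov-style term added to the within-class scatter; without it the
             direction w becomes unstable as the number of measurement points grows.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sw_reg = Sw + reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw_reg, m1 - m0)       # discriminant direction
    threshold = w @ (m0 + m1) / 2.0            # midpoint decision threshold
    return w, threshold

# Tiny synthetic example (stand-in for magnetic-field or current features)
rng = np.random.default_rng(0)
good = rng.normal([1.0, 0.0], 0.3, size=(50, 2))
bad = rng.normal([0.0, 1.0], 0.3, size=(50, 2))
w, t = fisher_discriminant(good, bad)
print("fraction classified as 'bad':", (good @ w > t).mean(), (bad @ w > t).mean())
```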
Abstract:
Purpose: To review perceived emotional well-being in older people with visual impairment and the perceived factors that inhibit or facilitate psychosocial adjustment to vision loss. Method: The databases MEDLINE, EMBASE, PsycINFO and CINAHL were searched for studies published from January 1980 to December 2010 that recruited older people with irreversible vision loss and used qualitative methods for both data collection and analysis. The results sections of the papers were synthesised using a thematic-style analysis to identify the emergent and dominant themes. Results: Seventeen qualitative papers were included in the review, and five main themes emerged from the synthesis: 1) the trauma of an ophthalmic diagnosis, 2) the impact of vision loss on daily life, 3) the negative impact of visual impairment on psychosocial well-being, 4) factors that inhibit social well-being, and 5) factors that facilitate psychological well-being. We found the response shift model useful for explaining our synthesis. Conclusions: Acquired visual impairment can have a significant impact on older people's well-being and makes psychosocial adjustment to the condition a major challenge. Acceptance of the condition, a positive attitude, and social support from family, friends and peers who have successfully adjusted to the condition all facilitate successful psychosocial adjustment to vision loss.
Abstract:
This paper provides a comprehensive quantitative review of high quality randomized controlled trials of psychological therapies for anxiety disorders in children and young people. Using a systematic search for randomized controlled trials which included a control condition and reported data suitable for meta-analysis, 55 studies were included. Eligible studies were rated for methodological quality and outcome data were extracted and analyzed using standard methods. Trial quality was variable, many studies were underpowered and adverse effects were rarely assessed; however, quality ratings were higher for more recently published studies. Most trials evaluated cognitive behavior therapy or behavior therapy and most recruited both children and adolescents. Psychological therapy for anxiety in children and young people was moderately effective overall, but effect sizes were small to medium when psychological therapy was compared to an active control condition. The effect size for non-CBT interventions was not significant. Parental involvement in therapy was not associated with differential effectiveness. Treatment targeted at specific anxiety disorders, individual psychotherapy, and psychotherapy with older children and adolescents had effect sizes which were larger than effect sizes for treatments targeting a range of anxiety disorders, group psychotherapy, and psychotherapy with younger children. Few studies included an effective follow-up. Future studies should follow CONSORT reporting standards, be adequately powered, and assess follow-up. Research trials are unlikely to address all important clinical questions around treatment delivery. Thus, careful assessment and formulation will remain an essential part of successful psychological treatment for anxiety in children and young people.
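As a pointer to the kind of effect-size calculation underlying such a review, the snippet below computes a standardized mean difference (Hedges' g) between a treatment arm and a control arm from summary statistics; the numbers are placeholders, not data from the included trials.

```python
import numpy as np

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference with the small-sample (Hedges) correction."""
    # pooled standard deviation of the two arms
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled          # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)  # small-sample correction factor
    return j * d

# Placeholder anxiety-score reduction: treatment vs. active control
print(round(hedges_g(12.0, 6.0, 40, 9.0, 6.5, 38), 2))
```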
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
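For readers unfamiliar with the Metropolis branch of the comparison, the sketch below shows the bare algorithm: a random-walk proposal on the parameter vector is accepted or rejected according to the likelihood of the modelled output against noisy observations. The toy exponential model and noise level are stand-ins, not the REFLEX C model.

```python
import numpy as np

def metropolis(log_likelihood, theta0, step, n_samples=5000, rng=None):
    """Random-walk Metropolis sampler for a parameter vector theta."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    ll = log_likelihood(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        ll_prop = log_likelihood(proposal)
        if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance rule
            theta, ll = proposal, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy "model-data fusion": fit two parameters of a stand-in flux model to noisy data
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
obs = 2.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.1, t.size)   # synthetic observations

def log_lik(theta):
    pred = theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum((obs - pred) ** 2) / 0.1**2

chain = metropolis(log_lik, theta0=[1.0, 1.0], step=0.05)
print(chain[2500:].mean(axis=0))   # posterior means near the true values (2.0, 1.5)
```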
Abstract:
Providing homeowners with real-time feedback on their electricity consumption through a dedicated display device has been shown to reduce consumption by approximately 6-10%. However, recent advances in smart grid technology have enabled larger sample sizes and more representative sample selection and recruitment methods for display trials. Analysing these factors using data from current studies, this paper argues that a realistic, large-scale conservation effect from feedback is in the range of 3-5%. Subsequent analysis shows that providing real-time feedback may not be a cost-effective strategy for reducing carbon emissions in Australia, but that it may deliver additional benefits such as customer retention and peak-load shifting.
Abstract:
Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and for quality control in manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities in data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we restrict our attention to particular linear systems in which the instability can be analysed explicitly. We investigate three-dimensional variational assimilation (3D-Var) and four-dimensional variational assimilation (4D-Var). A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and do occur, including an example from dynamic magnetic tomography.
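For reference, the 3D-Var analysis referred to minimises a cost function of the standard form below, where x_b is the background state, B and R the background and observation error covariances, and H the observation operator whose ill-posed inversion is the source of the instability studied; 4D-Var adds a sum of such observation terms over a time window with a model propagator.

```latex
J(x) \;=\; \tfrac{1}{2}\,(x - x_b)^{\mathsf T} B^{-1} (x - x_b)
        \;+\; \tfrac{1}{2}\,(y - Hx)^{\mathsf T} R^{-1} (y - Hx).
```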