92 results for multi-objective models
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search on small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II on larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and computes good approximations of the Pareto sets for the larger instances tested. The fronts calculated by KES are shown to be superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and is fast owing to its low complexity.
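KES itself is not specified in this listing. As a minimal sketch of the validation baseline the abstract mentions, the Python below enumerates all spanning trees of a small graph (feasible only at roughly 4-10 nodes) and keeps the non-dominated set under two edge costs; the example graph and all names are illustrative, not the authors' code.

```python
from itertools import combinations

def is_spanning_tree(n, tree_edges):
    """Union-find check that n-1 chosen edges connect all n nodes acyclically."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _, _ in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # edge closes a cycle
        parent[ru] = rv
    return True

def dominates(a, b):
    """a Pareto-dominates b: no worse in either objective, not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def exhaustive_pareto_front(n, edges):
    """Enumerate every spanning tree and return the non-dominated
    (cost1, cost2) points -- the 'true Pareto front' for small instances."""
    points = []
    for cand in combinations(edges, n - 1):
        if is_spanning_tree(n, cand):
            points.append((sum(e[2] for e in cand), sum(e[3] for e in cand)))
    return sorted({p for p in points
                   if not any(dominates(q, p) for q in points)})

# Illustrative graph: edges are (u, v, cost1, cost2).
edges = [(0, 1, 1, 4), (0, 2, 3, 1), (1, 2, 2, 2), (1, 3, 4, 3), (2, 3, 1, 5)]
print(exhaustive_pareto_front(4, edges))   # [(4, 11), (5, 10), (6, 8), (9, 6)]
```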
Abstract:
Whilst radial basis function (RBF) equalizers have been employed to combat the linear and nonlinear distortions in modern communication systems, most of them do not take into account the equalizer's generalization capability. In this paper, it is first proposed that the model's generalization capability can be improved by treating the modelling problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets. Then, as a modelling application, a new RBF equalizer learning scheme is introduced based on directional evolutionary MOO (EMOO). Directional EMOO improves the computational efficiency of conventional EMOO, which has been widely applied to MOO problems, by explicitly making use of directional information. Computer simulation demonstrates that the new scheme can be used to derive RBF equalizers with good performance not only in explaining the training samples but also in predicting unseen samples.
Abstract:
In this paper, a new equalizer learning scheme is introduced based on directional evolutionary multi-objective optimization (EMOO). Whilst nonlinear channel equalizers such as radial basis function (RBF) equalizers have been widely studied to combat the linear and nonlinear distortions in modern communication systems, most of them do not take into account the equalizers' generalization capabilities. In this paper, equalizers are designed with the aim of improving their generalization capabilities. It is proposed that this can be achieved by treating the equalizer design problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets, and then deriving equalizers that recover the signals well for all the training sets. Conventional EMOO, which is widely applied to MOO problems, suffers from disadvantages such as slow convergence. Directional EMOO improves the computational efficiency of conventional EMOO by explicitly making use of directional information. The new equalizer learning scheme based on directional EMOO is applied to RBF equalizer design. Computer simulation demonstrates that the new scheme can be used to derive RBF equalizers with good generalization capabilities, i.e., good performance in predicting unseen samples.
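The directional EMOO update rule is not reproduced in this listing; the sketch below shows only, under assumed forms, how the multi-objective formulation could be set up: a Gaussian RBF equalizer scored by one mean-squared-error objective per training set, the vector an EMOO would minimize.

```python
import numpy as np

def rbf_output(x, centres, widths, weights):
    """Output of a Gaussian RBF equalizer for one input vector x."""
    phi = np.exp(-np.sum((x - centres) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights @ phi

def mse_objectives(params, training_sets):
    """One MSE objective per training set; minimizing the whole vector
    (rather than their sum) favours equalizers that recover the signal
    well on every set, i.e. that generalize."""
    centres, widths, weights = params
    return [np.mean([(d - rbf_output(x, centres, widths, weights)) ** 2
                     for x, d in ts])
            for ts in training_sets]
```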
Abstract:
The foundation construction process is a key determinant of success in construction engineering. Among the many deep excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the sequencing of diaphragm wall unit construction activities is established phase by phase using heuristics. However, this creates conflicts between the final phase of the works and the unit construction, and affects the planned construction time. To avoid this situation, we apply management science to diaphragm wall unit construction and formulate the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model is multi-objective and combinatorially explosive (the problem is NP-complete), a 2-type Self-Learning Neural Network (SLNN) is proposed to solve the sequencing problem for N = 12, 24 and 36 diaphragm wall units. To assess the reliability of the results, the SLNN is compared with a random search method. The SLNN is found to be superior to random search in both solution quality and solving efficiency.
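The SLNN itself is not described in this listing, but the random-search baseline it is compared against is straightforward to sketch; `cost_fn` below stands in for a hypothetical scalarization of the study's sequencing objectives.

```python
import random

def random_search(n_units, cost_fn, iterations=10_000, seed=0):
    """Baseline comparator: sample random construction sequences
    (permutations of wall units) and keep the best under cost_fn."""
    rng = random.Random(seed)
    units = list(range(n_units))
    best_seq, best_cost = None, float("inf")
    for _ in range(iterations):
        rng.shuffle(units)
        c = cost_fn(tuple(units))
        if c < best_cost:
            best_seq, best_cost = tuple(units), c
    return best_seq, best_cost
```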
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the intelligent buildings literature. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings and have led to the use of models offering more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising way to accommodate these complications, as intelligent buildings now need integrated technologies that involve solving complex, multi-objective and integrated decision problems.
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained using a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, although all models exhibit periods of both good and poor fit between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models, and increased complexity was justifiable for modelling river-system hydrochemistry.
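GLUE is a standard procedure, so the 100,000-run workflow the abstract describes can be sketched compactly; the uniform priors, Nash-Sutcliffe likelihood and behavioural threshold below are common choices, not necessarily the study's.

```python
import numpy as np

def glue_bounds(model, priors, observed, n_sims=100_000, threshold=0.3, seed=1):
    """GLUE in outline: Monte Carlo sample parameter sets from uniform priors,
    score each run with an informal likelihood (here Nash-Sutcliffe efficiency),
    keep 'behavioural' runs above the threshold, and report 5%/95% bounds."""
    rng = np.random.default_rng(seed)
    behavioural = []
    for _ in range(n_sims):
        theta = {k: rng.uniform(lo, hi) for k, (lo, hi) in priors.items()}
        sim = model(theta)                                  # simulated series
        nse = 1.0 - np.var(sim - observed) / np.var(observed)
        if nse > threshold:                                 # behavioural run
            behavioural.append(sim)
    runs = np.asarray(behavioural)
    return np.percentile(runs, 5, axis=0), np.percentile(runs, 95, axis=0)
```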
Abstract:
A new model, RothPC-1, is described for the turnover of organic C in the top metre of soil. RothPC-1 is a version of RothC-26.3, an earlier model for the turnover of C in topsoils. In RothPC-1 two extra parameters are used to model turnover in the top metre of soil: one, p, which moves organic C down the profile by an advective process, and the other, s, which slows decomposition with depth. RothPC-1 is parameterized and tested using measurements (described in Part 1, this issue) of total organic C and radiocarbon on soil profiles from the Rothamsted long-term field experiments, collected over a period of more than 100 years. RothPC-1 gives fits to measurements of organic C and radiocarbon in the 0-23, 23-46, 46-69 and 69-92 cm layers of soil that are almost all within (or close to) measurement error in two areas of regenerating woodland (Geescroft and Broadbalk Wildernesses) and an area of cultivated land from the Broadbalk Continuous Wheat Experiment. The fits to old grassland (the Park Grass Experiment) are less close. Two other sites that provide the requisite pre- and post-bomb data are also fitted: a prairie Chernozem from Russia and an annual grassland from California. RothPC-1 gives a close fit to measurements of organic C and radiocarbon down the Chernozem profile, provided that allowance is made for soil age; with the annual grassland the fit is acceptable in the upper part of the profile, but not in the clay-rich Bt horizon below. Calculations suggest that treating the top metre of soil as a homogeneous unit will greatly overestimate the effects of global warming in accelerating the decomposition of soil C, and hence the enhanced release of CO2 from soil organic matter; more realistic estimates will be obtained from multi-layer models such as RothPC-1.
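The exact RothPC-1 equations are not reproduced in this abstract; the sketch below only shows, under assumed functional forms, how the two extra parameters could act on a layered profile: s slows decomposition with depth and p advects organic C downwards.

```python
import numpy as np

def step_profile(carbon, inputs, k_top, p, s, dt=1.0):
    """One annual update of organic C per soil layer (topmost first).
    Assumed forms: decomposition rate shrinks by a factor (1 - s) per
    layer of depth, and a fraction p of each layer's C moves into the
    layer below (C leaving the bottom layer is ignored here)."""
    n = len(carbon)
    new = carbon.astype(float).copy()
    for i in range(n):
        k_i = k_top * (1.0 - s) ** i              # decay slows with depth
        new[i] -= carbon[i] * (1.0 - np.exp(-k_i * dt))
    moved = p * new[:-1]                          # advection down the profile
    new[:-1] -= moved
    new[1:] += moved
    new[0] += inputs * dt                         # plant C enters the top layer
    return new
```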
Abstract:
Obstacles considerably influence boundary layer processes, and their influences have been included in mesoscale models (MeM) for a long time. Methods used to parameterise obstacle effects in a MeM are summarised in this paper, using results of the mesoscale model METRAS as examples. Besides the parameterisation of obstacle influences, it is also possible to use a joint modelling approach to describe obstacle-induced and mesoscale changes. Three different methods may be used for joint modelling. The first is a time-slice approach, where steady basic-state profiles are used in an obstacle-resolving microscale model (MiM, example model MITRAS) and diurnal cycles are derived by joining steady-state MITRAS results. The second is one-way nesting, where the MeM results are used to initialise the MiM and to drive the time-dependent boundary values of the MiM. The third is to apply multi-scale models or two-way nesting approaches, which include feedbacks from the MiM to the MeM. The advantages and disadvantages of the different approaches and the remaining problems with joint Reynolds-averaged Navier–Stokes modelling approaches are summarised in the paper.
Abstract:
This study examines the relation between corporate social performance and stock returns in the UK. We closely evaluate the interactions between social and financial performance with a set of disaggregated social performance indicators for environment, employment, and community activities instead of using an aggregate measure. While scores on a composite social performance indicator are negatively related to stock returns, we find the poor financial reward offered by such firms is attributable to their good social performance on the environment and, to a lesser extent, the community aspects. Considerable abnormal returns are available from holding a portfolio of the socially least desirable stocks. These relationships between social and financial performance can be rationalized by multi-factor models for explaining the cross-sectional variation in returns, but not by industry effects.
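As a hedged illustration of the multi-factor rationalization the abstract refers to (not the authors' specification), a time-series OLS regression of portfolio excess returns on factor returns: a near-zero intercept (alpha) means the factors account for the portfolio's performance. Data and factor choices are hypothetical.

```python
import numpy as np

def factor_alpha(excess_returns, factors):
    """Regress portfolio excess returns on factor returns (OLS).
    Returns (alpha, betas); alpha ~ 0 means the factor model
    'rationalizes' the portfolio's abnormal return."""
    X = np.column_stack([np.ones(len(factors)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coef[0], coef[1:]
```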
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop-shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization of the controller weights (MOGA). A clonal selection algorithm is used to provide a further directed search of the GA towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
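The LSDP/LMI objectives and the clonal selection operator are not given in this listing; the skeleton below shows only the generic MOGA loop with a non-dominated archive that such a design procedure could sit on. All function arguments are placeholders.

```python
import random

def moga(evaluate, init, mutate, pop_size=50, generations=200, seed=2):
    """Skeleton multi-objective GA: evolve controller weight vectors and
    keep an archive of non-dominated (objectives, solution) pairs."""
    rng = random.Random(seed)

    def dominates(a, b):
        """a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and a != b

    pop = [init(rng) for _ in range(pop_size)]
    archive = []                                  # (objectives, solution) pairs
    for _ in range(generations):
        scored = [(evaluate(w), w) for w in pop]
        for f, w in scored:
            if not any(dominates(g, f) for g, _ in archive):
                archive = [(g, v) for g, v in archive if not dominates(f, g)]
                archive.append((f, w))
        # naive scalarized binary tournament; a full MOGA ranks by dominance
        parents = [min(rng.sample(scored, 2), key=lambda fw: sum(fw[0]))[1]
                   for _ in range(pop_size)]
        pop = [mutate(p, rng) for p in parents]
    return archive
```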
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal flavor generation and looks at the challenges involved in predicting flavor.
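As a toy instance of the simple kinetic models the abstract says the field started from (not a scheme from the paper), a two-stage Maillard pathway stepped with explicit Euler; multi-response fitting would estimate the rate constants against measurements of several species simultaneously.

```python
def maillard_step(state, k1, k2, dt):
    """One Euler step of a toy scheme:
        sugar + amino -> intermediate -> aroma compound,
    with second-order initiation (rate k1) and first-order
    conversion to the aroma compound (rate k2)."""
    sugar, amino, inter, aroma = state
    r1 = k1 * sugar * amino
    r2 = k2 * inter
    return (sugar - r1 * dt,
            amino - r1 * dt,
            inter + (r1 - r2) * dt,
            aroma + r2 * dt)
```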
Abstract:
We have extensively evaluated the response of the cloud-base drizzle rate (Rcb; mm day⁻¹) in warm clouds to the liquid water path (LWP; g m⁻²) and to the cloud condensation nuclei (CCN) number concentration (NCCN; cm⁻³), an aerosol proxy. This evaluation is based on a 19-month-long dataset of Doppler radar, lidar, microwave radiometer and aerosol observations from the Atmospheric Radiation Measurement (ARM) Mobile Facility deployments at the Azores and in Germany. Assuming 0.55% supersaturation to calculate NCCN, we found a power-law relationship indicating that Rcb decreases by a factor of 2–3 as NCCN increases from 200 to 1000 cm⁻³ at fixed LWP. Additionally, the precipitation susceptibility to NCCN ranges between 0.5 and 0.9, in agreement with values from simulations and aircraft measurements. Surprisingly, the susceptibility of the probability of precipitation from our analysis is much higher than that from CloudSat estimates, but agrees well with simulations from a multi-scale high-resolution aerosol-climate model. Although scale issues are not completely resolved in the intercomparisons, our results are encouraging, suggesting that it is possible for multi-scale models to accurately simulate the response of LWP to aerosol perturbations.
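The fitted power-law expression itself is omitted in this listing, but the standard way to estimate such a relationship, and the precipitation susceptibility S = -dln(Rcb)/dln(NCCN) at fixed LWP, is a log-log least-squares fit; the sketch below assumes paired observations of the three quantities.

```python
import numpy as np

def power_law_fit(rcb, lwp, nccn):
    """Fit ln(Rcb) = a + b*ln(LWP) - c*ln(NCCN) by least squares;
    c estimates the precipitation susceptibility at fixed LWP."""
    X = np.column_stack([np.ones(len(rcb)), np.log(lwp), np.log(nccn)])
    coef, *_ = np.linalg.lstsq(X, np.log(rcb), rcond=None)
    a, b, minus_c = coef
    return a, b, -minus_c        # returned c is the susceptibility
```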
Abstract:
In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-atmosphere aerosol radiative forcing ranges from roughly −10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angle, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple scattering is more variable than that of aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than those from the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same as or slightly better than the results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple scattering. The results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.
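The inter-model diversity quoted above is a relative standard deviation, which is trivial to compute from an ensemble of per-model forcings; the sketch below is a generic illustration, not the study's analysis code.

```python
import numpy as np

def relative_stddev(forcings):
    """Inter-model diversity: sample standard deviation of the models'
    aerosol radiative forcings divided by |ensemble mean|."""
    f = np.asarray(forcings, dtype=float)
    return f.std(ddof=1) / abs(f.mean())
```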
Abstract:
Projections of stratospheric ozone from a suite of chemistry-climate models (CCMs) have been analyzed. In addition to a reference simulation where anthropogenic halogenated ozone depleting substances (ODSs) and greenhouse gases (GHGs) vary with time, sensitivity simulations with either ODS or GHG concentrations fixed at 1960 levels were performed to disaggregate the drivers of projected ozone changes. These simulations were also used to assess two distinct milestones: ozone returning to historical values (ozone return dates) and ozone no longer being influenced by ODSs (full ozone recovery). The date of ozone returning to historical values does not indicate complete recovery from ODSs in most cases, because GHG-induced changes accelerate or decelerate ozone changes in many regions. In the upper stratosphere, where CO2-induced stratospheric cooling increases ozone, full ozone recovery is not likely to have occurred by 2100, even though ozone returns to its 1980 or even 1960 levels well before then (~2025 and ~2040, respectively). In contrast, in the tropical lower stratosphere ozone decreases continuously from 1960 to 2100 due to projected increases in tropical upwelling, while by around 2040 it is already very likely that full recovery from the effects of ODSs has occurred, although ODS concentrations are still elevated by this date. In the midlatitude lower stratosphere the evolution differs from that in the tropics: rather than a steady decrease, ozone first decreases from 1960 to 2000 and then increases steadily through the 21st century. Ozone in the midlatitude lower stratosphere returns to 1980 levels by ~2045 in the Northern Hemisphere (NH) and by ~2055 in the Southern Hemisphere (SH), and full ozone recovery is likely reached by 2100 in both hemispheres. Overall, in all regions except the tropical lower stratosphere, full ozone recovery from ODSs occurs significantly later than the return of total column ozone to its 1980 level. The latest return of total column ozone is projected to occur over Antarctica (~2045–2060), whereas it is not likely that full ozone recovery is reached by the end of the 21st century in this region. Arctic total column ozone is projected to return to 1980 levels well before polar stratospheric halogen loading does so (~2025–2030 for total column ozone, cf. 2050–2070 for Cly+60×Bry), and it is likely that full recovery of total column ozone from the effects of ODSs has occurred by ~2035. In contrast to the Antarctic, by 2100 Arctic total column ozone is projected to be above 1960 levels, but not in the fixed-GHG simulation, indicating that climate change plays a significant role.