886 results for "Optimal test set"
Abstract:
In this paper, we empirically investigate which structural characteristics help to predict the complexity of NK-landscape instances for estimation of distribution algorithms. To this end, we evolve instances that maximize the complexity faced by the estimation of distribution algorithm, measured in terms of its success rate, and, similarly, instances that minimize that complexity. We then identify network measures, computed from the structures of the NK-landscape instances, that show a statistically significant difference between the sets of easy and hard instances. The features identified are consistently significant across different values of N and K.
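As a rough illustration of the screening step described here, the sketch below applies a Mann-Whitney U test to one hypothetical network measure computed on easy and hard instance sets; the arrays, sample sizes, and significance level are assumptions, not values from the paper.

```python
# Sketch of the screening step described above: given one network measure
# (e.g., average clustering) computed on evolved easy and hard NK-landscape
# instances, test whether its distributions differ significantly.
# The arrays below are hypothetical stand-ins for measures computed from
# real instance structures.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
measure_easy = rng.normal(0.30, 0.05, size=50)  # measure on easy instances
measure_hard = rng.normal(0.42, 0.05, size=50)  # measure on hard instances

stat, p_value = mannwhitneyu(measure_easy, measure_hard, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.2e}")
if p_value < 0.05:
    print("Measure discriminates easy from hard instances at the 5% level.")
```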
Abstract:
This paper sets out to assess the workability of the regulation currently in force in the European anchovy fishery of Division VIII. Particular attention is paid to the importance of the institutional regime in the allocation of natural resources. The study uses a bio-economic approach and takes into account the fact that not only the European Union and the individual countries involved but also some of the resource users, or appropriators, intervene in its management. In order to compare the effectiveness of the rules which, at the various levels, have been set up to restrict exploitation of the resource, the anchovy fishery is simulated in two extreme situations: open access and sole ownership. The results obtained by effective management will then be contrasted with those obtained from the maximum-profit and zero-profit objectives associated with the two above-mentioned scenarios. Thus, if the real data come close to those derived from the sole ownership model, it will have to be acknowledged that the rules at present in force are optimal. If, on the other hand, the situation more closely approaches the results obtained from the open access model, we will endeavour in our conclusions to provide suggestions for economic policy measures that might improve the situation in the fishery.
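The two benchmark regimes can be illustrated with the classic Gordon-Schaefer bio-economic model, sketched below under assumed functional forms and parameter values; the paper's actual model and data for the anchovy fishery are not reproduced here.

```python
# Gordon-Schaefer sketch of the two benchmark regimes compared in the paper:
# open access (entry until profit is zero) vs. sole ownership (profit-
# maximizing effort). Functional forms and all parameter values are
# illustrative assumptions, not data from the anchovy fishery.
r, K = 0.8, 100.0          # intrinsic growth rate, carrying capacity (assumed)
q, p, c = 0.01, 5.0, 0.3   # catchability, price, unit cost of effort (assumed)

def equilibrium_stock(E):
    return K * (1.0 - q * E / r)          # stock at biological equilibrium

def profit(E):
    return p * q * E * equilibrium_stock(E) - c * E

E_open = (r / q) * (1.0 - c / (p * q * K))   # zero-profit (open access) effort
E_sole = E_open / 2.0                        # profit-maximizing effort
for name, E in [("open access", E_open), ("sole ownership", E_sole)]:
    print(f"{name:15s} effort={E:7.2f} stock={equilibrium_stock(E):6.2f} "
          f"profit={profit(E):7.2f}")
```

With linear costs and logistic growth, the sole-owner effort works out to exactly half the open-access effort, which is why the two regimes bracket the outcomes any intermediate management rule can achieve.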
Abstract:
The implementation of various types of marine protected areas is one of several management tools available for conserving representative examples of the biological diversity within marine ecosystems in general and National Marine Sanctuaries in particular. However, deciding where and how many sites to establish within a given area is frequently hampered by incomplete knowledge of the distribution of organisms and an incomplete understanding of the potential tradeoffs that would allow planners to address frequently competing interests in an objective manner. Fortunately, this is beginning to change. Recent studies on the continental shelf of the northeastern United States suggest that substrate and water mass characteristics are highly correlated with the composition of benthic communities and may therefore serve as proxies for the distribution of biological diversity. A detailed geo-referenced interpretative map of major sediment types within Stellwagen Bank National Marine Sanctuary (SBNMS) has recently been developed, and computer-aided decision support tools have reached new levels of sophistication. We demonstrate the use of simulated annealing, a type of mathematical optimization, to identify suites of potential conservation sites within SBNMS that equally represent 1) all major sediment types and 2) derived habitat types based on both sediment and depth in the smallest amount of space. The Sanctuary was divided into 3,610 sampling units of 0.5 min² each. Simulations incorporated constraints on the physical dispersion of sampling units to varying degrees such that solutions included between one and four site clusters. Target representation goals were set at 5, 10, 15, 20, and 25 percent of each sediment type, and 10 and 20 percent of each habitat type. Simulations consisted of 100 runs, from which we identified the best solution (i.e., smallest total area) and four near-optimal alternates. We also plotted the total number of instances in which each sampling unit occurred in the solution sets of the 100 runs as a means of gauging the variety of spatial configurations available under each scenario. Results suggested that the total combined area needed to represent each of the sediment types in equal proportions was equal to the percent representation level sought. Slightly larger areas were required to represent all habitat types at the same representation levels. Total boundary length increased in direct proportion to the number of sites at all levels of representation for simulations involving sediment and habitat classes, but increased more rapidly with the number of sites at higher representation levels. There were a large number of alternate spatial configurations at all representation levels, although generally fewer among one- and two-site versus three- and four-site solutions. These differences were less pronounced among simulations targeting habitat representation, suggesting that a similar degree of flexibility is inherent in the spatial arrangement of potential protected area systems containing one versus several sites for similar levels of habitat representation. We attribute these results to the distribution of sediment and depth zones within the Sanctuary, and to the fact that even levels of representation were sought in each scenario.
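A minimal sketch of the simulated-annealing site-selection idea follows, assuming a synthetic grid, synthetic sediment labels, an illustrative 10% representation target, and an arbitrary penalty weight in place of the SBNMS data.

```python
# Minimal simulated-annealing sketch of the site-selection problem described
# above: choose a set of sampling units that covers a target fraction of each
# sediment type while minimizing total selected area. The grid, sediment
# labels, and weights are synthetic stand-ins for the SBNMS data.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_types = 400, 4
sediment = rng.integers(0, n_types, size=n_units)   # sediment type per unit
target = 0.10                                       # represent 10% of each type
type_totals = np.bincount(sediment, minlength=n_types)

def cost(sel):
    # area used + heavy penalty for each unit of unmet representation
    covered = np.bincount(sediment[sel], minlength=n_types)
    shortfall = np.maximum(target * type_totals - covered, 0).sum()
    return sel.sum() + 50.0 * shortfall

sel = rng.random(n_units) < 0.2      # random initial selection
temp = 10.0
for step in range(20000):
    flip = rng.integers(n_units)     # propose toggling one sampling unit
    cand = sel.copy()
    cand[flip] = ~cand[flip]
    delta = cost(cand) - cost(sel)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        sel = cand                   # accept improving or, early on, worsening moves
    temp *= 0.9997                   # geometric cooling schedule
print(f"selected {sel.sum()} of {n_units} units, cost {cost(sel):.1f}")
```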
Abstract:
A set of downloadable resources containing information on course structures
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
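The per-harmonic "noise filter" concept can be sketched as a Wiener-style gain that scales each Fourier coefficient according to its signal-to-noise ratio; the accelerogram, noise level, and spectra below are synthetic stand-ins for measured records.

```python
# Sketch of the per-harmonic "noise filter" idea: attenuate each Fourier
# coefficient by a Wiener-style gain SNR/(1+SNR). The accelerogram and the
# noise-spectrum estimate below are synthetic placeholders.
import numpy as np

fs, T = 50.0, 40.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.05 * t)  # synthetic motion
record = signal + 0.2 * np.random.default_rng(2).standard_normal(t.size)

spec = np.fft.rfft(record)
noise_power = 0.2**2 * t.size            # assumed flat digitization-noise power
snr = np.maximum(np.abs(spec) ** 2 / noise_power - 1.0, 0.0)
gain = snr / (1.0 + snr)                 # ~1 where signal dominates, ~0 in noise
cleaned = np.fft.irfft(gain * spec, n=t.size)
print(f"rms error before: {np.std(record - signal):.3f}, "
      f"after: {np.std(cleaned - signal):.3f}")
```

As the abstract notes, a gain of this form leaves low-frequency drift largely uncorrected, which is what motivates the complementary spectral substitution method.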
Abstract:
This dissertation reformulates and streamlines the core tools of robustness analysis for linear time invariant systems using now-standard methods in convex optimization. In particular, robust performance analysis can be formulated as a primal convex optimization in the form of a semidefinite program using a semidefinite representation of a set of Gramians. The same approach with semidefinite programming duality is applied to develop a linear matrix inequality test for well-connectedness analysis, and many existing results such as the Kalman-Yakubovich-Popov lemma and various scaled small gain tests are derived in an elegant fashion. More importantly, unlike the classical approach, a decision variable in this novel optimization framework contains all inner products of signals in a system, and an algorithm for constructing an input and state pair of a system corresponding to the optimal solution of robustness optimization is presented based on this information. This insight may open up new research directions, and as one such example, this dissertation proposes a semidefinite programming relaxation of a cardinality constrained variant of the H∞ norm, which we term sparse H∞ analysis, where an adversarial disturbance can use only a limited number of channels. Finally, sparse H∞ analysis is applied to the linearized swing dynamics in order to detect potential vulnerable spots in power networks.
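A minimal sketch of the kind of semidefinite test discussed here: computing the H∞ norm of a stable LTI system via the bounded-real (KYP-type) linear matrix inequality, solved with cvxpy. The system matrices are an arbitrary stable example, not one from the dissertation.

```python
# Sketch of an LMI-based robustness test in the spirit of the dissertation:
# compute the H-infinity norm of a stable continuous-time system by
# minimizing t = gamma^2 subject to the bounded-real lemma LMI.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # arbitrary stable example
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

P = cp.Variable((2, 2), symmetric=True)
t = cp.Variable()                          # t stands for gamma^2
lmi = cp.bmat([
    [A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
    [B.T @ P + D.T @ C,         D.T @ D - t * np.eye(1)],
])
prob = cp.Problem(cp.Minimize(t), [P >> 1e-8 * np.eye(2), lmi << 0])
prob.solve(solver=cp.SCS)
print(f"H-infinity norm ~= {np.sqrt(t.value):.4f}")
```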
Abstract:
H. J. Kushner has obtained the differential equation satisfied by the optimal feedback control law for a stochastic control system in which the plant dynamics and observations are perturbed by independent additive Gaussian white noise processes. However, the equation involves first and second functional derivatives and, except for a restricted set of systems, is too complex to solve with present techniques.
This investigation studies the optimal control law for the open-loop system and incorporates it in a suboptimal feedback control law. The performance of this suboptimal control law is at least as good as that of the optimal open-loop control function, and it satisfies a differential equation involving only the first functional derivative. Solving this equation is equivalent to solving two two-point boundary-value integro-partial differential equations. An approximate solution has advantages over the conventional approximate solution of Kushner's equation.
As a result of this study, well-known results of deterministic optimal control are deduced from the analysis of optimal open-loop control.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem, sketched below. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
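The separation step mentioned above is described only at a high level, so the sketch below shows the generic convex template such a formulation rests on: splitting a synthetic matrix into a low-rank component plus a residual via nuclear-norm regularization. It is not the thesis's exact program.

```python
# Generic nuclear-norm decomposition sketch: recover a low-rank component L
# (standing in for the high-order but low-rank global dynamics) from a
# measured matrix D = low-rank part + perturbation. Data are synthetic.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, true_rank = 30, 2
D_lowrank = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))
D = D_lowrank + 0.05 * rng.standard_normal((n, n))   # low-rank + perturbation

L = cp.Variable((n, n))
lam = 0.5   # trade-off weight (assumed)
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum_squares(D - L)))
prob.solve(solver=cp.SCS)
rank_est = np.sum(np.linalg.svd(L.value, compute_uv=False) > 0.1)
print(f"estimated rank of recovered component: {rank_est}")
```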
Abstract:
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models.
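The risk-sensitive objective can be sketched as a mean-variance trade-off over movement strategies; everything below (the one-step task, the signal-dependent noise model, and the weights) is an illustrative assumption rather than the paper's controller.

```python
# Sketch of the risk-sensitive objective described above: a controller that
# scores a movement strategy by mean(cost) + theta * var(cost), where
# theta > 0 is risk-averse and theta = 0 recovers the risk-neutral optimal
# controller. Task, dynamics, and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
target, w_effort = 1.0, 0.2
gains = np.linspace(0.0, 1.2, 121)

def cost_samples(gain, noise_sd, n=20000):
    # one corrective step toward the target; noise scales with the
    # size of the correction (signal-dependent noise)
    final = gain * target + gain * noise_sd * rng.standard_normal(n)
    return (final - target) ** 2 + w_effort * gain**2

def preferred_gain(theta, noise_sd):
    scores = [np.mean(c) + theta * np.var(c)
              for c in (cost_samples(g, noise_sd) for g in gains)]
    return gains[int(np.argmin(scores))]

for noise_sd in (0.3, 0.6, 0.9):
    g_neutral = preferred_gain(0.0, noise_sd)
    g_averse = preferred_gain(1.0, noise_sd)
    print(f"noise={noise_sd:.1f}: risk-neutral gain {g_neutral:.2f}, "
          f"risk-averse gain {g_averse:.2f}")
```

The risk-averse controller backs off to smaller corrections as the noise grows, mirroring the pessimistic strategy shift the paper reports in subjects.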
Abstract:
A new method for the optimal design of Functionally Graded Materials (FGM) is proposed in this paper. Instead of the widely used explicit functional models, a feature-tree-based procedural model is proposed to represent generic material heterogeneities. A procedural model of this sort allows more than one explicit function to be incorporated to describe versatile material gradations, and the material composition at a given location is no longer computed by simple evaluation of an analytic function but is obtained by executing customizable procedures. This enables generic and diverse types of material variations to be represented and, most importantly, described by a reasonably small number of design variables. The descriptive flexibility of the material heterogeneity formulation, together with the low dimensionality of the design vectors, facilitates the optimal design of functionally graded materials. Using the nature-inspired Particle Swarm Optimization (PSO) method, functionally graded materials with generic distributions can be efficiently optimized. We demonstrate, for the first time, that a PSO based optimizer outperforms classical mathematical programming based methods, such as active set and trust region algorithms, in the optimal design of functionally graded materials. The underlying reason for this performance boost is also elucidated with the help of benchmarked examples. © 2011 Elsevier Ltd. All rights reserved.
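A minimal global-best PSO sketch in the spirit of the optimizer used here, with a placeholder quadratic objective standing in for the FGM performance evaluation; the swarm coefficients and bounds are assumptions.

```python
# Minimal particle swarm optimization sketch: a global-best PSO minimizing
# an objective over a low-dimensional design vector (standing in for the
# parameters of a procedural FGM model).
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    # placeholder for the FGM performance evaluation (e.g., an FE analysis)
    return np.sum((x - 0.3) ** 2, axis=1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
pos = rng.uniform(0, 1, (n_particles, dim))   # design variables in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)        # keep designs within bounds
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print(f"best design {np.round(gbest, 3)}, objective {pbest_val.min():.2e}")
```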
Abstract:
The 'optimal' or 'best' design process may be the shortest or cheapest process, or the one that leads to a particularly desirable product, or to a reliable and maintainable product, or to a manufacturable product, or some combination of all of these. It is likely to satisfy the aspirations of the organisation to invest an appropriate amount of resource in the development of a specific new market opportunity, set in the context of longer-term business goals. This paper describes the progress made in over ten years of research on process modelling undertaken at the Cambridge Engineering Design Centre to identify an 'optimal' design process with which to develop an 'adequate' product.
Abstract:
This paper presents the development and the application of a multi-objective optimization framework for the design of two-dimensional multi-element high-lift airfoils. An innovative and efficient optimization algorithm, namely Multi-Objective Tabu Search (MOTS), has been selected as the core of the framework. The flow field around the multi-element configuration is simulated using the commercial computational fluid dynamics (CFD) suite ANSYS CFX. Element shapes and deployment settings have been considered as design variables in the optimization of the Garteur A310 airfoil, as presented here. A validation and verification process of the CFD simulation for the Garteur airfoil is performed using available wind tunnel data. Two design examples are presented in this study: a single-point optimization aiming at concurrently increasing the lift and drag performance of the test case at a fixed angle of attack, and a multi-point optimization. The latter aims at introducing operational robustness and off-design performance into the design process. Finally, the performance of the MOTS algorithm is assessed by comparison with the leading NSGA-II (Non-dominated Sorting Genetic Algorithm) optimization strategy. An equivalent framework developed by the authors within the industrial sponsor environment is used for the comparison. To eliminate CFD solver dependencies, three optimum solutions from the Pareto optimal set have been cross-validated. As a result of this study, MOTS has been demonstrated to be an efficient and effective algorithm for aerodynamic optimizations. Copyright © 2012 Tech Science Press.
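Both MOTS and NSGA-II ultimately report a Pareto optimal set; the helper below extracts the non-dominated designs from hypothetical (lift, drag) evaluations, assuming lift is maximized and drag is minimized. The candidate values are synthetic, not Garteur A310 results.

```python
# Extract the Pareto front from a set of evaluated designs: a design is
# dominated if some other design has lift >= its lift and drag <= its drag,
# with at least one strict inequality.
import numpy as np

def pareto_front(lift, drag):
    keep = []
    for i in range(len(lift)):
        dominated = np.any((lift >= lift[i]) & (drag <= drag[i]) &
                           ((lift > lift[i]) | (drag < drag[i])))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(6)
lift = rng.uniform(2.0, 4.0, 50)     # hypothetical lift coefficients
drag = rng.uniform(0.05, 0.20, 50)   # hypothetical drag coefficients
front = pareto_front(lift, drag)
print(f"{len(front)} non-dominated designs of {len(lift)} candidates")
```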
Abstract:
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size against one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
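The two model families compared here can be sketched as precision laws; the power-law form for the set-size-dependent model is one common parameterization in this literature, and all values below are assumptions rather than estimates from this study.

```python
# Sketch of the two model families compared above: encoding precision J
# (inverse variance) that is constant with set size N versus one that
# declines with N. The power-law form and parameter values are common
# modeling assumptions, not fits from this paper.
import numpy as np

J1 = 10.0      # precision for a single item (assumed)
alpha = 1.0    # decay exponent in the set-size-dependent model (assumed)

for N in (1, 2, 4, 8):
    J_const = J1                      # constant-precision model
    J_decay = J1 * N ** (-alpha)      # set-size-dependent model
    sd_const, sd_decay = 1 / np.sqrt(J_const), 1 / np.sqrt(J_decay)
    print(f"N={N}: encoding noise SD constant={sd_const:.2f}, "
          f"set-size-dependent={sd_decay:.2f}")
```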
Abstract:
Deciding whether a set of objects are the same or different is a cornerstone of perception and cognition. Surprisingly, no principled quantitative model of sameness judgment exists. We tested whether human sameness judgment under sensory noise can be modeled as a form of probabilistically optimal inference. An optimal observer would compare the reliability-weighted variance of the sensory measurements with a set-size-dependent criterion. We conducted two experiments, in which we varied set size and individual stimulus reliabilities. We found that the optimal-observer model accurately describes human behavior, outperforms plausible alternatives in a rigorous model comparison, and accounts for three key findings in the animal cognition literature. Our results provide a normative footing for the study of sameness judgment and indicate that the notion of perception as near-optimal inference extends to abstract relations.
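A schematic of the decision rule described here, with hypothetical measurements, reliabilities, and criterion function:

```python
# Schematic of the optimal decision rule described above: report "same" when
# the reliability-weighted variance of the sensory measurements falls below
# a set-size-dependent criterion. Measurements, reliabilities, and the
# criterion function are illustrative assumptions.
import numpy as np

def judge_same(measurements, sigmas, criterion):
    w = 1.0 / np.asarray(sigmas) ** 2            # reliability weights
    x = np.asarray(measurements)
    mean = np.sum(w * x) / np.sum(w)
    weighted_var = np.sum(w * (x - mean) ** 2) / np.sum(w)
    return weighted_var < criterion

# hypothetical criterion that grows with set size N
criterion = lambda N: 0.5 + 0.1 * N

same_set = [10.1, 9.9, 10.0, 10.2]               # nearly identical stimuli
diff_set = [8.0, 10.5, 12.0, 9.0]                # clearly different stimuli
for name, s in [("same", same_set), ("different", diff_set)]:
    verdict = judge_same(s, sigmas=[1.0] * 4, criterion=criterion(4))
    print(f"{name} set -> report same: {verdict}")
```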