868 results for Lagrangian bounds in optimization problems


Relevance:

100.00%

Publisher:

Abstract:

The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from the arbitrary parameters required for sigma point placement, potentially causing it to perform poorly on nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem from a model-based viewpoint. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can yield a significant increase in predictive performance over default settings of the UKF parameters, and over other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than the other methods. We call our method UKF-L. © 2011 Elsevier B.V.
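
As background to the placement issue, here is a minimal sketch of the textbook scaled unscented transform in Python (function names are illustrative). The parameters alpha, beta and kappa are exactly the hand-tuned quantities that UKF-L proposes to learn from data; this is the standard construction, not the paper's learned placement.

```python
import numpy as np

def sigma_points(m, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights. alpha, beta
    and kappa are the hand-tuned placement parameters discussed above."""
    n = len(m)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # scaled matrix square root
    pts = np.vstack([m, m + S.T, m - S.T])       # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, m, P, **kw):
    """Propagate a Gaussian (m, P) through a nonlinear function f."""
    pts, wm, wc = sigma_points(m, P, **kw)
    Y = np.array([f(x) for x in pts])
    mean = wm @ Y
    cov = (wc[:, None] * (Y - mean)).T @ (Y - mean)
    return mean, cov
```

With the default alpha near zero the points cluster tightly around the mean, which is one way the collapse mentioned above can arise.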

Relevance:

100.00%

Publisher:

Abstract:

The solution time of the online optimization problems inherent to Model Predictive Control (MPC) can become a critical limitation when working in embedded systems. One proposed approach to reducing the solution time is to split the optimization problem into a number of reduced-order problems, solve these reduced-order problems in parallel, and select the solution that minimises a global cost function. This approach is known as Parallel MPC. Its potential disturbance-rejection capabilities are introduced using a simulation example. The algorithm is implemented on a linearised model of a Boeing 747-200 under nominal flight conditions and with an induced wind disturbance. Under significant output disturbances, Parallel MPC provides a significant improvement in performance compared to Multiplexed MPC (MMPC) and Linear Quadratic Synchronous MPC (SMPC). © 2013 IEEE.
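
A minimal sketch of the splitting idea under simplifying assumptions (a single unconstrained quadratic cost and subspace-restricted reduced-order problems; all names are illustrative, not the paper's formulation): each candidate solves a cheap reduced problem, and the solution with the lowest global cost is applied.

```python
import numpy as np

def parallel_mpc_step(H, f, bases):
    """Each reduced-order problem restricts the stacked input sequence u to a
    low-dimensional subspace u = B z and minimises the global quadratic cost
    J(u) = 0.5 u'Hu + f'u over z; the candidate with the smallest global
    cost is selected. On an embedded target each candidate would run on its
    own core; here they are solved sequentially for clarity."""
    def solve_reduced(B):
        z = np.linalg.solve(B.T @ H @ B, -B.T @ f)   # reduced-order optimum
        return B @ z
    candidates = [solve_reduced(B) for B in bases]
    costs = [0.5 * u @ H @ u + f @ u for u in candidates]
    return candidates[int(np.argmin(costs))]
```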

Relevance:

100.00%

Publisher:

Abstract:

4.2 K photoluminescence (PL) and 77 K standard Hall-effect measurements were performed on In0.52Al0.48As/InxGa1-xAs metamorphic high-electron-mobility-transistor (HEMT) structures grown on GaAs substrates with different indium contents in the InxGa1-xAs well or different Si delta-doping concentrations. Electron concentrations were found to increase with the PL intensity ratio of the "forbidden" transition (second electron subband to first heavy-hole subband) to the sum of the "allowed" transition (first electron subband to first heavy-hole subband) and the forbidden transition. Electron mobilities decreased with increasing product of the average full width at half maximum of the allowed and forbidden transitions and the electron effective mass in the InxGa1-xAs quantum well. These results show that PL measurements are a good supplemental tool to Hall-effect measurements in optimizing the HEMT layer structure. (c) 2006 American Institute of Physics.
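
The two empirical indicators reported above reduce to simple arithmetic; a small helper (illustrative names, not from the paper) makes them explicit:

```python
def pl_indicators(i_allowed, i_forbidden, fwhm_allowed, fwhm_forbidden, m_eff):
    """Forbidden-transition intensity ratio (reported to rise with electron
    concentration) and average-FWHM times effective-mass product (reported
    to rise as electron mobility falls)."""
    ratio = i_forbidden / (i_allowed + i_forbidden)
    broadening = 0.5 * (fwhm_allowed + fwhm_forbidden) * m_eff
    return ratio, broadening
```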

Relevance:

100.00%

Publisher:

Abstract:

The conditional nonlinear optimal perturbation (CNOP), a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities in computational error and cost are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm, using a theoretical grassland ecosystem model and the classical Lorenz model as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms and that the computational cost of obtaining the CNOP is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP in large-scale optimization problems.
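
For concreteness, the CNOP problem can be phrased as maximising the nonlinear forecast error J(x) = ||M(u0 + x) - M(u0)|| over initial perturbations with ||x|| <= delta, where M is the model propagator and u0 the basic state. The sketch below uses plain projected gradient ascent with a finite-difference gradient; it illustrates only the problem setup, not the SQP/L-BFGS/SPG2 solvers compared in the study.

```python
import numpy as np

def cnop(M, u0, delta, steps=200, lr=0.05, h=1e-6):
    """Projected-gradient sketch of the CNOP problem: maximise
    J(x) = ||M(u0 + x) - M(u0)|| subject to ||x|| <= delta."""
    J = lambda x: np.linalg.norm(M(u0 + x) - M(u0))
    x = np.random.randn(len(u0))
    x *= delta / np.linalg.norm(x)          # start on the constraint ball
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(len(x)):             # forward-difference gradient
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (J(x + e) - J(x)) / h
        x = x + lr * g                      # ascent step
        r = np.linalg.norm(x)
        if r > delta:
            x *= delta / r                  # project back onto the ball
    return x
```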

Relevance:

100.00%

Publisher:

Abstract:

Many problems ultimately reduce to solving a combinatorial optimization problem, and the genetic algorithm (GA) is a powerful tool for this class of problems; in practice, however, GAs often suffer from slow convergence and closed competition. This paper proposes a greedy genetic algorithm in which a greedy selection strategy guides the search during initial population construction, crossover, and mutation, while an immigration operator introduces new genetic material into the population to overcome the closed-competition drawback. The greedy genetic algorithm avoids premature convergence and improves overall performance, and the search is highly efficient in its early stages. Simulation experiments on the travelling salesman problem (TSP) demonstrate the effectiveness of the algorithm: satisfactory results are obtained at a modest computational cost.
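
A minimal sketch of two of the highlighted ingredients on a TSP distance matrix: greedy (nearest-neighbour) seeding of the initial population and an immigration operator. The paper's greedy crossover and mutation are replaced here by a plain 2-opt-style mutation for brevity, and all names are illustrative.

```python
import random

def tour_len(tour, d):
    """Total length of a closed tour over distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def greedy_tour(d, start):
    """Greedy (nearest-neighbour) construction used to seed the population."""
    n = len(d)
    tour, left = [start], set(range(n)) - {start}
    while left:
        nxt = min(left, key=lambda j: d[tour[-1]][j])
        tour.append(nxt)
        left.remove(nxt)
    return tour

def mutate(tour):
    """2-opt-style segment reversal (stands in for the paper's greedy operators)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    t = tour[:]
    t[i:j] = reversed(t[i:j])
    return t

def greedy_ga_tsp(d, pop_size=30, gens=300, n_imm=3):
    n = len(d)
    pop = [greedy_tour(d, random.randrange(n)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_len(t, d))
        elite = pop[: pop_size // 2]
        children = [mutate(random.choice(elite))
                    for _ in range(pop_size - len(elite) - n_imm)]
        # immigration: fresh random tours inject new genetic material,
        # countering the closed-competition problem described above
        newcomers = [random.sample(range(n), n) for _ in range(n_imm)]
        pop = elite + children + newcomers
    return min(pop, key=lambda t: tour_len(t, d))
```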

Relevance:

100.00%

Publisher:

Abstract:

The primary ways to understand the inner properties of the Earth and the distribution of mineral resources are surface geological surveys and the inversion and interpretation of geophysical/geochemical data. The purpose of seismic inversion is to extract, from seismic waves, information about subsurface structural geometry and the distribution of material properties, which is used for resource prospecting and exploitation and for studying the Earth's inner structure and dynamic processes. Although the study of seismic parameter inversion has achieved a great deal since the 1950s, problems persist when the methods are applied to real data, owing to nonlinearity and ill-posedness. Most methods for inverting geophysical parameters are iterative and depend heavily on the initial model and constraint conditions, and it is difficult to obtain a reliable result once factors such as environmental and instrument noise in seismic wave excitation, propagation, and acquisition are taken into account. Seismic inversion on real data is a typical nonlinear problem whose objective functions usually have multiple minima, which makes it hard to solve with commonly used methods such as general-linearization and quasi-linearization inversion because of their local convergence. Global nonlinear search methods, which do not rely heavily on the initial model, are more promising, but the amount of computation they require for real-data processing is unacceptable. To address these problems, this paper presents a global nonlinear inversion approach that brings the Quantum Monte Carlo (QMC) method into geophysical inverse problems. QMC is an effective numerical method for studying quantum many-body systems, which are often governed by the Schrödinger equation, and can be categorized into zero-temperature and finite-temperature methods. The paper is divided into four parts. The first briefly reviews the theory of QMC, establishes its connection with geophysical nonlinear inversion, and gives a flow chart of the algorithm. The second applies four QMC inverse methods to 1D wave-equation impedance inversion and compares their results in terms of convergence rate and accuracy; the feasibility, stability, and noise tolerance of the algorithms are also discussed. The numerical results demonstrate that geophysical nonlinear inversion and other nonlinear optimization problems can be solved by means of QMC, and show that Green's function Monte Carlo (GFMC) and diffusion Monte Carlo (DMC) are more applicable to real data than path-integral Monte Carlo (PIMC) and variational Monte Carlo (VMC). The third part provides parallel versions of the serial QMC algorithms, applies them to 2D acoustic velocity inversion and real seismic data processing, and further examines their globality and noise tolerance; the inverted results show the robustness of these algorithms and their feasibility for 2D inversion and real-data processing, and the parallel inversion algorithms are also applicable to other optimization tasks. Finally, useful conclusions are drawn in the last section. The analysis and comparison of the results indicate that bringing QMC into geophysical inversion is successful: QMC is a nonlinear inversion method offering stability, efficiency, and noise tolerance. Its most appealing property is that it does not rely heavily on the initial model, making it well suited to nonlinear, multi-minimum geophysical inverse problems. The method can also be used in other fields involving nonlinear optimization.
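
As a schematic of the walker-based idea (an illustration only, not the paper's GFMC/DMC/PIMC/VMC implementations), the sketch below treats the data misfit as a potential energy: walkers diffuse through model space and a birth/death resampling step concentrates them in low-misfit regions.

```python
import numpy as np

def qmc_style_inversion(misfit, lo, hi, n_walkers=200, steps=500,
                        sigma=0.05, beta=5.0):
    """Diffusion-plus-branching global search: `misfit` maps a model vector
    to its data misfit; lo and hi are per-parameter bounds (arrays)."""
    dim = len(lo)
    walkers = lo + (hi - lo) * np.random.rand(n_walkers, dim)
    for _ in range(steps):
        # diffusion move, kept inside the model bounds
        walkers = np.clip(walkers + sigma * np.random.randn(n_walkers, dim),
                          lo, hi)
        E = np.array([misfit(w) for w in walkers])
        weights = np.exp(-beta * (E - E.min()))        # branching weights
        idx = np.random.choice(n_walkers, n_walkers, p=weights / weights.sum())
        walkers = walkers[idx]                         # birth/death resampling
    E = np.array([misfit(w) for w in walkers])
    return walkers[np.argmin(E)]
```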

Relevance:

100.00%

Publisher:

Abstract:

One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. It examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control-structure issues and the problem of modelling change under incomplete knowledge are also considered, and extensive comparisons are made with systems for mechanical theorem proving.

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates what knowledge is necessary to solve mechanics problems. A program, NEWTON, is described which understands and solves problems in a mechanics mini-world of objects moving on surfaces. Facts and equations such as those given in a mechanics text need to be represented; however, this is far from sufficient to solve problems. Human problem solvers rely on "common sense" and "qualitative" knowledge which the physics text tacitly assumes to be present, and a mechanics problem solver must embody such knowledge. The quantitative knowledge given by equations and the more qualitative common-sense knowledge are the major research points expounded in this thesis. The major issue in solving problems is planning. Planning involves tentatively outlining a possible path to the solution without actually solving the problem; such a plan needs to be constructed and debugged in the process of solving the problem. Envisionment, or qualitative simulation of the event, plays a central role in this planning process.

Relevance:

100.00%

Publisher:

Abstract:

We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
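
A toy sketch of one simplified variant: one-dimensional demands, fixed-cost fixed-capacity hosts, proportional cost sharing, and best-response moves repeated until no player can lower its share (a Nash equilibrium) or a round limit is hit. The names and the sharing rule are illustrative assumptions, not the paper's exact game.

```python
def best_response_collocation(demands, capacity, unit_cost=1.0, max_rounds=100):
    """Best-response dynamics for a toy collocation game: co-located players
    split a host's fixed cost in proportion to their demand, and a player
    relocates whenever another feasible host offers a lower share."""
    hosts = [[i] for i in range(len(demands))]      # start: one player per host

    def load(h):
        return sum(demands[i] for i in h)

    def share(i, h):                                # proportional cost share
        return unit_cost * demands[i] / load(h)

    for _ in range(max_rounds):
        moved = False
        for i in range(len(demands)):
            cur = next(h for h in hosts if i in h)
            for h in hosts:
                if h is not cur and load(h) + demands[i] <= capacity:
                    new_share = unit_cost * demands[i] / (load(h) + demands[i])
                    if new_share < share(i, cur):   # improving move exists
                        cur.remove(i)
                        h.append(i)
                        moved = True
                        break
        hosts = [h for h in hosts if h]             # drop emptied hosts
        if not moved:                               # no one can improve: NE
            break
    return hosts
```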

Relevance:

100.00%

Publisher:

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
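
The indexing scheme is easy to make concrete for unweighted graphs (a minimal sketch; names are illustrative). Offline, run one BFS per landmark; online, the triangle inequality gives the estimate d(u,v) <= min over landmarks l of d(u,l) + d(l,v).

```python
from collections import deque

def bfs_distances(adj, src):
    """Offline phase: unweighted single-source shortest paths from a landmark."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, landmarks):
    return {l: bfs_distances(adj, l) for l in landmarks}

def estimate_distance(index, u, v):
    """Online phase: upper-bound estimate combining precomputed distances."""
    return min(d[u] + d[v] for d in index.values() if u in d and v in d)

def degree_landmarks(adj, k):
    """A cheap stand-in for the 'central nodes' strategy: top-degree nodes."""
    return sorted(adj, key=lambda n: -len(adj[n]))[:k]
```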

Relevance:

100.00%

Publisher:

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. Systems of this kind often aim at guiding users in a personalized way to interesting or useful options in a large space of possible options, so it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and its related preference relations.

Product configuration, meeting scheduling, and the construction of autonomous agents are among the artificial intelligence tasks that involve constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques such as branch and bound aim to direct the search towards the best assignments, allowing the bounding functions to prune more branches of the search tree. Several constrained optimization problems exhibit dominance relations, which can be particularly useful because they give rise to new pruning rules that eliminate non-optimal solutions; such pruning can achieve dramatic reductions in the search space. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules for the branch and bound technique to solve this kind of optimization problem efficiently. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches, and combinations of them, which can drastically prune the search space; they mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
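
A generic sketch of dominance-based pruning inside branch and bound (the thesis derives its rules from comparative-preference dominance; here `cost`, `lower_bound` and `dominates` are caller-supplied hypothetical hooks): besides the usual bound test against the incumbent, a branch is discarded when an already-explored sibling dominates it.

```python
def branch_and_bound(domains, cost, lower_bound, dominates):
    """Minimise `cost` over full assignments drawn from `domains` (a list of
    candidate-value lists, one per variable), with bound- and dominance-based
    pruning of partial assignments."""
    best = {"cost": float("inf"), "sol": None}

    def search(partial, depth):
        if depth == len(domains):
            c = cost(partial)
            if c < best["cost"]:
                best["cost"], best["sol"] = c, list(partial)
            return
        explored = []                       # siblings already expanded here
        for value in domains[depth]:
            partial.append(value)
            if lower_bound(partial) >= best["cost"]:
                pass                        # classic bound pruning
            elif any(dominates(p, partial) for p in explored):
                pass                        # dominance pruning
            else:
                explored.append(list(partial))
                search(partial, depth + 1)
            partial.pop()

    search([], 0)
    return best["sol"], best["cost"]
```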

Relevance:

100.00%

Publisher:

Abstract:

This paper presents simulated computational fluid dynamics (CFD) results for comparison against experimental data. The performance of four turbulence models has been assessed for electronic application areas, considering both fluid flow and heat transfer phenomena. CFD is fast becoming a powerful and almost essential tool for design, development and optimization in engineering problems; however, turbulence modelling remains the key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the performance of the turbulence model employed, together with the wall functions implemented. To resolve the abrupt changes in turbulent energy and other parameters near the wall, a particularly fine mesh is necessary, which unfortunately increases the computer storage requirements. The objective of turbulence modelling is to provide computational procedures of sufficient accuracy and generality for engineers to predict the Reynolds stresses and the scalar transport terms.
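
For reference, eddy-viscosity turbulence models of the kind assessed in such studies typically approximate the Reynolds stresses via the Boussinesq hypothesis (standard background, not a result of the paper):

```latex
-\rho\,\overline{u_i' u_j'} \;=\; \mu_t\!\left(\frac{\partial U_i}{\partial x_j}
  + \frac{\partial U_j}{\partial x_i}\right) \;-\; \frac{2}{3}\,\rho\,k\,\delta_{ij}
```

where \mu_t is the turbulent (eddy) viscosity and k the turbulent kinetic energy; wall functions supply the near-wall behaviour when the mesh cannot resolve it.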

Relevance:

100.00%

Publisher:

Abstract:

Agglomerative cluster analyses encompass many techniques which have been widely used in various fields of science. In biology, and specifically in ecology, datasets are generally highly variable and may contain outliers, which make it more difficult to identify the number of clusters. Here we present a new criterion to determine statistically the optimal level of partition in a classification tree. The robustness of the criterion is tested against perturbed data (outliers) using an observation or variable with randomly generated values. The technique, called the Random Simulation Test (RST), is tested on (1) the well-known Iris dataset [Fisher, R.A., 1936. The use of multiple measurements in taxonomic problems. Ann. Eugenic. 7, 179–188], (2) simulated data with predetermined numbers of clusters following Milligan and Cooper [Milligan, G.W., Cooper, M.C., 1985. An examination of procedures for determining the number of clusters in a data set. Psychometrika 50, 159–179], and (3) real copepod community data previously analyzed in Beaugrand et al. [Beaugrand, G., Ibanez, F., Lindley, J.A., Reid, P.C., 2002. Diversity of calanoid copepods in the North Atlantic and adjacent seas: species associations and biogeography. Mar. Ecol. Prog. Ser. 232, 179–195]. The technique is compared to several standard techniques. RST generally performed better than existing algorithms on simulated data and proved especially efficient with highly variable datasets.
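
The abstract does not give the criterion itself, so the sketch below is only a loose illustration of random-perturbation testing of partition levels (hierarchical clustering via SciPy; the choices of average linkage and a pair-agreement score are assumptions, not the published RST).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def stable_partition_level(X, k_max=10, trials=20, seed=0):
    """Pick the partition level whose clustering of the real observations is
    least disturbed by appending one randomly generated observation."""
    rng = np.random.default_rng(seed)
    Z0 = linkage(X, method="average")
    base = {k: fcluster(Z0, k, criterion="maxclust")
            for k in range(2, k_max + 1)}
    lo, hi = X.min(axis=0), X.max(axis=0)
    scores = {}
    for k in range(2, k_max + 1):
        agree = 0.0
        for _ in range(trials):
            fake = lo + (hi - lo) * rng.random(X.shape[1])   # random observation
            Z = linkage(np.vstack([X, fake]), method="average")
            lab = fcluster(Z, k, criterion="maxclust")[:-1]  # drop the fake point
            same_a = base[k][:, None] == base[k][None, :]    # co-membership, original
            same_b = lab[:, None] == lab[None, :]            # co-membership, perturbed
            agree += (same_a == same_b).mean()               # pair-agreement score
        scores[k] = agree / trials
    return max(scores, key=scores.get)
```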

Relevance:

100.00%

Publisher:

Abstract:

During a 25 d Lagrangian study in May and June 1990 in the Northeast Atlantic Ocean, marine snow aggregates were collected using a novel water bottle, and their composition was determined microscopically. The aggregates contained a characteristic signature of a matrix of bacteria, cyanobacteria and autotrophic picoplankton with, inter alia, inclusions of the tintinnid Dictyocysta elegans and large pennate diatoms. The concentration of bacteria and cyanobacteria was much greater on the aggregates than when free-living, by factors of 100 to 6000 and 3000 to 2 500 000, respectively, depending on depth. Various species of crustacean plankton and micronekton were collected, and the faecal pellets produced after capture were examined. These often contained the marine snow signature, indicating that these organisms had been consuming marine snow; in some cases, marine snow material appeared to dominate the diet. This implies a food-chain short-cut whereby material normally too small to be consumed by the mesozooplankton, and considered to constitute the diet of the microplankton, can become part of the diet of organisms higher in the food chain. The micronekton was dominated by the amphipod Themisto compressa, whose pellets also contained the marine snow signature. Shipboard incubation experiments with this species indicated that (1) it does consume marine snow, and (2) its gut-passage time is sufficiently long for material eaten in the upper water to be defecated at its daytime depth of several hundred metres. Plankton and micronekton were collected with nets to examine their vertical distribution and diel migration, and to put into context the significance of the flux of material in the guts of migrants. The "gut flux" for the T. compressa population was calculated to be up to 2% of the flux measured simultaneously by drifting sediment traps, and <5% when all migrants are considered. The in situ abundance and distribution of marine snow aggregates (>0.6 mm) was examined photographically. A sharp concentration peak was usually encountered in the depth range 40 to 80 m; it was not associated with peaks of in situ fluorescence or attenuation but lay just below or at the base of the upper mixed layer. The feeding behaviour of zooplankton and nekton may influence these concentration gradients to a considerable extent, and hence affect the flux due to passive settling of marine snow aggregates.

Relevance:

100.00%

Publisher:

Abstract:

Within the United Kingdom there is growing awareness of the need to identify and support the small number of children who are living in families experiencing multiple problems. Research indicates that adverse experiences in childhood can result in poor outcomes in adulthood in terms of lack of employment, poorer physical and mental health and increases in social problems experienced. It is acknowledged that most of these children are known to child welfare professionals and that some are referred to social services, subsequently entering the child protection system. This paper reports research conducted with twenty-eight experienced child welfare professionals. It explores their views about families known to the child protection system with long-term and complex needs in relation to the characteristics of children and their families; the process of intervention with families; and the effects of organisational arrangements on practice. The research indicates that these families are characterised by the range and depth of the problems experienced by the adults, such as domestic violence, mental health difficulties and substance misuse problems, and the need for professionals to have good inter-personal skills and access to specialist therapeutic services if families are to be supported to address their problems.