894 results for Many-to-many-assignment problem


Relevance: 100.00%

Abstract:

The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be defined as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite times. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting only to well-selected simulations, and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
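For concreteness, a minimal sketch (not the authors' code) of the Lorenz 96 model used as a test bed above; the resolution n = 40, the forcing F = 8 (a commonly used chaotic regime), and the integration settings are all illustrative choices.

```python
import numpy as np

def lorenz96_tendency(x, forcing):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices:
    # advection, dissipation, and external forcing, as described above.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, forcing, dt=0.01, steps=1000):
    # Fourth-order Runge-Kutta integration of the Lorenz 96 system.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        k1 = lorenz96_tendency(x, forcing)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_tendency(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# n = 40 variables, forcing F = 8, a small perturbation on the first variable.
state = integrate(8.0 + 0.01 * np.eye(40)[0], forcing=8.0)
```

Response experiments of the kind discussed above would compare ensemble averages of observables of `state` between perturbed and unperturbed forcings.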

Relevance: 100.00%

Abstract:

We consider the Stokes conjecture concerning the shape of extreme two-dimensional water waves. By new geometric methods including a nonlinear frequency formula, we prove the Stokes conjecture in the original variables. Our results do not rely on structural assumptions needed in previous results such as isolated singularities, symmetry and monotonicity. Part of our results extends to the mathematical problem in higher dimensions.

Relevance: 100.00%

Abstract:

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.

Relevance: 100.00%

Abstract:

A novel algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimisation and Parameter Estimation (DISOPE), which has been designed to achieve the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A method based on Broyden's ideas is used for approximating some of the required derivative trajectories. Ways of handling constraints on both manipulated and state variables are described. Further, a method for coping with batch-to-batch dynamic variations in the process, which are common in practice, is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch processes. The algorithm is successfully applied to a benchmark problem consisting of the input profile optimisation of a fed-batch fermentation process.
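The abstract mentions a derivative approximation based on Broyden's ideas. As a generic illustration (not the DISOPE implementation itself), the classical Broyden rank-one secant update revises a Jacobian estimate from a single observed step; the toy function below is made up.

```python
import numpy as np

def broyden_update(J, dx, df):
    # Broyden rank-one secant update: the revised estimate satisfies
    # J_new @ dx == df while changing J as little as possible.
    dx = np.asarray(dx, dtype=float)
    df = np.asarray(df, dtype=float)
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

# Toy check on f(x) = (x0**2, x0*x1): update an identity Jacobian guess
# with one observed secant pair (x0 -> x1).
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x0, x1 = np.array([1.0, 1.0]), np.array([1.1, 0.9])
J = broyden_update(np.eye(2), x1 - x0, f(x1) - f(x0))
```

By construction the updated `J` reproduces the observed secant, which is what makes such updates attractive when exact model derivatives are unavailable.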

Relevance: 100.00%

Abstract:

Patients with mental health difficulties do not always receive the appropriate, recommended psychological treatments, and clinicians are not always appropriately trained to deliver them. This paper considers why this might be the case and provides an overview of the Charlie Waller Institute, a not-for-profit organisation funded by the NHS, the University of Reading, and the Charlie Waller Memorial Trust. The Institute seeks to address this problem by training clinicians in a wide variety of evidence-based therapies and by assessing the impact of this training on clinician knowledge and skill.

Relevance: 100.00%

Abstract:

We investigate a simplified form of variational data assimilation in a fully nonlinear framework with the aim of extracting dynamical development information from a sequence of observations over time. Information on the vertical wind profile, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, is converted to brightness temperatures at a single horizontal location by defining a two-dimensional (vertical and time) variational assimilation testbed. The profiles of T and qt are updated using a vertical advection scheme. A basic cloud scheme is used to obtain the fractional cloud amount and, when combined with the temperature field, this information is converted into a brightness temperature using a simple radiative transfer scheme. It is shown that our model exhibits realistic behaviour with regard to the prediction of cloud, but the effects of nonlinearity become non-negligible in the variational data assimilation algorithm. A careful analysis of the application of the data assimilation scheme to this nonlinear problem is presented, the salient difficulties are highlighted, and suggestions for further developments are discussed.
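A minimal sketch of the weighted least-squares objective underlying such a variational scheme; the state dimension, the mildly nonlinear observation operator (standing in for the cloud and radiative transfer schemes), and all numbers below are hypothetical.

```python
import numpy as np

def cost(x, xb, B_inv, y, obs_op, R_inv):
    # Standard variational objective: background term plus observation term.
    dxb = x - xb
    dy = y - obs_op(x)
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

# Toy setup: 3 state variables, 2 observations, identity error covariances.
xb = np.array([1.0, 0.0, -1.0])                      # background state
B_inv = np.eye(3)                                    # inverse background covariance
R_inv = np.eye(2)                                    # inverse observation covariance
obs_op = lambda x: np.array([x[0] ** 2, x[1] + x[2]])  # nonlinear observation operator
y = obs_op(xb) + 0.1                                 # observations offset from background

# At the background state only the observation term contributes.
J_background = cost(xb, xb, B_inv, y, obs_op, R_inv)
```

Minimising `cost` over `x` (e.g. with an iterative descent method) is the assimilation step; the nonlinearity of `obs_op` is what makes the minimisation non-trivial, as the abstract notes.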

Relevance: 100.00%

Abstract:

The robustness of state feedback solutions to the problem of partial pole placement obtained by a new projection procedure is examined. The projection procedure gives a reduced-order pole assignment problem. It is shown that the sensitivities of the assigned poles in the complete closed-loop system are bounded in terms of the sensitivities of the assigned reduced-order poles, and the sensitivities of the unaltered poles are bounded in terms of the sensitivities of the corresponding open-loop poles. If the assigned poles are well-separated from the unaltered poles, these bounds are expected to be tight. The projection procedure is described in [3], and techniques for finding robust (or insensitive) solutions to the reduced-order problem are given in [1], [2].

Relevance: 100.00%

Abstract:

We prove the unique existence of a solution to the impedance (or third) boundary value problem for the Helmholtz equation in a half-plane with arbitrary L∞ boundary data. This problem is of interest as a model of outdoor sound propagation over inhomogeneous flat terrain and as a model of rough surface scattering. To formulate the problem and prove uniqueness of the solution we introduce a novel radiation condition, a generalization of that used in plane-wave scattering by one-dimensional diffraction gratings. To prove existence of the solution and a limiting absorption principle we first reformulate the problem as an equivalent second-kind boundary integral equation, to which we apply a form of the Fredholm alternative, utilizing recent results on the solvability of integral equations on the real line in [5].

Relevance: 100.00%

Abstract:

Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help to overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied on measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the parameters of gusts at stations where only wind speed data is available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites with a lack of observations. The second objective is to determine return periods much longer than the nominal length of the original time series by considering extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
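The extreme-value step described above can be illustrated with a textbook method-of-moments Gumbel fit to block maxima; the station data below are invented, and the real study's estimation procedure is not claimed to be this one.

```python
import numpy as np

def gumbel_fit_moments(maxima):
    # Method-of-moments fit of a Gumbel distribution to block maxima.
    maxima = np.asarray(maxima, dtype=float)
    beta = maxima.std(ddof=1) * np.sqrt(6.0) / np.pi   # scale parameter
    mu = maxima.mean() - 0.5772 * beta                 # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, T):
    # Gust speed expected to be exceeded on average once every T years.
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical annual maximum gust speeds (m/s) at one station.
maxima = [28.1, 31.4, 26.9, 35.2, 29.8, 33.0, 27.5, 30.6, 32.1, 29.0]
mu, beta = gumbel_fit_moments(maxima)
gust_50yr = return_level(mu, beta, 50.0)
```

Extrapolating a 50-year return level from ten years of maxima, as here, is exactly the situation in which the uncertainties mentioned at the end of the abstract arise.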

Relevance: 100.00%

Abstract:

Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. Often the minimisation is non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning of the Hessian by changing, or preconditioning, the Hessian. There is only sparse information in the literature for describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
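To make the role of preconditioning concrete, here is a small numerical sketch under standard assumptions (the covariances and observation operator are hypothetical): for a linearised objective with background covariance B, linear observation operator H and observation-error precision R⁻¹, the Hessian is B⁻¹ + HᵀR⁻¹H, and the usual control-variable transform x = x_b + B^{1/2}v replaces it with I + (B^{1/2})ᵀHᵀR⁻¹H B^{1/2}.

```python
import numpy as np

n, m = 20, 5  # state size, number of observations

# Hypothetical correlated background-error covariance (ill-conditioned)
# and a sparse observation operator observing m of the n variables.
B = np.array([[0.95 ** abs(i - j) for j in range(n)] for i in range(n)])
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0
R_inv = np.eye(m)

# Unpreconditioned Hessian of the linearised objective.
hessian = np.linalg.inv(B) + H.T @ R_inv @ H

# Control-variable transform: with B = L L^T, the preconditioned Hessian
# is the identity plus a low-rank, bounded term.
L = np.linalg.cholesky(B)
hessian_pre = np.eye(n) + L.T @ H.T @ R_inv @ H @ L

cond_raw = np.linalg.cond(hessian)
cond_pre = np.linalg.cond(hessian_pre)
```

All eigenvalues of `hessian_pre` are at least 1, so its condition number is bounded by one plus the largest eigenvalue of the observation term, which is the kind of bound the paper derives and explains.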

Relevance: 100.00%

Abstract:

Broad-scale phylogenetic analyses of the angiosperms and of the Asteridae have failed to confidently resolve relationships among the major lineages of the campanulid Asteridae (i.e., the euasterid II of APG II, 2003). To address this problem we assembled presently available sequences for a core set of 50 taxa, representing the diversity of the four largest lineages (Apiales, Aquifoliales, Asterales, Dipsacales) as well as the smaller "unplaced" groups (e.g., Bruniaceae, Paracryphiaceae, Columelliaceae). We constructed four data matrices for phylogenetic analysis: a chloroplast coding matrix (atpB, matK, ndhF, rbcL), a chloroplast non-coding matrix (rps16 intron, trnT-F region, trnV-atpE IGS), a combined chloroplast dataset (all seven chloroplast regions), and a combined genome matrix (seven chloroplast regions plus 18S and 26S rDNA). Bayesian analyses of these datasets using mixed substitution models produced often well-resolved and supported trees. Consistent with more weakly supported results from previous studies, our analyses support the monophyly of the four major clades and the relationships among them. Most importantly, Asterales are inferred to be sister to a clade containing Apiales and Dipsacales. Paracryphiaceae is consistently placed sister to the Dipsacales. However, the exact relationships of Bruniaceae, Columelliaceae, and an Escallonia clade depended upon the dataset. Areas of poor resolution in combined analyses may be partly explained by conflict between the coding and non-coding data partitions. We discuss the implications of these results for our understanding of campanulid phylogeny and evolution, paying special attention to how our findings bear on character evolution and biogeography in Dipsacales.

Relevance: 100.00%

Abstract:

Given two maps h : X × K → R and g : X → K such that, for all x ∈ X, h(x, g(x)) = 0, we consider the equilibrium problem of finding x̃ ∈ X such that h(x̃, g(x)) ≥ 0 for every x ∈ X. This question is related to a coincidence problem.
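A toy finite-dimensional illustration of this formulation; the choices X = {0, ..., 9}, K = R, and the maps g and h below are hypothetical, picked only so that h(x, g(x)) = 0 holds for all x.

```python
# Finite-X illustration of the equilibrium problem above.
X = range(10)
g = lambda x: x * x          # g : X -> K
h = lambda x, k: k - g(x)    # then h(x, g(x)) = 0 for every x in X

# An equilibrium point satisfies h(xt, g(x)) >= 0 for every x in X.
equilibria = [xt for xt in X if all(h(xt, g(x)) >= 0 for x in X)]
```

With these choices the condition reduces to g(x̃) ≤ min g, so the only equilibrium point is the minimiser of g.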

Relevance: 100.00%

Abstract:

2D electrophoresis is a well-known method for protein separation which is extremely useful in the field of proteomics. Each spot in the image represents a protein accumulation and the goal is to perform a differential analysis between pairs of images to study changes in protein content. It is thus necessary to register two images by finding spot correspondences. Although it may seem a simple task, generally, the manual processing of this kind of images is very cumbersome, especially when strong variations between corresponding sets of spots are expected (e.g. strong non-linear deformations and outliers). In order to solve this problem, this paper proposes a new quadratic assignment formulation together with a correspondence estimation algorithm based on graph matching which takes into account the structural information between the detected spots. Each image is represented by a graph and the task is to find a maximum common subgraph. Successful experimental results using real data are presented, including an extensive comparative performance evaluation with ground-truth data.
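As an illustration of the quadratic assignment objective only (not the paper's graph-matching algorithm, which exists precisely to avoid the factorial search below), the spot correspondence can be found exhaustively on a tiny made-up instance: match spots so that pairwise distances are best preserved.

```python
import itertools
import math

def quadratic_assignment_brute(A, B):
    # Exhaustive quadratic assignment: find the permutation p mapping spots
    # of image 1 to image 2 that best preserves pairwise distances.
    n = len(A)
    def cost(p):
        return sum(abs(A[i][j] - B[p[i]][p[j]]) for i in range(n) for j in range(n))
    return min(itertools.permutations(range(n)), key=cost)

# Hypothetical spot coordinates: image 2 lists the same spots in a different
# order, with a small deformation added.
spots1 = [(0.0, 0.0), (3.0, 0.0), (0.0, 1.0), (1.0, 2.0)]
spots2 = [(0.0, 1.02), (0.03, 0.0), (1.0, 2.0), (3.0, 0.05)]
pairwise = lambda pts: [[math.dist(p, q) for q in pts] for p in pts]
match = quadratic_assignment_brute(pairwise(spots1), pairwise(spots2))
```

Because the cost compares pairwise structure rather than absolute positions, the matching is robust to the global shifts and mild deformations mentioned above; graph matching recovers the same kind of optimum without enumerating permutations.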

Relevance: 100.00%

Abstract:

In the late seventies, Megiddo proposed a way to use an algorithm for the problem of minimizing a linear function a_0 + a_1 x_1 + ... + a_n x_n subject to certain constraints to solve the problem of minimizing a rational function of the form (a_0 + a_1 x_1 + ... + a_n x_n)/(b_0 + b_1 x_1 + ... + b_n x_n) subject to the same set of constraints, assuming that the denominator is always positive. Using a rather strong assumption, Hashizume et al. extended Megiddo's result to include approximation algorithms. Their assumption essentially asks for the existence of good approximation algorithms for optimization problems with possibly negative coefficients in the (linear) objective function, which is rather unusual for most combinatorial problems. In this paper, we present an alternative extension of Megiddo's result for approximations that avoids this issue and applies to a large class of optimization problems. Specifically, we show that if there is an alpha-approximation for the problem of minimizing a nonnegative linear function subject to constraints satisfying a certain increasing property, then there is an alpha-approximation (respectively, a (1 - 1/alpha)-approximation) for the problem of minimizing (respectively, maximizing) a nonnegative rational function subject to the same constraints. Our framework applies to covering problems and network design problems, among others.
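A sketch of the parametric reduction at the heart of Megiddo's technique, simplified for illustration: the feasible set here is given explicitly as a finite list, and a bisection drives the linear-minimization oracle, whereas Megiddo's result and the paper work with implicitly given feasible sets.

```python
def parametric_min_ratio(feasible, a, b, tol=1e-9):
    # Minimise (a . x) / (b . x) over a finite feasible set via the parametric
    # reduction: the optimal ratio is the value lam at which
    # min_x (a - lam*b) . x crosses zero (b . x > 0 assumed, as in the text).
    dot = lambda c, x: sum(ci * xi for ci, xi in zip(c, x))
    lo, hi = 0.0, max(dot(a, x) / dot(b, x) for x in feasible)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        # Oracle call: minimise the *linear* objective (a - lam*b) . x.
        best = min(dot(a, x) - lam * dot(b, x) for x in feasible)
        if best < 0:
            hi = lam   # some feasible x achieves a ratio below lam
        else:
            lo = lam   # every feasible x has ratio at least lam
    return lo

# Toy instance: three candidate solutions with nonnegative costs a and
# positive weights b; the best ratio is 3/2, attained by x = (1, 0).
feasible = [(1, 0), (0, 1), (1, 1)]
ratio = parametric_min_ratio(feasible, a=(3.0, 5.0), b=(2.0, 1.0))
```

Replacing the exact oracle with an alpha-approximate one is exactly the setting the abstract addresses: the approximation guarantee for the linear subproblem transfers to the rational objective.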

Relevance: 100.00%

Abstract:

This thesis concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, with enough packages for each customer, using the available resources, and, of course, as efficiently as possible.

Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to cope with several constraints, for example opening hours, package delivery times, and truck capacities. This makes it a so-called Multi Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the number of calculations grows exponentially, because all constraints have to be satisfied for each customer, and a good-enough solution has to be found before the time allotted for the calculation runs out. The first chapter introduces the problem from its basics, the Traveling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize and why, even though no best algorithm is known for huge numbers of customers, it is still worth tackling. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for every package were known?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters attempt to give an acceptable solution by describing two algorithms: the Genetic Algorithm and Simulated Annealing. Both are based on imitating processes from nature and materials science. These algorithms will hardly ever find the best solution to the problem, but in many cases they can give a very good solution within an acceptable calculation time. These chapters describe the Genetic Algorithm and Simulated Annealing in detail, from their basis in the "real world" through their terminology to a basic implementation of each. The work puts particular stress on the limits of these algorithms, their advantages and disadvantages, and a comparison between them.

Finally, after the theory has been presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, together with the test results obtained. Possible improvements of the algorithms are then discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if that question even exists.
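As a hedged sketch of the simulated-annealing approach discussed above, applied to the Traveling Salesman core of the routing problem (the move type, cooling schedule, and the tiny instance are all illustrative, not the thesis implementation):

```python
import math
import random

def tour_length(tour, dist):
    # Total length of the closed tour.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, temp=10.0, cooling=0.995, steps=20000, seed=0):
    # Classic simulated annealing with 2-opt reversal moves.
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = tour[:]
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(candidate, dist) - tour_length(tour, dist)
        # Always accept improvements; accept worse tours with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            tour = candidate
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        temp *= cooling  # geometric cooling schedule
    return best

# Six cities on a line, listed out of order so the starting tour is suboptimal.
coords = [(0, 0), (2, 0), (4, 0), (1, 0), (3, 0), (5, 0)]
dist = [[math.dist(p, q) for q in coords] for p in coords]
best_tour = simulated_annealing_tsp(dist)
```

The occasional acceptance of worse tours at high temperature is what lets the method escape local optima, which is the key difference from a pure hill climber and the reason it is compared against the Genetic Algorithm in the experiments.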