880 results for Vehicle Routing Problem Multi-Trip Ricerca Operativa TSP VRP


Relevance: 30.00%

Abstract:

In this thesis we study, at the perturbative level, correlation functions of Wilson loops (and local operators) and their relations to localization, integrability, and other quantities of interest such as the cusp anomalous dimension and the Bremsstrahlung function. First we consider a general class of 1/8 BPS Wilson loops and chiral primaries in N=4 Super Yang-Mills theory. We perform explicit two-loop computations, for a particular but still rather general configuration, that confirm the elegant results expected from the localization procedure. Notably, we find full consistency with the multi-matrix model averages, obtained from 2D Yang-Mills theory on the sphere, when interacting diagrams do not cancel and contribute non-trivially to the final answer. We also discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit; we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables also localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N=4 super Yang-Mills theory. Finally, we study the cusp anomalous dimension in N=6 ABJ(M) theory, identifying a scaling limit in which ladder diagrams dominate. The resummation is encoded in a Bethe-Salpeter equation that is mapped to a Schrödinger problem, exactly solvable due to the surprising supersymmetry of the effective Hamiltonian. In the ABJ case the solution implies the diagonalization of the U(N) and U(M) building blocks, suggesting the existence of two independent cusp anomalous dimensions and an unexpected exponentiation structure for the related Wilson loops.
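
For orientation, the relation between the small-angle cusp anomalous dimension and the Bremsstrahlung function, together with its localization formula in N=4 SYM, is a standard result from the literature and can be sketched as follows (common normalization; I_1 is a modified Bessel function and the last expression is the planar circular-loop expectation value):

```latex
% Small-angle limit of the cusp anomalous dimension and the
% localization formula for the Bremsstrahlung function B(\lambda);
% common conventions, planar (large-N) circular-loop VEV.
\Gamma_{\rm cusp}(\varphi,\lambda)\simeq -B(\lambda)\,\varphi^{2}
\quad(\varphi\to 0),\qquad
B(\lambda)=\frac{1}{2\pi^{2}}\,\lambda\,\partial_{\lambda}
\log\langle W_{\bigcirc}\rangle,\qquad
\langle W_{\bigcirc}\rangle\big|_{N\to\infty}
=\frac{2}{\sqrt{\lambda}}\,I_{1}\!\big(\sqrt{\lambda}\big).
```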

Relevance: 30.00%

Abstract:

The main objective of food safety policy is to protect consumer health through specific safety rules and protocols. In order to meet food safety and quality standardisation requirements, in 2002 the European Parliament and the Council of the EU (Regulation (EC) 178/2002 (EC, 2002)) sought to harmonise concepts, principles and procedures so as to provide a common basis for the regulation of food and feed originating from Member States at Community level. The formalisation of rules and standardisation protocols should, however, proceed through a more detailed and accurate understanding and harmonisation of the global (macroscopic), pseudo-local (mesoscopic) and, possibly, local (microscopic) properties of food products. The main objective of this doctoral thesis is to illustrate how computational techniques can provide valid support for such analysis, through (i) the application of established protocols and (ii) the improvement of widely applied techniques. A direct demonstration of the potential already offered by computational approaches is given in the first study, in which a docking-based virtual screening was applied to assess the preliminary xeno-androgenicity of some food contaminants. The second and third studies concern the development and validation of new physico-chemical descriptors in a 3D-QSAR context. Named HyPhar (Hydrophobic Pharmacophore), the new methodology was used to explore the issue of selectivity between structurally related molecular targets, and thus demonstrated the applicability and adaptability required in a food context. Overall, the results allow us to be confident in the potential impact that in silico techniques may have on the identification and clarification of molecular events involved in the toxicological and nutritional aspects of food.
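
As a rough illustration of the 3D-QSAR workflow mentioned above (a generic partial-least-squares sketch, not the HyPhar methodology; the descriptor matrix and activity values are random placeholders):

```python
# Generic 3D-QSAR-style fit: predict activities from a descriptor matrix
# with PLS and assess predictivity by cross-validation. The data are
# random placeholders, not HyPhar descriptors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))      # 40 compounds x 200 descriptor values
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=40)  # toy activities

model = PLSRegression(n_components=3)
q2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated r^2: {q2:.2f}")
```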

Relevance: 30.00%

Abstract:

Relationships between clustering, description length, and regularisation are pointed out, motivating the introduction of a cost function with a description length interpretation and the unusual and useful property of having its minimum approximated by the densest mode of a distribution. A simple inverse kinematics example is used to demonstrate that this property can be used to select and learn one branch of a multi-valued mapping. This property is also used to develop a method for setting regularisation parameters according to the scale on which structure is exhibited in the training data. The regularisation technique is demonstrated on two real data sets, a classification problem and a regression problem.
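
The mode-seeking property can be illustrated with a toy contrast (not the paper's cost function): for a constant model fitted to bimodal data, sum-of-squares recovers the mean, while a kernel-based cost whose minimum tracks the densest mode locks onto one mode.

```python
# Mean-seeking vs mode-seeking costs for a constant model on bimodal
# data (toy illustration only, not the paper's description-length cost).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# A dense mode near -1 and a sparser mode near +2.
y = np.concatenate([rng.normal(-1.0, 0.1, 300), rng.normal(2.0, 0.1, 100)])

sse = lambda c: np.sum((y - c) ** 2)                 # sum-of-squares
width = 0.2                                          # kernel width (assumed)
mode_cost = lambda c: -np.sum(np.exp(-(y - c) ** 2 / (2 * width ** 2)))

c_mean = minimize_scalar(sse, bounds=(-3, 3), method="bounded").x
c_mode = minimize_scalar(mode_cost, bounds=(-3, 3), method="bounded").x
print(f"mean-seeking fit: {c_mean:+.2f} (between the modes)")
print(f"mode-seeking fit: {c_mode:+.2f} (at the densest mode)")
```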

Relevance: 30.00%

Abstract:

Group decision making is the study of identifying and selecting alternatives based on the values and preferences of the decision makers. Making a decision implies that there are several alternative choices to be considered. This paper uses the concept of Data Envelopment Analysis to introduce a new mathematical method for selecting the best alternative in a group decision-making environment. The introduced model is a multi-objective model that is converted into a multi-objective linear programming problem, from which the optimal solution is obtained. A numerical example shows how the new model can be applied to rank the alternatives or to choose a subset of the most promising alternatives.
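
A minimal sketch of the kind of optimisation involved follows (a standard input-oriented CCR/DEA multiplier model solved as an LP, not the paper's multi-objective formulation; all data are invented):

```python
# Standard CCR (multiplier form) DEA efficiency scores used to rank
# alternatives; not the paper's multi-objective model. Data invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [3.0], [4.0]])   # inputs:  3 alternatives x 1 input
Y = np.array([[4.0], [5.0], [5.0]])   # outputs: 3 alternatives x 1 output
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_score(o):
    # Decision variables: [u (s output weights), v (m input weights)].
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximise u . y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (s + m))
    return -res.fun

for o in range(n):
    print(f"alternative {o}: efficiency = {ccr_score(o):.3f}")
```

Ranking alternatives by these scores mirrors the paper's goal of picking the best alternative or a promising subset.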

Relevance: 30.00%

Abstract:

The supplier evaluation and selection problem has been studied extensively, and various decision-making approaches have been proposed to tackle it. In contemporary supply chain management, the performance of potential suppliers is evaluated against multiple criteria rather than a single factor, cost. This paper reviews the literature on multi-criteria decision-making approaches for supplier evaluation and selection. Related articles appearing in international journals from 2000 to 2008 are gathered and analyzed so that the following three questions can be answered: (i) Which approaches were prevalently applied? (ii) Which evaluating criteria received more attention? (iii) Are there any inadequacies in the approaches? Based on the inadequacies, if any, improvements and possible future work are recommended. This research not only provides evidence that multi-criteria decision-making approaches are better than the traditional cost-based approach, but also aids researchers and decision makers in applying these approaches effectively.
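
For concreteness, one widely used multi-criteria method from this literature is TOPSIS; a minimal sketch follows (suppliers, criteria, and weights are invented):

```python
# Minimal TOPSIS: rank suppliers by closeness to an ideal solution over
# several criteria instead of cost alone. All numbers are invented.
import numpy as np

# Rows: suppliers; columns: cost, quality, delivery reliability.
D = np.array([[250.0, 0.90, 0.95],
              [220.0, 0.80, 0.90],
              [300.0, 0.95, 0.85]])
w = np.array([0.4, 0.4, 0.2])             # criterion weights
benefit = np.array([False, True, True])   # cost is minimised

V = (D / np.linalg.norm(D, axis=0)) * w   # normalise, then weight
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - nadir, axis=1)
closeness = d_neg / (d_pos + d_neg)       # higher is better
print("ranking (best first):", np.argsort(-closeness))
```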

Relevance: 30.00%

Abstract:

Conventional feed-forward neural networks are typically trained with the sum-of-squares cost function. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fit by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience, and is therefore called the Naive Description Length (NDL) cost function. Finding minimum description models is shown to be closely related to identifying clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data, obtained by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
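
The branch-selection behaviour can be illustrated with a toy stand-in for the NDL cost: with data from the two-valued mapping y = ±√x, least squares averages the branches while a mode-seeking cost commits to one of them.

```python
# Toy multi-valued regression y = +/-sqrt(x): a least-squares fit of a
# single-valued model averages the branches towards zero, while a
# kernel-based mode-seeking cost (a stand-in for the NDL cost) commits
# to one branch. All settings are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 1.5, 400)
upper = rng.random(400) < 0.5            # both branches equally present
y = np.where(upper, np.sqrt(x), -np.sqrt(x)) + rng.normal(0, 0.02, 400)

model = lambda a: a * np.sqrt(x)         # single-valued model y = a*sqrt(x)
sse = lambda a: np.sum((y - model(a)) ** 2)
width = 0.1                              # kernel width (assumed)
mode_cost = lambda a: -np.sum(np.exp(-(y - model(a)) ** 2 / (2 * width ** 2)))

a_sse = minimize(sse, x0=[0.0]).x[0]
a_mode = minimize(mode_cost, x0=[0.5]).x[0]
print(f"least squares: a = {a_sse:+.2f} (averages the branches)")
print(f"mode-seeking:  a = {a_mode:+.2f} (selects one branch)")
```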

Relevance: 30.00%

Abstract:

This work introduces a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems, one that can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of neural networks can be obtained from the statistical properties of the networks. More generally, multicomponent distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. Convergence of the output error for the proposed control method is verified using a Lyapunov function. Several simulation examples are provided to demonstrate the efficiency of the developed control method. The manner in which the method extends to nonlinear multi-variable systems with different delays between the input-output pairs is also considered and demonstrated through simulation examples.
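
A minimal sketch of the sampling idea (toy stand-ins for the plant and the learned models; the mixture density network is reduced to a single Gaussian over candidate controls):

```python
# Sketch of inverse control via sampling: draw candidate controls from
# an inverse-model distribution, score them with the forward model, and
# apply the best. Plant, models, and uncertainties are all invented.
import numpy as np

rng = np.random.default_rng(3)

def plant(u):                          # "true" plant (toy, nonlinear)
    return np.tanh(u) + 0.5 * u

def forward_model(u):                  # learned forward model (imperfect)
    return np.tanh(u) + 0.48 * u

def inverse_model(y_target):
    # Stand-in for a mixture density network: a mean control guess and
    # an uncertainty that localises the search for the control law.
    return y_target / 1.4, 0.3         # (mean, std), invented

y_target = 1.0
mu, sigma = inverse_model(y_target)
candidates = rng.normal(mu, sigma, 200)          # sampled controls
u = candidates[np.argmin((forward_model(candidates) - y_target) ** 2)]
print(f"chosen u = {u:.3f}, plant output = {plant(u):.3f} (target {y_target})")
```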

Relevance: 30.00%

Abstract:

Traditional machinery for manufacturing processes is characterised by actuators powered and co-ordinated by mechanical linkages driven from a central drive. Increasingly, these linkages are replaced by independent electrical drives, each performing a different task and following a different motion profile, co-ordinated by computers. A design methodology for the servo control of high-speed multi-axis machinery is proposed, based on the concept of a highly adaptable generic machine model. In addition to the dynamics of the drives and the loads, the model includes the inherent interactions between the motion axes and thus provides a Multi-Input Multi-Output (MIMO) description. In general, inherent interactions such as structural couplings between groups of motion axes are undesirable and need to be compensated. On the other hand, imposed interactions such as the synchronisation of different groups of axes are often required. It is recognised that a suitable MIMO controller can simultaneously achieve these objectives and reconcile their potential conflicts. Both analytical and numerical methods for the design of MIMO controllers are investigated. At present, it is not possible to implement high-order MIMO controllers for practical reasons. Based on simulations of the generic machine model under full MIMO control, however, it is possible to determine a suitable topology for a blockwise decentralised control scheme. The Block Relative Gain array (BRG) is used to compare the relative strength of closed-loop interactions between sub-systems. A number of approaches to the design of the smaller decentralised MIMO controllers for these sub-systems have been investigated. For the purpose of illustration, a benchmark problem based on a 3-axis test rig has been carried through the design cycle to demonstrate the working of the design methodology.
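
For a flavour of the interaction analysis, the scalar relative gain array (the element-wise precursor of the Block Relative Gain) can be computed directly from a steady-state gain matrix; the 3-axis coupling matrix below is invented:

```python
# Relative gain array (RGA) of a steady-state gain matrix: the scalar
# precursor of the Block Relative Gain (BRG) used to compare closed-loop
# interactions between sub-systems. The 3-axis gain matrix is invented.
import numpy as np

G = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.4, 1.0]])

RGA = G * np.linalg.inv(G).T     # Lambda = G .* (G^{-1})^T
print(np.round(RGA, 2))          # rows and columns each sum to 1;
                                 # near-identity favours decentralised control
```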

Relevance: 30.00%

Abstract:

The loss of habitat and biodiversity worldwide has led to considerable resources being spent for conservation purposes on actions such as the acquisition and management of land, the rehabilitation of degraded habitats, and the purchase of easements from private landowners. Prioritising these actions is challenging due to the complexity of the problem and because there can be multiple actors undertaking conservation actions, often with divergent or partially overlapping objectives. We use a modelling framework to explore this issue with a study involving two agents sequentially purchasing land for conservation. We apply our model to simulated data, using distributions taken from real data to simulate the cost of patches and the rarity and co-occurrence of species. In our model, each agent attempts to implement a conservation network that meets its target for the minimum cost using the conservation planning software Marxan. We examine three scenarios in which the conservation targets of the agents differ. The first scenario (called NGO-NGO) models the situation where two NGOs are both targeting different sets of threatened species. The second and third scenarios (called NGO-Gov and Gov-NGO, respectively) represent a case where a government agency attempts to implement a complementary conservation network representing all species, while an NGO is focused on achieving additional protection for the most endangered species. For each of these scenarios we examine three types of interactions between agents: (i) acting in isolation, where the agents attempt to achieve their targets solely through their own actions; (ii) sharing information, where each agent is aware of the species representation achieved within the other agent's conservation network; and (iii) pooling resources, where agents combine their resources and undertake conservation actions as a single entity. The latter two interactions represent different types of collaboration, and in each scenario we determine the cost savings from sharing information or pooling resources. In each case we examine the utility of these interactions from the viewpoint of the combined conservation network resulting from both agents' actions, as well as from each agent's individual perspective. The costs for each agent to achieve its objectives varied depending on the order in which the agents acted, the type of interaction between agents, and the specific goals of each agent. There were significant cost savings from increased collaboration via sharing information in the NGO-NGO scenario, where the agents' representation goals were mutually exclusive (in terms of the species targeted). In the NGO-Gov and Gov-NGO scenarios, collaboration generated much smaller savings. If the two agents collaborate by pooling resources, there are multiple ways the total cost could be shared between them. For each scenario we investigate the costs and benefits for all possible cost-sharing proportions. We find that there is a range of cost-sharing proportions where both agents can benefit in the NGO-NGO scenario, while the NGO-Gov and Gov-NGO scenarios again showed little benefit. Although the model presented here makes a range of simplifying assumptions, it demonstrates that the value of collaboration can vary significantly in different situations. In most cases, collaborating would have associated costs, and these costs need to be weighed against the potential benefits from collaboration. The model demonstrates a method for determining the range of collaboration costs that would result in collaboration providing an efficient use of scarce conservation resources.
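
A minimal sketch of the underlying selection problem follows (a greedy stand-in for Marxan, which the study actually uses; patches, species occurrences, and costs are all invented):

```python
# Greedy stand-in for the Marxan-based selection in the study: two
# agents sequentially buy patches to cover their target species cheaply.
# With information sharing, an agent skips species the other has already
# covered. Patches, occurrences, and costs are invented.
import numpy as np

rng = np.random.default_rng(4)
occ = rng.random((30, 10)) < 0.2          # 30 patches x 10 species
cost = rng.uniform(1.0, 5.0, 30)

def greedy_cover(targets, already_covered):
    need, bought, total = set(targets) - set(already_covered), [], 0.0
    while need:
        gain = np.array([len(need & set(np.where(occ[p])[0]))
                         for p in range(len(cost))], dtype=float)
        gain[bought] = 0.0                 # never re-buy a patch
        p = int(np.argmax(gain / cost))
        if gain[p] == 0:
            break                          # remaining targets uncoverable
        bought.append(p)
        total += cost[p]
        need -= set(np.where(occ[p])[0])
    return bought, total

# NGO-NGO style scenario: agent A targets species 0-4, agent B 5-9.
a_patches, _ = greedy_cover(range(5), set())
covered_by_a = {s for p in a_patches for s in np.where(occ[p])[0]}
_, b_isolated = greedy_cover(range(5, 10), set())
_, b_sharing = greedy_cover(range(5, 10), covered_by_a)
print(f"B alone: {b_isolated:.1f}; B with shared information: {b_sharing:.1f}")
```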

Relevance: 30.00%

Abstract:

Investment in capacity expansion remains one of the most critical decisions for a manufacturing organisation with global production facilities. Multiple factors need to be considered, making the decision process very complex. The purpose of this paper is to establish the state of the art in multi-factor models for capacity expansion of manufacturing plants within a corporation. A research programme consisting of an extensive literature review and a structured assessment of the strengths and weaknesses of the current research is presented. The study found that there is a wealth of mathematical multi-factor models for evaluating capacity expansion decisions; however, no single contribution captures all the different facets of the problem.

Relevance: 30.00%

Abstract:

This paper addresses the problem of obtaining detailed 3d reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture highly detailed deforming 3d surfaces at high frame rates without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the light setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3d model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate this diffuse reflectance model; in the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
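
For orientation, the core photometric-stereo step that calibration enables can be sketched for the simplest Lambertian case (three known light directions, one pixel; all numbers are invented, and the paper's actual pipeline also handles non-Lambertian reflectance):

```python
# Minimal Lambertian photometric stereo: with intensities i = L @ (rho*n)
# under three known light directions, a per-pixel linear solve recovers
# the albedo-scaled normal. Light directions and intensities invented.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],       # unit light directions, e.g. one
              [0.7, 0.0, 0.714],     # per colour channel in a
              [0.0, 0.7, 0.714]])    # multi-spectral setup
i = np.array([0.9, 0.95, 0.6])       # observed intensities at one pixel

b = np.linalg.solve(L, i)            # b = albedo * normal
albedo = np.linalg.norm(b)
n = b / albedo                       # unit surface normal
print(f"albedo = {albedo:.2f}, normal = {np.round(n, 2)}")
```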

Relevance: 30.00%

Abstract:

The loss of habitat and biodiversity worldwide has led to considerable resources being spent on conservation interventions. Prioritising these actions is challenging due to the complexity of the problem and because there can be multiple actors undertaking conservation actions, often with divergent or partially overlapping objectives. We explore this issue with a simulation study involving two agents sequentially purchasing land for the conservation of multiple species, using three scenarios comprising either divergent or partially overlapping objectives between the agents. The first scenario investigates the situation where both agents are targeting different sets of threatened species. The second and third scenarios represent a case where a government agency attempts to implement a complementary conservation network representing 200 species, while a non-government organisation is focused on achieving additional protection for the ten rarest species. Simulated input data were generated using distributions taken from real data to model the cost of parcels and the rarity and co-occurrence of species. We investigated three types of collaborative interactions between agents: acting in isolation, sharing information, and pooling resources, with the third option resulting in the agents combining their resources and effectively acting as a single entity. In each scenario we determine the cost savings when an agent moves from acting in isolation to either sharing information or pooling resources with the other agent. The model demonstrates how the value of collaboration can vary significantly in different situations. In most cases, collaborating would have associated costs, and these costs need to be weighed against the potential benefits from collaboration. Our model demonstrates a method for determining the range of costs that would result in collaboration providing an efficient use of scarce conservation resources.

Relevance: 30.00%

Abstract:

Web document cluster analysis plays an important role in information retrieval by organizing large numbers of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two levels of knowledge granularity (document and term) but ignores the bridging paragraph granularity. This two-level granularity may lead to unsatisfactory clustering results exhibiting "false correlation". To deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-score.
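
For contrast, the flat VSM baseline that HRMM improves upon can be sketched in a few lines (documents are invented placeholders; only document/term granularity is used, with no paragraph level):

```python
# The flat VSM baseline (not HRMM): documents as TF-IDF term vectors
# clustered with k-means, using only document/term granularity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "web document clustering with vector space models",
    "granular computing for document representation",
    "football match results and league tables",
    "tennis tournament draws and rankings",
]
X = TfidfVectorizer().fit_transform(docs)   # document-term matrix only
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                               # no paragraph-level granularity
```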

Relevance: 30.00%

Abstract:

We investigate the problem of obtaining a dense reconstruction in real-time, from a live video stream. In recent years, multi-view stereo (MVS) has received considerable attention and a number of methods have been proposed. However, most methods operate under the assumption of a relatively sparse set of still images as input and unlimited computation time. Video based MVS has received less attention despite the fact that video sequences offer significant benefits in terms of usability of MVS systems. In this paper we propose a novel video based MVS algorithm that is suitable for real-time, interactive 3d modeling with a hand-held camera. The key idea is a per-pixel, probabilistic depth estimation scheme that updates posterior depth distributions with every new frame. The current implementation is capable of updating 15 million distributions/s. We evaluate the proposed method against the state-of-the-art real-time MVS method and show improvement in terms of accuracy.
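
The per-pixel update can be caricatured with a discrete posterior over depth hypotheses (the paper uses a compact parametric posterior; the Gaussian-plus-uniform outlier likelihood below is a common robustification, and all numbers are invented):

```python
# Toy per-pixel depth filter: a histogram posterior over depth hypotheses
# updated per frame with a Gaussian-plus-uniform-outlier likelihood.
# The paper's filter is parametric; this only illustrates the update.
import numpy as np

depths = np.linspace(0.5, 5.0, 200)                  # hypotheses (m)
posterior = np.full(depths.size, 1.0 / depths.size)  # flat prior

def update(posterior, z, sigma=0.1, inlier_ratio=0.7):
    gauss = np.exp(-0.5 * ((depths - z) / sigma) ** 2)
    like = inlier_ratio * gauss / gauss.sum() \
           + (1 - inlier_ratio) / depths.size        # tolerate outliers
    post = posterior * like
    return post / post.sum()

for z in [2.1, 2.0, 3.9, 2.05, 1.95]:                # one outlier (3.9)
    posterior = update(posterior, z)
print(f"depth estimate: {depths[np.argmax(posterior)]:.2f} m")
```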

Relevance: 30.00%

Abstract:

The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. An outline of a business game is presented that demonstrates the increasing complexity of the management problem when moving through the Base, Intermediate and Advanced levels of servitization. Linked data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and, more generally, in supply chains.