813 results for Constraint based modelling

Relevance: 80.00%

Abstract:

The main objective of this PhD Thesis is to go more deeply into the analysis and design of an intelligent system for surface roughness prediction and control in the end-milling machining process, based fundamentally on Bayesian network classifiers, with the aim of developing a methodology that makes the design of this type of system easier. The system, whose purpose is to make surface roughness prediction and control possible, consists of a model learnt from experimental data with the aid of Bayesian networks, which helps to understand the dynamic processes involved in the machining and the interactions among the relevant variables. Since artificial neural networks are models widely used in material cutting processes, we also include an end-milling model using them, in which the geometry and hardness of the workpiece are introduced as novel variables not studied so far in this context. Thus, an important contribution of this thesis is these two models for surface roughness prediction, which are then compared with respect to different aspects: the influence of the new variables, performance evaluation metrics, and interpretability. One of the main problems with Bayesian classifier-based modelling is understanding the enormous posterior probability tables produced. We introduce an explanation method that generates a set of rules obtained from decision trees. Such trees are induced from a simulated data set generated from the posterior probabilities of the class variable, calculated with the Bayesian network learned from a training data set. Finally, we contribute to the multi-objective field in the case where some of the objectives cannot be quantified as real numbers but only as interval-valued functions. This often occurs in machine learning applications, especially those based on supervised classification. Specifically, the dominance and Pareto front ideas are extended to this setting. They are applied to the surface roughness prediction studies in the case of maximizing simultaneously the sensitivity and specificity of the induced Bayesian network classifier, rather than only maximizing the correct classification rate. The intervals for these two objectives come from an honest estimation method for both objectives, such as k-fold cross-validation or bootstrap.
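
As a rough illustration of the explanation method described above, the following Python sketch (using scikit-learn, with a naive Bayes model standing in for the learned Bayesian network; feature names and data are invented for illustration) labels a simulated data set with the class of highest posterior probability and induces a decision tree whose branches read as rules:

```python
# Sketch of the explanation idea: approximate a Bayesian classifier's
# posterior with a decision tree and read off rules from its branches.
# GaussianNB stands in for the learned Bayesian network here.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))                        # e.g. feed, speed, hardness
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)  # toy roughness class

bn = GaussianNB().fit(X_train, y_train)          # the "Bayesian network" classifier

# Simulate a data set, labelled with the class of highest posterior probability.
X_sim = rng.normal(size=(5000, 3))
y_sim = bn.predict_proba(X_sim).argmax(axis=1)

# Induce a shallow tree on the simulated data and print readable rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X_sim, y_sim)
print(export_text(tree, feature_names=["feed", "speed", "hardness"]))
```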

Relevance: 80.00%

Abstract:

In Europeanized policy domains, executive actors are considered especially powerful because they are directly responsible for international negotiations. However, in order to avoid failing in the ratification process, they are also highly dependent on the support of domestic, non-state actors. We argue that in Europeanized decision-making processes, state actors are not passively lobbied, but actively seek collaboration with - and support from - domestic actors. We apply stochastic actor-based modelling for network dynamics to collaboration data on two successive bilateral agreements on the free movement of persons between Switzerland and the European Union (EU). Results confirm our hypotheses that state actors are not passively lobbied, but actively look for collaboration with other actors, and especially with potential veto players and euro-sceptical actors from both the conservative Right and the Left.
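
Stochastic actor-based models of this kind (commonly associated with Snijders' SIENA framework) assume a simple micro-step that the toy Python sketch below reproduces: an actor myopically toggles one tie by a multinomial-logit choice over a utility built from density and reciprocity effects. The effect weights are assumptions for illustration; this is not the estimation procedure used in the study.

```python
# Toy micro-step of a stochastic actor-oriented model for network dynamics:
# a randomly chosen actor evaluates toggling each possible tie and picks one
# change (or no change) with multinomial-logit probabilities.
import numpy as np

rng = np.random.default_rng(1)
n = 10
net = rng.random((n, n)) < 0.1               # directed adjacency matrix
np.fill_diagonal(net, False)

BETA_DENSITY, BETA_RECIPROCITY = -1.5, 1.0   # assumed effect parameters

def utility(net, i):
    """Actor i's evaluation of the network (density + reciprocity effects)."""
    out = net[i]
    return BETA_DENSITY * out.sum() + BETA_RECIPROCITY * (out & net[:, i]).sum()

def micro_step(net):
    i = rng.integers(n)
    options = []                             # candidate networks: each toggle + no change
    for j in range(n):
        if j == i:
            continue
        cand = net.copy()
        cand[i, j] = ~cand[i, j]
        options.append(cand)
    options.append(net.copy())
    scores = np.array([utility(c, i) for c in options])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return options[rng.choice(len(options), p=probs)]

for _ in range(100):
    net = micro_step(net)
print("ties after 100 micro-steps:", int(net.sum()))
```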

Relevance: 80.00%

Abstract:

A simulation-based modelling approach is used to examine the effects of stratified seed dispersal (representing the distribution of the majority of dispersal around the maternal parent and also rare long-distance dispersal) on the genetic structure of maternally inherited genomes and the colonization rate of expanding plant populations. The model is parameterized to approximate postglacial oak colonization in the UK, but is relevant to plant populations that exhibit stratified seed dispersal. The modelling approach considers the colonization of individual plants over a large area (three 500 km x 10 km rolled transects are used to approximate a 500 km x 300 km area). Our approach shows how the interaction of plant population dynamics with stratified dispersal can result in a spatially patchy haplotype structure. We show that while both colonization speeds and the resulting genetic structure are influenced by the characteristics of the dispersal kernel, they are robust to changes in the periodicity of long-distance events, provided the average number of long-distance dispersal events remains constant. We also consider the effects of additional physical and environmental mechanisms on plant colonization. Results show significant changes in genetic structure when the initial colonization of different haplotypes is staggered over time and when a barrier to colonization is introduced. Environmental influences on survivorship and fecundity affect both the genetic structure and the speed of colonization. The importance of these mechanisms in relation to the postglacial spread and genetic structure of oak in the UK is discussed.
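
As a sketch of what stratified dispersal means operationally, the Python snippet below disperses most seeds a short distance around the parent while a rare fraction makes long jumps, and tracks the advancing colonization front. Kernel parameters and population caps are illustrative assumptions, not the study's values.

```python
# Minimal sketch of stratified seed dispersal on a 1D transect:
# most seeds land near the parent; a rare fraction jumps far ahead.
import numpy as np

rng = np.random.default_rng(2)
LOCAL_SD = 0.05        # km, spread of ordinary dispersal (assumed)
FAR_MEAN = 10.0        # km, mean of rare long-distance jumps (assumed)
P_FAR = 1e-3           # probability a seed disperses long-distance (assumed)
SEEDS_PER_TREE = 50

def disperse(parents):
    """Return offspring positions after one generation of seed dispersal."""
    pos = np.repeat(parents, SEEDS_PER_TREE)
    far = rng.random(pos.size) < P_FAR
    pos = pos + rng.normal(0, LOCAL_SD, pos.size)      # local component
    pos[far] += rng.exponential(FAR_MEAN, far.sum())   # rare long jumps
    return pos

front = [0.0]
positions = np.zeros(1)
for gen in range(20):
    positions = disperse(positions)
    # crude carrying-capacity cap so the population stays bounded
    positions = rng.choice(positions, size=min(positions.size, 2000), replace=False)
    front.append(positions.max())
print("front position by generation (km):", np.round(front, 1))
```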

Relevance: 80.00%

Abstract:

Physically based distributed models of catchment hydrology are likely to be made available as engineering tools in the near future. Although these models are based on theoretically acceptable equations of continuity, there are still limitations in the present modelling strategy. Of interest to this thesis are the current modelling assumptions made concerning the effects of soil spatial variability, including formations producing distinct zones of preferential flow. The thesis contains a review of current physically based modelling strategies and a field-based assessment of soil spatial variability. In order to investigate the effects of soil nonuniformity, a fully three-dimensional model of variably saturated flow in porous media is developed. The model is based on a Galerkin finite element approximation to Richards' equation. Access to a vector processor permits numerical solutions on grids containing several thousand node points. The model is applied to a single hillslope segment under various degrees of soil spatial variability. Such variability is introduced by generating random fields of saturated hydraulic conductivity using the turning bands method. Similar experiments are performed under conditions of preferred soil moisture movement. The results show that the influence of soil variability on subsurface flow may be less significant than suggested in the literature, due to the integrating effects of three-dimensional flow. Under conditions of widespread infiltration-excess runoff, the results indicate a greater significance of soil nonuniformity. The recognition of zones of preferential flow is also shown to be an important factor in accurate rainfall-runoff modelling. Using the results of various fields of soil variability, experiments are carried out to assess the validity of the commonly used concept of 'effective parameters'. The results of these experiments suggest that such a concept may be valid in modelling subsurface flow. However, the effective parameter is observed to be event-dependent when the dominating mechanism is infiltration-excess runoff.
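
The thesis generates its conductivity fields with the turning bands method; as a simpler stand-in with a similar end product (a spatially correlated, log-normally distributed K field), the sketch below smooths white noise with a Gaussian filter. All statistics are assumed for illustration.

```python
# Sketch: a correlated random field of saturated hydraulic conductivity.
# Gaussian-filtered white noise stands in for the turning bands method.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
NX, NY, NZ = 40, 40, 20        # grid with "several thousand node points"
MEAN_LNK = np.log(1e-5)        # assumed geometric-mean K (m/s)
SIGMA_LNK = 1.0                # assumed standard deviation of ln K
CORR_CELLS = 4                 # assumed correlation length, in cells

noise = rng.normal(size=(NX, NY, NZ))
field = gaussian_filter(noise, sigma=CORR_CELLS)   # impose spatial correlation
field *= SIGMA_LNK / field.std()                   # rescale to target variance
K = np.exp(MEAN_LNK + field)                       # log-normal conductivity
print("K range (m/s): %.2e to %.2e" % (K.min(), K.max()))
```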

Relevance: 80.00%

Abstract:

Multi-agent systems are complex systems comprised of multiple intelligent agents that act either independently or in cooperation with one another. Agent-based modelling is a method for studying complex systems such as economies, societies and ecologies. Due to their complexity, mathematical analysis is very often limited in its ability to analyse such systems; in these cases, agent-based modelling offers a practical, constructive method of analysis. The objective of this book is to shed light on some emergent properties of multi-agent systems. The authors focus their investigation on the effect of knowledge exchange on the convergence of complex multi-agent systems.
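
A minimal sketch of the book's theme, under assumed update rules and parameters: agents hold noisy estimates of some quantity and repeatedly exchange knowledge with random partners, and the spread of beliefs across the population shrinks (convergence).

```python
# Toy knowledge-exchange model: pairwise averaging drives convergence.
import numpy as np

rng = np.random.default_rng(4)
N_AGENTS, TRUE_VALUE, MIX = 100, 10.0, 0.5   # assumed parameters

beliefs = TRUE_VALUE + rng.normal(0, 5, N_AGENTS)   # independent noisy starts
for step in range(200):
    i, j = rng.choice(N_AGENTS, size=2, replace=False)
    shared = MIX * beliefs[i] + (1 - MIX) * beliefs[j]   # pairwise exchange
    beliefs[i] = beliefs[j] = shared
    if step % 50 == 0:
        print(f"step {step:3d}: belief spread = {beliefs.std():.3f}")
```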

Relevance: 80.00%

Abstract:

Fibre overlay is a cost-effective technique to alleviate wavelength blocking in some links of a wavelength-routed optical network by increasing the number of wavelengths in those links. In this letter, we investigate the effects of overlaying fibre in an all-optical network (AON) based on the GÉANT2 topology. The constraint-based routing and wavelength assignment (CB-RWA) algorithm locates where cost-efficient upgrades should be implemented. Through numerical examples, we demonstrate that the network capacity improves by 25 per cent when fibre is overlaid on 10 per cent of the links, and by 12 per cent when hop-reduction links comprising 2 per cent of the links are provided. For the upgraded network, we also show the impact of dynamic traffic allocation on the blocking probability. Copyright © 2010 John Wiley & Sons, Ltd.
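
In the spirit of the CB-RWA idea (not the letter's actual algorithm), the sketch below routes each demand on its shortest path, assigns the first wavelength free on every hop, and counts where blocking occurs so the most congested links become overlay candidates. Topology and wavelength count are toy assumptions.

```python
# First-fit routing and wavelength assignment on a toy ring network,
# recording blocked links as candidates for fibre overlay.
import itertools
import networkx as nx

G = nx.cycle_graph(6)                        # toy ring, not GÉANT2
W = 4                                        # wavelengths per fibre (assumed)
used = {tuple(sorted(e)): set() for e in G.edges}
block_count = {e: 0 for e in used}

def first_fit(path):
    """Assign the lowest wavelength free on all hops, or None (blocked)."""
    hops = [tuple(sorted(h)) for h in zip(path, path[1:])]
    for w in range(W):
        if all(w not in used[h] for h in hops):
            for h in hops:
                used[h].add(w)
            return w
    for h in hops:                           # record where blocking occurred
        block_count[h] += 1
    return None

demands = list(itertools.combinations(range(6), 2))
blocked = sum(first_fit(nx.shortest_path(G, s, t)) is None for s, t in demands)
print(f"blocked {blocked}/{len(demands)} demands")
print("overlay candidates:",
      sorted(block_count, key=block_count.get, reverse=True)[:2])
```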

Relevance: 80.00%

Abstract:

Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than on a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first-principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both in terms of knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building ‘back to back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to carry risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types, and it has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.

Relevance: 80.00%

Abstract:

The supply chain can be a source of competitive advantage for the firm. Simulation is an effective tool for investigating supply chain problems. The three main simulation approaches in the supply chain context are System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). A sample from the literature suggests that whilst SD and ABM have been used to address strategic and planning problems, DES has mainly been used on planning and operational problems. A review of received wisdom suggests that historically, driven by custom and practice, certain simulation techniques have been focused on certain problem types. A theoretical review of the techniques, however, suggests that the scope of their application should be much wider and that supply chain practitioners could benefit from applying them in this broader way.

Relevance: 80.00%

Abstract:

Returnable transport equipment (RTE) such as pallets forms an integral part of the supply chain, and poor management leads to costly losses. Companies often address this matter by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste, whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that applying radio frequency identification (RFID) to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing the behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed in order to represent the trucks and the RTE, with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules which integrate RTE pick-ups as the trucks return to the depot. The findings assert that:
- agent-based modelling provides an autonomous tool which is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
- the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
- the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering the fleet costs and utilisation rates;
- the research conducted contributes an agent-based platform which LSPs can use in order to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
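
A toy Python sketch of the negotiation outcome described above (not the thesis's agent platform): each RTE batch "asks" all trucks for a bid, and the truck with the cheapest detour on its return-to-depot leg collects it if the detour is acceptable. Positions, the cost model and the threshold are illustrative assumptions.

```python
# Greedy insertion of RTE pick-ups into trucks' return-to-depot legs.
from dataclasses import dataclass, field

@dataclass
class Truck:
    name: str
    position: tuple                      # (x, y) at end of deliveries
    pickups: list = field(default_factory=list)

DEPOT = (0.0, 0.0)
MAX_DETOUR = 3.0                         # assumed acceptable extra distance

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def detour(truck, rte_site):
    """Extra distance if the truck collects the RTE on its way back."""
    direct = dist(truck.position, DEPOT)
    via = dist(truck.position, rte_site) + dist(rte_site, DEPOT)
    return via - direct

trucks = [Truck("T1", (8.0, 1.0)), Truck("T2", (2.0, 7.0))]
rte_sites = [(6.0, 2.0), (3.0, 5.0), (9.0, 9.0)]

for site in rte_sites:                   # each RTE batch solicits bids
    best = min(trucks, key=lambda t: detour(t, site))
    if detour(best, site) <= MAX_DETOUR:   # cheapest bid wins if acceptable
        best.pickups.append(site)

for t in trucks:
    print(t.name, "collects", t.pickups)
```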

Relevance: 80.00%

Abstract:

Agent-based modelling techniques have been employed for some time in macroeconomics. This paper tests some popular saving rules in an adaptive-evolutionary context, looking at their relative survival values. Three types of household are introduced: a prudent one, a short-sighted one, and one that behaves according to the permanent-income hypothesis. It is found that where selection pressure is very high, only the prudent type persists. The second most resilient seems to be the short-sighted type, but all three coexist even at medium levels of selection pressure. When the efficiency of capital approaches the level usually assumed in macroeconomics, the prudent type drives the economy towards excessive accumulation of capital, i.e. a long-term savings rate that exceeds the golden rule. If credit constraints are relaxed, this tendency strengthens: as credit grows, capital-owners seem to allow themselves to be "exploited" by those without capital income. From the angle of long-run average consumption, the best outcome is obtained from a balanced mix of the three types, although this is accompanied by much higher volatility than when only prudent households exist.
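
A minimal sketch of the kind of exercise described, under toy assumptions (saving rates, returns and imitation share are invented, not the paper's calibration): households follow one of three saving rules, and periodically the poorest imitate the rules of the richest, so selection pressure determines which rules survive.

```python
# Toy adaptive-evolutionary selection among three household saving rules.
import numpy as np

rng = np.random.default_rng(5)
RULES = ["prudent", "short-sighted", "permanent-income"]
SAVE_RATE = {"prudent": 0.35, "short-sighted": 0.05, "permanent-income": 0.20}
N, R, SELECTION = 300, 1.03, 0.10      # agents, gross return, imitation share

rule = rng.choice(len(RULES), N)
wealth = np.ones(N)
for t in range(500):
    income = 1.0 + (R - 1.0) * wealth                  # wage plus capital income
    saving = np.array([SAVE_RATE[RULES[k]] for k in rule]) * income
    wealth = wealth + saving                           # consume the rest
    if t % 25 == 0:                                    # selection step
        k = int(N * SELECTION)
        poor = np.argsort(wealth)[:k]
        rich = np.argsort(wealth)[-k:]
        rule[poor] = rule[rng.permutation(rich)]       # imitate the richer

for k, name in enumerate(RULES):
    print(f"{name:17s}: {np.mean(rule == k):.0%} of households")
```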

Relevance: 80.00%

Abstract:

A landfill represents a complex and dynamically evolving structure that can be stochastically perturbed by exogenous factors. Both thermodynamic (equilibrium) and time-varying (non-steady state) properties of a landfill are affected by spatially heterogeneous and nonlinear subprocesses that combine with constraining initial and boundary conditions arising from the associated surroundings. While multiple attempts have been made to model landfill statistics by incorporating spatially dependent parameters on the one hand (the data-based approach) and continuum dynamical mass-balance equations on the other (equation-based modelling), practically no attempt has been made to amalgamate these two approaches while also incorporating the inherent stochastically induced fluctuations affecting the process overall. In this article, we implement a minimalist scheme for modelling the time evolution of a realistic three-dimensional landfill through a reaction-diffusion based approach, focusing on the coupled interactions of four key variables - solid mass density, hydrolysed mass density, acetogenic mass density and methanogenic mass density - which are themselves stochastically affected by fluctuations, coupled with diffusive relaxation of the individual densities, in ambient surroundings. Our results indicate that close to the linearly stable limit, the large-time steady-state properties, arising out of a series of complex coupled interactions between the stochastically driven variables, are scarcely affected by the biochemical growth-decay statistics. Our results clearly show that an equilibrium landfill structure is primarily determined by the solid and hydrolysed mass densities only, rendering the other variables statistically "irrelevant" in this (large-time) asymptotic limit. The other major implication of incorporating stochasticity in the landfill evolution dynamics is the hugely reduced production times of the plants, which are now approximately 20-30 years instead of the 50 years and above predicted by previous deterministic models. The predictions from this stochastic model are in conformity with available experimental observations.
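
A minimal sketch of the modelling scheme described, on a 1D grid for brevity: the four densities evolve by coupled reaction-diffusion dynamics with additive noise, stepped with an Euler-Maruyama update. Rates, couplings and noise strength are assumptions for illustration, not the article's parameters.

```python
# Stochastic reaction-diffusion sketch: solid -> hydrolysed -> acetogenic
# -> methanogenic mass densities, with diffusion and additive noise.
import numpy as np

rng = np.random.default_rng(6)
NX, DT, DX, STEPS = 50, 0.01, 1.0, 2000
D = 0.1                                        # common diffusion coefficient
K_HYD, K_ACE, K_MET, NOISE = 0.5, 0.3, 0.2, 0.02   # assumed rates

def laplacian(u):
    """Periodic-boundary discrete Laplacian."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / DX**2

solid = np.ones(NX); hydro = np.zeros(NX)
aceto = np.zeros(NX); metha = np.zeros(NX)

for _ in range(STEPS):
    dw = rng.normal(0, np.sqrt(DT), (4, NX)) * NOISE      # stochastic forcing
    solid += DT * (-K_HYD * solid + D * laplacian(solid)) + dw[0]
    hydro += DT * (K_HYD * solid - K_ACE * hydro + D * laplacian(hydro)) + dw[1]
    aceto += DT * (K_ACE * hydro - K_MET * aceto + D * laplacian(aceto)) + dw[2]
    metha += DT * (K_MET * aceto + D * laplacian(metha)) + dw[3]

print("mean densities:", [round(float(u.mean()), 3)
                          for u in (solid, hydro, aceto, metha)])
```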

Relevance: 80.00%

Abstract:

This paper adopts a sales resource management (SRM) framework to provide guidance on how to develop effective salespeople via sales training. SRM can be used to identify individual training needs based on individual-based modelling data, and these data can also be used to evaluate the outcome of sales training. The paper also gives some suggestions on the forms of sales training that are most likely to develop effective salespeople. © 2010 IEEE.

Relevance: 80.00%

Abstract:

Constraint programming is a powerful technique for solving, among others, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While executing, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means deciding when each task must start and which resource must execute it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete; these problems can be solved in polynomial time using algorithms of their own. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques in constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to prune the search tree by excluding domain values that cannot participate in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques for the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems whose optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of available resource fluctuates over time, or where the cost of executing a task at time t depends on t.
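
To make the filtering idea concrete, here is a minimal Python sketch of one classic technique, time-table filtering for the cumulative constraint; task data and capacity are illustrative assumptions, and production filtering algorithms are far more efficient. Wherever a task's latest start precedes its earliest completion, it has a compulsory part that always consumes capacity, and other tasks' earliest start times can be pushed past time points where adding them would exceed capacity.

```python
# Time-table filtering sketch for the cumulative scheduling constraint.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est: int      # earliest start time
    lst: int      # latest start time
    dur: int      # duration
    demand: int   # resource consumption rate

CAPACITY = 3
tasks = [Task("A", 0, 0, 4, 2),      # fixed: compulsory part over [0, 4)
         Task("B", 0, 5, 3, 2)]      # flexible: can it start at 0?

def profile(tasks, horizon):
    """Resource usage from compulsory parts at each unit time point."""
    use = [0] * horizon
    for t in tasks:
        for time in range(t.lst, t.est + t.dur):   # compulsory part, if any
            if 0 <= time < horizon:
                use[time] += t.demand
    return use

def filter_est(task, others, horizon):
    """Push task.est past time points it cannot execute over."""
    use = profile(others, horizon)
    start = task.est
    moved = True
    while moved:                                   # repeat until stable
        moved = False
        for time in range(start, start + task.dur):
            if use[time] + task.demand > CAPACITY:
                start = time + 1                   # push past the conflict
                moved = True
                break
    return start

new_est = filter_est(tasks[1], [tasks[0]], horizon=12)
print("B's earliest start filtered from", tasks[1].est, "to", new_est)
```

On this toy instance the filter tightens B's earliest start from 0 to 4, since A's compulsory part occupies too much capacity until time 4; in a constraint solver this pruning shrinks the search tree before any branching occurs.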

Relevance: 80.00%

Abstract:

Several modern-day cooling applications require the incorporation of mini/micro-channel shear-driven flow condensers, and there are several design challenges that must be overcome in order to meet those requirements. The difficulty in developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between the physics-based modelling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. One of the primary contributors to this disconnect is a lack of understanding caused by the fact that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, when that may very well not be justified. This is in direct contrast to direct physics-based modelling approaches, where the thermal boundary conditions have a direct and large impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the value of the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modelling can be equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another large factor that raises doubts about the validity of heat transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. However, several heat transfer correlations ignore flow regimes entirely and present a single heat transfer correlation for all flow regimes. This is believed to be inaccurate, since one would expect significant differences in the heat transfer correlations for different flow regimes. Several other studies present a heat transfer correlation for a particular flow regime but ignore the method by which the extent of that flow regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be obtained independently for a flow regime without knowing the flow regime boundary (even if the flow regime boundary is represented through a separate and independent correlation). To obtain the results required to arrive at an answer to these questions, this study uses two numerical simulation tools - the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and the approximate values of flow regime transitions, a deeper understanding of the current state of knowledge in flow regime maps and heat transfer correlations in shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows. Analogous correlations can also be obtained for internal condensers in gravity-driven and mixed-driven configurations.
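
An example of the kind of "typical engineering correlation" the thesis scrutinises, in which the condensation heat transfer coefficient is written purely as a function of vapor quality and fluid properties: the sketch below evaluates one widely cited form, the Shah (1979) correlation. The property values are rough figures assumed for illustration, not design data.

```python
# Shah (1979) condensation correlation: h depends on vapor quality x and
# fluid properties, but not on the thermal boundary condition.
import numpy as np

def shah_htc(x, G, D, k_l, mu_l, pr_l, p_red):
    """Condensation heat transfer coefficient, W/(m^2 K)."""
    re_lo = G * D / mu_l                              # all-liquid Reynolds number
    h_lo = 0.023 * re_lo**0.8 * pr_l**0.4 * k_l / D   # Dittus-Boelter, liquid only
    return h_lo * ((1 - x)**0.8
                   + 3.8 * x**0.76 * (1 - x)**0.04 / p_red**0.38)

# Rough R-134a-like conditions in a mini-channel (assumed for illustration).
for x in np.linspace(0.1, 0.9, 5):
    h = shah_htc(x, G=300.0, D=1e-3, k_l=0.08, mu_l=2e-4, pr_l=3.5, p_red=0.25)
    print(f"x = {x:.1f}: h ~ {h:7.0f} W/m^2K")

# Note that the cooling method on the condenser surface never appears,
# which is precisely the assumption the thesis questions.
```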