950 results for Complex combinatorial problem


Relevance:

80.00%

Publisher:

Abstract:

With the exploration and exploitation of mature oil fields, the problem of sustaining and enhancing combined oil and gas recovery has emerged, placing new and higher demands on the precision of seismic data. Based on a study of the exploration status, resource potential, and 3D seismic data quality of representative mature oil fields, and taking the Ken71 zone of the Shengli oil field as the study area, this paper applies high-density 3D seismic techniques to the complex geological problems encountered in the exploration and development of mature regions, with in-depth research on the acquisition and processing of high-density 3D seismic data. The dissertation examines the roles of conventional 3D seismic, high-density 3D seismic, 3D VSP seismic, and multi-wave multi-component seismic surveys in solving the geological problems of mature regions, discusses in particular the advantages and shortcomings of high-density 3D seismic exploration, and proposes an integrated study method that gives priority to high-density 3D seismic data, combined with other seismic data, to enhance exploration accuracy in mature regions. After a detailed study of acquisition methods for high-density 3D seismic and 3D VSP seismic, physical and numerical simulations were carried out to design and optimize the observation system. A "four-combination" integrated borehole-and-surface acquisition method and a "three-synchronization" technique were optimized, realizing combined P-wave and S-wave acquisition, combined digital-geophone and analogue-geophone acquisition, combined 3D VSP and surface seismic acquisition, and combined cross-well seismic acquisition; synchronous recording by surface equipment and downhole instruments, shots shared and recorded synchronously by the 3D VSP and surface surveys, and synchronous acquisition of high-density P-wave and high-density multi-wave data were implemented, yielding a large volume of high-quality seismic data. Based on a detailed analysis of the high-density analogue-geophone data, amplitude-preserving processing techniques were adopted, the trade-off between signal-to-noise ratio and resolution was studied to improve the resolution of the seismic profiles, and post-stack cascaded migration, prestack time migration and prestack depth migration were used for high-precision imaging, producing reliable high-resolution data. High-precision processing of the high-density digital-geophone data likewise yielded clear improvements in resolution, fidelity, fault-point clarity, interbed information, formation characterization and so on. Comparison of the processing results shows that high-density analogue-geophone acquisition with high-precision imaging can enhance resolution, and that high-density seismic based on digital geophones can better solve subsurface geological problems. Fine processing of the synchronously acquired converted-wave and 3D VSP seismic data also achieved good results. On the basis of high-density seismic data acquisition and processing, high-precision structural interpretation and inversion were carried out, together with a preliminary interpretation of the 3D VSP and multi-wave multi-component seismic data. The high-precision interpretation indicates that, after high-resolution processing, the structural maps obtained from high-density seismic data accord better with the true geological situation.

Relevance:

80.00%

Publisher:

Abstract:

It is in the interests of everybody that the environment is protected. In view of recent leaps in environmental awareness, it would therefore seem timely and sensible for people to pool vehicle resources to minimise the damaging impact of emissions. However, this is often contrary to how complex social systems behave – local decisions made by self-interested individuals often have emergent effects that are in the interests of nobody. For software engineers, a major challenge is to help facilitate individual decision-making such that individual preferences can be met, which, when accumulated, minimise adverse effects at the level of the transport system. We introduce this general problem through a concrete example based on vehicle-sharing. Firstly, we outline the kind of complex transportation problem that is directly addressed by our technology (CO2y™ - pronounced “cosy”), and also show how this differs from other more basic software solutions. The CO2y™ architecture is then briefly introduced. We outline the practical advantages of the advanced, intelligent software technology that is designed to satisfy a number of individual preference criteria and thereby find appropriate matches within a population of vehicle-share users. An example scenario of use is put forward, i.e., minimisation of grey fleets within a medium-sized company. Here we comment on some of the underlying assumptions of the scenario, and how in a detailed real-world situation such assumptions might differ between companies and individual users. Finally, we summarise the paper, and conclude by outlining how the problem of pooled transportation is likely to benefit from the further application of emergent, nature-inspired computing technologies. These technologies allow systems-level behaviour to be optimised with explicit representation of individual actors. With these techniques we hope to make real progress in facing the complexity challenges that transportation problems produce.
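
To make the matching idea concrete, here is a minimal, hypothetical sketch of preference-based pairing of vehicle-share users. The names, scoring scheme and thresholds are invented for illustration and are not part of CO2y™, which moreover aims at emergent systems-level optimisation rather than the simple greedy pairing shown here.

# Hypothetical sketch of preference-based ride matching. CO2y(TM) itself is
# not public; this only illustrates the general idea of scoring pairwise
# compatibility and greedily forming vehicle-share pairs.
from itertools import combinations

def compatibility(a, b, max_detour_km=3.0, max_wait_min=15):
    """Score how well two commuters can share a vehicle (higher is better)."""
    detour = abs(a["route_km"] - b["route_km"])    # crude route proxy
    wait = abs(a["depart_min"] - b["depart_min"])  # schedule offset
    if detour > max_detour_km or wait > max_wait_min:
        return None  # incompatible preferences
    return 1.0 / (1.0 + detour + wait / 10.0)

def greedy_pairing(commuters):
    """Match the most compatible pairs first."""
    scored = []
    for a, b in combinations(commuters, 2):
        s = compatibility(a, b)
        if s is not None:
            scored.append((s, a["name"], b["name"]))
    scored.sort(reverse=True)
    used, pairs = set(), []
    for s, x, y in scored:
        if x not in used and y not in used:
            pairs.append((x, y, round(s, 3)))
            used.update((x, y))
    return pairs

commuters = [
    {"name": "ann", "route_km": 12.0, "depart_min": 480},
    {"name": "bob", "route_km": 11.5, "depart_min": 490},
    {"name": "cat", "route_km": 30.0, "depart_min": 485},
]
print(greedy_pairing(commuters))  # -> [('ann', 'bob', 0.4)]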

Relevance:

80.00%

Publisher:

Abstract:

Olusanya, O. (2004). Double Jeopardy Without Parameters: Re-characterization in International Criminal Law. Series Supranational Criminal Law: Capita Selecta, volume 2. Antwerp: Intersentia. RAE2008

Relevance:

80.00%

Publisher:

Abstract:

Refined vegetable oils are widely used in the food industry as ingredients or components in many processed food products in the form of oil blends. To date, the generic term 'vegetable oil' has been used in the labelling of food containing oil blends. With the introduction of the new EU Regulation for Food Information (1169/2011), due to take effect in 2014, the oil species used must be clearly identified on the package, and there is a need to develop fit-for-purpose methodology for industry and regulators alike to verify the oil species present in a product. The available methodologies that may be employed to authenticate the botanical origin of a vegetable oil admixture were reviewed and evaluated. The majority of the sources, however, described techniques applied to crude vegetable oils such as olive oil, owing to the lack of studies focused on refined vegetable oils. Nevertheless, DNA-based typing methods and stable isotope procedures were found unsuitable for this particular purpose due to several issues. Only a small number of specific chromatographic and spectroscopic fingerprinting methods, in either targeted or untargeted mode, were found to be applicable in potentially providing a solution to this complex authenticity problem. Applied as a single method in isolation, these techniques would give only limited information on the oils' identity, as signals obtained for various oil types may overlap. Therefore, more complex and combined approaches are likely to be needed to identify the oil species present in oil blends, employing a stepwise approach in combination with advanced chemometrics. Options to provide such a methodology are outlined in the current study.
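
To make the idea of an untargeted fingerprinting step concrete, a minimal sketch is given below: unsupervised projection (PCA) of spectral fingerprints of oil samples, one of the chemometric building blocks such a stepwise approach could use. The spectra, class names and dimensions are synthetic stand-ins invented for illustration, not real analytical data.

# Hypothetical sketch: untargeted spectral fingerprinting + PCA, one building
# block of a chemometric pipeline for oil-blend authentication.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_wavelengths = 200

def synth_spectrum(peak_pos, n=20):
    """Generate n noisy spectra of a fictitious oil class with one marker peak."""
    x = np.arange(n_wavelengths)
    base = np.exp(-0.5 * ((x - peak_pos) / 8.0) ** 2)
    return base + 0.05 * rng.standard_normal((n, n_wavelengths))

rapeseed = synth_spectrum(peak_pos=60)
sunflower = synth_spectrum(peak_pos=120)
X = np.vstack([rapeseed, sunflower])

scores = PCA(n_components=2).fit_transform(X)
# Samples of the two fictitious oil types separate along the first component;
# with real refined oils, overlapping signals are exactly why a single method
# is rarely sufficient and combined approaches are needed.
print(scores[:3], scores[-3:])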

Relevance:

80.00%

Publisher:

Abstract:

Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tail-pipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and supportive government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries also have the ability to act as energy storage points on the distribution system. This dual charging-and-storage impact of many small, individually uncontrollable kW-scale loads (consumers will want maximum flexibility) on a distribution system that was never designed for such operation has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid: they can help avoid cycling of large combustion plant and use of expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and prevent overloading of the distribution system with its attendant reduction in power quality. In this paper, state-of-the-art scheduling methods for integrating plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Various existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear and non-linear mixed integer programming and dynamic programming), game theory, and meta-heuristic algorithms including genetic algorithms and particle swarm optimisation are all comprehensively surveyed, offering a systematic reference for grid scheduling considering intelligent electric vehicle integration.
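
As a toy instance of the "conventional optimisation" category the survey covers, the sketch below schedules one EV's charging over a few hourly slots as a linear program, minimising energy cost subject to an energy target and a charger limit. The prices, limits and demand are invented for illustration.

# Toy example of EV charging scheduling as a linear program.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.30, 0.12, 0.10, 0.25])  # EUR/kWh for 4 hourly slots
need_kwh = 10.0                              # energy the battery must receive
charger_kw = 7.0                             # per-slot charging limit

# Decision variables: kWh charged in each slot. Minimise total cost.
res = linprog(
    c=prices,
    A_eq=[np.ones_like(prices)], b_eq=[need_kwh],  # total energy delivered
    bounds=[(0, charger_kw)] * len(prices),        # charger power limit
)
print(res.x)  # [0., 3., 7., 0.] -> charging concentrated in the cheap slots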

Relevance:

80.00%

Publisher:

Abstract:

The optimal power flow problem has been widely studied in order to improve power system operation and planning. For real power systems, the problem is formulated as a non-linear, large-scale combinatorial problem. The first approaches used to solve it were based on mathematical methods that required huge computational effort. More recently, artificial intelligence techniques, such as metaheuristics inspired by biological processes, have been adopted. Metaheuristics require fewer computational resources, which is a clear advantage for addressing the problem in large power systems. This paper proposes a methodology to solve optimal power flow in an economic dispatch context using a Simulated Annealing algorithm, inspired by the cooling process used in metallurgy. The main contribution of the proposed method is a neighborhood generation scheme tailored to the characteristics of the optimal power flow problem. The proposed methodology has been tested on the IEEE 6-bus and 30-bus networks. The results are compared with other well-known methodologies from the literature, showing the effectiveness of the proposed method.
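
For readers unfamiliar with the metaheuristic itself, the following is a minimal, generic simulated annealing loop in Python, run on a stand-in quadratic dispatch cost. It is not the paper's implementation; in particular, the neighborhood move here is only a placeholder for the problem-specific one the paper contributes.

# Generic simulated annealing skeleton. The cost function and neighborhood
# move below are illustrative stand-ins, NOT the paper's OPF-specific ones.
import math
import random

def cost(x):
    """Stand-in dispatch cost: quadratic generator costs, two units."""
    return 0.5 * x[0] ** 2 + 0.3 * x[1] ** 2

def neighbor(x, step=0.5):
    """Placeholder move: shift output between two units, keeping total demand.
    The paper's contribution is a specialised version of this step."""
    d = random.uniform(-step, step)
    return [x[0] + d, x[1] - d]

def anneal(x, t0=10.0, alpha=0.95, iters=2000):
    best, t = x, t0
    for _ in range(iters):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= alpha  # geometric cooling schedule
    return best

random.seed(1)
print(anneal([8.0, 2.0]))  # total output stays 10; converges near [3.75, 6.25]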

Relevance:

80.00%

Publisher:

Abstract:

Chronic low back pain (CLBP) is a complex health problem whose psychological manifestations are not fully understood. Using interpretive phenomenological analysis, 11 semi-structured interviews were conducted to help understand the meaning of the lived experience of CLBP, focusing on the psychological response to pain and the role of depression, catastrophizing, fear-avoidance behavior, anxiety and somatization. Participants characterized CLBP as persistent tolerable low back pain (TLBP) interrupted by periods of intolerable low back pain (ILBP). ILBP contributed to recurring bouts of helplessness, depression, frustration with the medical system and increased fear based on the perceived consequences of anticipated recurrences, all of which were mediated by the uncertainty of such pain. During times of TLBP, all participants pursued a permanent pain consciousness, as they felt susceptible to a recurrence. As CLBP progressed, participants felt they were living with a weakness, became isolated from those without CLBP and integrated pain into their self-concept.

Relevance:

80.00%

Publisher:

Abstract:

This research project addresses the design and planning of long-haul optical networks, also called core networks (OWAN - Optical Wide Area Network). Such a network carries aggregated flows in circuit-switched mode. An OWAN connects different sites by optical fibres joined through optical and/or electrical switches/routers. An OWAN is meshed at the scale of a country or continent and carries data at very high bit rates. In the first part of the thesis, we address the design of agile optical networks. The agility problem is motivated by the growth in bandwidth demand and by the dynamic nature of traffic. The equipment deployed by network operators must offer more powerful and more flexible configuration tools to manage the complexity of client connections and accommodate evolving traffic. Often, network design consists of provisioning the bandwidth needed to carry a given traffic load; here, we additionally seek the best nodal configuration, with a level of agility capable of guaranteeing an optimal assignment of network resources. We also study two other problems faced by a network operator. The first is network resource assignment: once the network architecture is chosen in terms of equipment, the remaining question is how to dimension and optimise that architecture so that it achieves the best possible level of agility while satisfying all of the demand. Defining the routing topology is a complex optimisation problem: it consists of defining a set of logical optical paths, and choosing the physical routes they follow and the wavelengths they use, so as to optimise the quality of the solution with respect to a set of metrics measuring network performance. In addition, we must define the best network dimensioning strategy so that it is suited to the dynamic nature of traffic. The second problem is optimising the capital (CAPEX) and operational (OPEX) costs of the proposed transport architecture. For the type of dimensioning architecture considered in this thesis, CAPEX includes the costs of routing, installing and commissioning all network equipment installed at the connection endpoints and at intermediate nodes; OPEX covers all expenses related to operating the transport network. Given the symmetric nature and the exponential number of variables in most mathematical formulations developed for these problems, we particularly explored solution approaches based on column generation and greedy algorithms, which are well suited to solving large optimisation problems.
A comparative study of several resource allocation strategies and solution algorithms, over different data sets and OWAN transport networks, shows that the best network cost is obtained in two cases: an anticipative dimensioning strategy combined with a column generation solution method, both when rearrangement of already established connections is allowed and when it is forbidden. A good distribution of network resource usage is also observed in the scenarios combining a myopic dimensioning strategy with a resource allocation approach solved by column generation. The results also show that considerable savings are possible in capital and operational costs. Indeed, an intelligent, heterogeneous distribution of network resources across the nodes yields a substantial reduction in network cost compared with a classical resource allocation solution that adopts a homogeneous architecture with the same nodal configuration at every node. We showed that the number of photonic switches can be reduced while satisfying the traffic demand and keeping the overall resource allocation cost unchanged relative to the classical architecture, which implies a substantial reduction in CAPEX and OPEX. In our computational experiments, the cost reduction reaches up to 65% on some data sets and networks.
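
For readers unfamiliar with the solution technique that recurs throughout the thesis, the sketch below shows the generic structure of a column generation loop. It is only the textbook skeleton: the restricted master and pricing problems are stubs with fabricated numbers chosen so the loop terminates after a few iterations, not the thesis's actual master/pricing pair for OWAN design.

# Textbook column generation skeleton (structure only).
def solve_restricted_master(columns, demands):
    """Stand-in: would solve an LP over the current columns and return
    a primal solution plus dual prices on the demand constraints."""
    duals = {d: 0.2 * len(columns) for d in demands}   # fabricated duals
    return {"columns": list(columns)}, duals

def solve_pricing(duals):
    """Stand-in: would search for a column (e.g. a nodal configuration or
    lightpath) with negative reduced cost under the given duals."""
    reduced_cost = sum(duals.values()) - 1.0           # fabricated formula
    return ("new_column", reduced_cost) if reduced_cost < -1e-9 else None

def column_generation(initial_columns, demands, max_iter=50):
    columns = list(initial_columns)
    solution = None
    for _ in range(max_iter):
        solution, duals = solve_restricted_master(columns, demands)
        improving = solve_pricing(duals)   # pricing step
        if improving is None:              # no negative reduced cost: optimal
            break
        columns.append(improving)          # enlarge the restricted master
    return solution

print(column_generation(["initial_column"], ["demand_1", "demand_2"]))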

Relevance:

80.00%

Publisher:

Abstract:

Recent advances in wavelength selective switches (WSS) are fostering the development of multi-degree, directionless and colorless reconfigurable optical add/drop multiplexers (ROADM), regarded as highly promising equipment for future wavelength division multiplexing (WDM) mesh networks. However, their asymmetric switching property complicates the routing and wavelength assignment (RWA) problem, and most existing RWA algorithms do not take this asymmetry into account. Service interruptions caused by equipment failures on the lightpaths (the output of solving the RWA problem) result in the loss of large amounts of data. Research is therefore indispensable to ensure the survivability of optical networks, i.e., the maintenance of services, particularly in the event of equipment failure. Most previous publications focused on protection schemes that guarantee the rerouting of traffic upon a single link failure. However, protection designed against single link failures is not always sufficient for the survivability of WDM networks, given the many other failure types that are now common, such as equipment breakdown and two- or three-link failures. In addition, there are considerable challenges in protecting large multi-domain optical networks, composed of single-domain networks interconnected by inter-domain links, where the internal topological details of a domain are generally not shared externally. This thesis aims to propose large-scale optimisation models and solutions to the problems above. These models generate optimal or near-optimal solutions with mathematically proven optimality gaps. To this end, we use column generation to tackle the underlying large-scale linear programs. For provisioning in optical networks, we propose a new integer linear programming (ILP) model for the RWA problem that maximises the number of accepted requests (GoS - Grade of Service). The resulting model is a large-scale ILP, which yields exact solutions for fairly large RWA instances, assuming all nodes are asymmetric and come with a given switching connectivity matrix. We then modify the model and propose a solution to the RWA problem that finds the best switching matrix for a given number of ports and switching connections, while satisfying/maximising the grade of service. Regarding the protection of single-domain networks, we propose solutions for protection against multiple failures.
Specifically, we develop the protection of a single-domain network against multiple failures using Failure Independent Path Protecting (FIPP) p-cycles and Failure Dependent Path Protecting (FDPP) p-cycles. We then propose a new flow-based formulation for FDPP p-cycles under multiple failures. The new model raises a scalability issue, having an exponential number of constraints due to certain subtour elimination constraints. To solve it efficiently, we therefore examine (i) a hierarchical decomposition of the pricing problem in the decomposition model, and (ii) heuristics to manage the large number of constraints efficiently. For protection in multi-domain networks, we propose schemes protecting against single link failures. First, an optimisation model is proposed for a centralised protection scheme, assuming that network management is aware of all details of the domains' physical topologies. We then propose a distributed optimisation model for protection in multi-domain optical networks, a much more realistic formulation since it rests on the assumption of distributed network management. Next, we add shared bandwidth in order to reduce the cost of protection: the bandwidth of each intra-domain link is shared between FIPP p-cycles and p-cycles in a first study, and between link-protection and path-protection paths in a second study. Finally, we recommend parallel strategies for solving large multi-domain optical networks. The results of the study yield an efficient protection design for a very large multi-domain network (45 domains), the largest examined in the literature, with both centralised and distributed schemes.
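
As orientation for readers, a standard link-based RWA ILP maximising the number of accepted requests is shown below, writing the binary variable x for "request k uses wavelength λ on link e" and z_k for "request k is accepted". This is the common textbook form, not the thesis's exact model, which additionally encodes each node's asymmetric switching through a given connectivity matrix.

% Illustrative link-formulation RWA ILP maximising the Grade of Service.
\begin{align*}
\max\ & \sum_{k \in K} z_k && \text{(number of accepted requests)}\\
\text{s.t.}\ & \sum_{\lambda \in \Lambda} \sum_{e \in \omega^{+}(s_k)} x_{e}^{k\lambda} = z_k && \forall k \in K\\
& \sum_{e \in \omega^{+}(v)} x_{e}^{k\lambda} = \sum_{e \in \omega^{-}(v)} x_{e}^{k\lambda} && \forall k \in K,\ \lambda \in \Lambda,\ v \notin \{s_k, d_k\}\\
& \sum_{k \in K} x_{e}^{k\lambda} \leq 1 && \forall e \in E,\ \lambda \in \Lambda \quad \text{(wavelength clash)}\\
& x_{e}^{k\lambda} \in \{0,1\},\quad z_k \in \{0,1\}
\end{align*}

Wavelength continuity is implicit because flow conservation is imposed per wavelength layer; the clash constraint forbids two lightpaths from sharing a wavelength on a link.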

Relevance:

80.00%

Publisher:

Abstract:

Requirements Engineering (RE) is the commencing phase of the systems development life cycle and is concerned with understanding and specifying the customer's requirements. RE has been recognized as a complex cognitive problem-solving process which takes place in an unstructured and poorly understood problem context. A recent understanding describes the RE process as inherently creative, involving cycles of incremental building followed by insight-driven reconceptualization of the problem space. This chapter relates this new understanding to various creative process models described in the creativity and psychology of problem solving literature.

A review of current attempts to support problem solving in RE using various design rationale approaches suggests that their common major weakness lies in the lack of support for the creative and insight-driven problem-solving process in RE. In addressing this weakness, the chapter suggests a new approach to promoting and supporting RE creativity using design rationale. The suggested approach involves the ad hoc recording of rationale to support creative exploration, complemented by a post hoc conceptual characterization of the problem space to support insight-driven reconceptualization.

Relevance:

80.00%

Publisher:

Abstract:

We examine efficient computer implementation of one method of deterministic global optimisation, the cutting angle method. In this method the objective function is approximated from values below the function with a piecewise linear auxiliary function. The global minimum of the objective function is approximated from the sequence of minima of this auxiliary function. Computing the minima of the auxiliary function is a combinatorial problem, and we show that it can be effectively parallelised. We discuss the improvements made to the serial implementation of the cutting angle method, and ways of distributing computations across multiple processors on parallel and cluster computers.
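
As a simplified illustration of the mechanics, the sketch below performs one-dimensional global minimisation with a saw-tooth (Pijavskii-style) piecewise linear underestimator, a close univariate relative of the method: build the underestimator from sampled values, then repeatedly evaluate the objective at the underestimator's minimiser. The actual cutting angle method works on the unit simplex with a min-max auxiliary function, and computing its minima is the combinatorial subproblem the paper parallelises; the objective and Lipschitz bound below are invented.

# Simplified 1-D saw-tooth global minimisation (illustrative stand-in for
# the multivariate cutting angle method).
import math

def f(x):                      # objective (illustrative)
    return math.sin(3 * x) + 0.5 * x

L = 3.5                        # Lipschitz bound for f on [0, 2]

def lower_env_min(points):
    """Minimise the underestimator max_k (f(xk) - L|x - xk|) by checking
    the 'teeth' between consecutive sample points."""
    pts = sorted(points)
    best_x, best_v = None, float("inf")
    for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
        # Intersection of the two downward cones from (x1,f1) and (x2,f2).
        x = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)
        v = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

points = [(0.0, f(0.0)), (2.0, f(2.0))]
for _ in range(25):
    x, lower_bound = lower_env_min(points)  # minimum of auxiliary function
    points.append((x, f(x)))                # refine the underestimator

print(min(points, key=lambda p: p[1]))  # approximate global minimum on [0, 2]

In the multivariate case each evaluation of the auxiliary function's minima involves enumerating combinations of support vectors, which is exactly the combinatorial workload that the paper distributes across processors.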

Relevance:

80.00%

Publisher:

Abstract:

Investigation of the role of hypothesis formation in complex (business) problem solving has resulted in a new approach to hypothesis generation. A prototypical hypothesis generation paradigm for management intelligence has been developed, reflecting a widespread need to support management in such areas as fraud detection and intelligent decision analysis. This dissertation presents this new paradigm and its application to goal-directed problem solving methodologies, including case-based reasoning. The hypothesis generation model, which is supported by a dynamic hypothesis space, consists of three components, namely, Anomaly Detection, Abductive Reasoning, and Conflict Resolution models. Anomaly detection activates the hypothesis generation model by scanning anomalous data and relations in its working environment. The respective heuristics are activated by initial indications of anomalous behaviour based on evidence from historical patterns, linkages with other cases, inconsistencies, etc. Abductive reasoning, as implemented in this paradigm, is based on joining conceptual graphs, and provides an inference process that can incorporate a new observation into a world model by determining what assumptions should be added to the world so that it can explain new observations. Abductive inference is a weak mechanism for generating explanations and hypotheses: although a practical conclusion cannot be guaranteed, the cues provided by the inference are very beneficial. Conflict resolution is crucial for the evaluation of explanations, especially those generated by a weak (abductive) mechanism. The measurements developed in this research for explanations and hypotheses provide an indirect way of estimating the 'quality' of an explanation for given evidence. Such methods are realistic for complex domains such as fraud detection, where the prevailing hypothesis may not always be relevant to the new evidence. In order to survive in rapidly changing environments, it is necessary to bridge the gap that exists between the system's view of the world and reality. Our research has demonstrated the value of Case-Based Interaction, which utilises a hypothesis structure for the representation of relevant planning and strategic knowledge. Under the guidance of case-based interaction, users are active agents empowered by system knowledge, and the system acquires its auxiliary information/knowledge from this external source. Case studies using the new paradigm and drawn from the insurance industry have attracted wide interest. A prototypical system of fraud detection for motor vehicle insurance based on a hypothesis-guided problem solving mechanism is now under commercial development. The initial feedback from claims managers is promising.
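
A minimal structural sketch of the three-component loop described above is given below. All rules, scores and field names are invented for illustration; the actual paradigm operates over conceptual graphs and case-based knowledge rather than the toy dictionaries used here.

# Structural sketch of the hypothesis generation loop:
# anomaly detection -> abductive reasoning -> conflict resolution.
def detect_anomalies(claim):
    """Flag initial indications of anomalous behaviour (stand-in heuristics)."""
    flags = []
    if claim["amount"] > 3 * claim["historical_avg"]:
        flags.append("amount_far_above_history")
    if claim["days_since_policy_start"] < 30:
        flags.append("very_new_policy")
    return flags

def abduce_hypotheses(flags):
    """Propose assumptions that would explain the observations (weak inference)."""
    explanations = {
        "amount_far_above_history": ["staged_incident", "genuine_major_loss"],
        "very_new_policy": ["premeditated_fraud", "coincidence"],
    }
    return {h for f in flags for h in explanations.get(f, [])}

def resolve_conflicts(hypotheses, flags):
    """Score competing explanations against the evidence (stand-in measure)."""
    support = {h: sum(h in abduce_hypotheses([f]) for f in flags)
               for h in hypotheses}
    return max(support, key=support.get) if support else None

claim = {"amount": 50000, "historical_avg": 4000, "days_since_policy_start": 12}
flags = detect_anomalies(claim)
print(flags, "->", resolve_conflicts(abduce_hypotheses(flags), flags))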

Relevance:

80.00%

Publisher:

Abstract:

This paper is devoted to a combinatorial problem for incidence semirings, which can be viewed as sets of polynomials over graphs, where the edges are the unknowns and the coefficients are taken from a semiring. The construction of incidence rings is very well known and has many useful applications. The present article is devoted to a novel application of the more general incidence semirings. Recent research on data mining has motivated the investigation of the sets of centroids that have largest weights in semiring constructions. These sets are valuable for the design of centroid-based classification systems, or classifiers, as well as for the design of multiple classifiers combining several individual classifiers. Our article gives a complete description of all sets of centroids with the largest weight in incidence semirings.
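
To give a flavour of the construction, the toy sketch below represents "polynomials over a graph" as maps from edges to coefficients, with multiplication composing incident edges as in an incidence ring. The Boolean semiring and the weight function are illustrative stand-ins only; the paper's setting and its notion of centroid weight are more general.

# Toy sketch of an incidence semiring over a small directed graph.
# Boolean semiring: (or, and) in place of (+, *).
def add(a, b): return a or b
def mul(a, b): return a and b

def poly_mul(p, q):
    """Multiply two edge polynomials: (i,j)*(j,k) contributes to (i,k)."""
    out = {}
    for (i, j), c1 in p.items():
        for (j2, k), c2 in q.items():
            if j == j2:  # edges are incident, so they compose
                e = (i, k)
                out[e] = add(out.get(e, False), mul(c1, c2))
    return out

def weight(p):
    """A simple stand-in weight: number of edges with nonzero coefficient."""
    return sum(1 for c in p.values() if c)

# Edge polynomials on the graph 1 -> 2 -> 3, 1 -> 3.
p = {(1, 2): True, (1, 3): True}
q = {(2, 3): True}
print(poly_mul(p, q), weight(poly_mul(p, q)))  # {(1, 3): True} 1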

Relevance:

80.00%

Publisher:

Abstract:

Background
Chronic kidney disease (CKD) is a complex health problem, which requires individuals to invest considerable time and energy in managing their health and adhering to multifaceted treatment regimens.

Objectives
To review studies delivering self-management interventions to people with CKD (Stages 1–4) and assess whether these interventions improve patient outcomes.

Design
Systematic review.

Methods
Nine electronic databases (MedLine, CINAHL, EMBASE, ProQuest Health & Medical Complete, ProQuest Nursing & Allied Health, The Cochrane Library, The Joanna Briggs Institute EBP Database, Web of Science and PsycINFO) were searched using relevant terms for papers published between January 2003 and February 2013.

Results
The search strategy identified 2,051 papers, of which 34 were retrieved in full, with only 5 studies involving 274 patients meeting the inclusion criteria. Three studies were randomised controlled trials, a variety of methods were used to measure outcomes, and four studies included a nurse on the self-management intervention team. There was little consistency in the delivery, intensity, duration and format of the self-management programmes. There is some evidence that knowledge and health-related quality of life improved. Generally, small effects were observed for levels of adherence and progression of CKD according to physiologic measures.

Conclusion
The effectiveness of self-management programmes in CKD (Stages 1–4) cannot be conclusively ascertained, and further research is required. It is desirable that individuals with CKD are supported to effectively self-manage day-to-day aspects of their health.

Relevance:

80.00%

Publisher:

Abstract:

Requirements engineering (RE) often entails interdisciplinary groups of people working together to find novel and valuable solutions to a complex design problem. In such situations RE requires creativity in a form where interactions among stakeholders are particularly important: collaborative creativity. However, few studies have explicitly concentrated on understanding collaborative creativity in RE, resulting in limited advice for practitioners on how to support this aspect of RE. This paper provides a framework of factors characterising collaborative creative processes in RE. These factors enable a systematic investigation of the collaboratively creative nature of RE. They can potentially guide practitioners when facilitating RE efforts, and also provide researchers with ideas on where to focus when developing methods and tools for RE. © 2013 IEEE.