934 results for Single-commodity capacitated network design problem


Relevance:

100.00%

Publisher:

Abstract:

The Physical Internet (PI) is an initiative that identifies several symptoms of inefficiency and unsustainability in logistics systems and addresses them by proposing a new paradigm called hyperconnected logistics. Similar to the Digital Internet, which connects thousands of personal and local computer networks, the PI will interconnect today's fragmented logistics systems. Its main goal is to improve the performance of logistics systems from economic, environmental and social perspectives. Focusing specifically on distribution systems, this thesis questions the order of magnitude of the performance gains achievable by exploiting PI-enabled hyperconnected distribution. It also addresses the characterization of hyperconnected distribution planning. To answer the first question, an exploratory research approach based on optimization modeling is applied, in which current and potential distribution systems are modeled. A set of realistic business samples is then created, and their economic and environmental performance is assessed while targeting multiple social performance levels. A conceptual planning framework, including mathematical modeling, is proposed to support decision making in hyperconnected distribution systems. Based on the results of our study, we show that a substantial gain can be achieved by migrating to hyperconnected distribution. We also show that the magnitude of the gain varies with the characteristics of the business activities and the targeted social performance. Since the Physical Internet is a new topic, Chapter 1 briefly introduces the PI and hyperconnectivity. Chapter 2 discusses the foundations, objective and methodology of the research. The challenges addressed during this research are described and the types of contribution sought are highlighted. Chapter 3 presents the optimization models. Influenced by the characteristics of current and potential distribution systems, three distribution-system-based models are developed. Chapter 4 deals with the characterization of the business samples and with the modeling and calibration of the parameters used in the models. The results of the exploratory research are presented in Chapter 5. Chapter 6 describes the conceptual planning framework for hyperconnected distribution. Chapter 7 summarizes the thesis and highlights the main contributions. It also identifies the limitations of the research and potential avenues for future work.

Relevance:

100.00%

Publisher:

Abstract:

Accompanied by: Dupla-hélice: a construção de um conhecimento (Double helix: the construction of knowledge)

Relevance:

100.00%

Publisher:

Abstract:

Well-designed marine protected area (MPA) networks can deliver a range of ecological, economic and social benefits, and so a great deal of research has focused on developing spatial conservation prioritization tools to help identify important areas. However, whilst these software tools are designed to identify MPA networks that both represent biodiversity and minimize impacts on stakeholders, they do not consider complex ecological processes. Thus, it is difficult to determine the impacts that proposed MPAs could have on marine ecosystem health, fisheries and fisheries sustainability. Using the eastern English Channel as a case study, this paper explores an approach to address these issues by identifying a series of MPA networks using the Marxan and Marxan with Zones conservation planning software and linking them with a spatially explicit ecosystem model developed in Ecopath with Ecosim. We then use these to investigate potential trade-offs associated with adopting different MPA management strategies. Limited-take MPAs, which restrict the use of some fishing gears, could have positive benefits for conservation and fisheries in the eastern English Channel, even though they generally receive far less attention in research on MPA network design. Our findings, however, also clearly indicate that no-take MPAs should form an integral component of proposed MPA networks in the eastern English Channel, as they not only result in substantial increases in ecosystem biomass, fisheries catches and the biomass of commercially valuable target species, but are fundamental to maintaining the sustainability of the fisheries. Synthesis and applications. Using the existing software tools Marxan with Zones and Ecopath with Ecosim in combination provides a powerful policy-screening approach. This could help inform marine spatial planning by identifying potential conflicts and by designing new regulations that better balance conservation objectives and stakeholder interests. In addition, it highlights that appropriate combinations of no-take and limited-take marine protected areas might be the most effective when making trade-offs between long-term ecological benefits and short-term political acceptability.

Relevance:

100.00%

Publisher:

Abstract:

Marine ecosystems are facing a diverse range of threats, including climate change, prompting international efforts to safeguard marine biodiversity through the use of spatial management measures. Marine Protected Areas (MPAs) have been implemented as a conservation tool throughout the world, but their usefulness and effectiveness are strongly affected by climate change. However, few MPA programmes have directly considered climate change in the design, management or monitoring of an MPA network. Under international obligations and EU, UK and national targets, Scotland has developed an MPA network that aims to protect marine biodiversity and contribute to the vision of a clean, healthy and productive marine environment. This is the first study to critically analyse the Scottish MPA process and highlight areas which may be improved upon in further iterations of the network in the context of climate change. Initially, a critical review of the Scottish MPA process considered how ecological principles for MPA network design were incorporated into the process, how stakeholder perceptions were considered and, crucially, what consideration was given to the influence of climate change on the eventual effectiveness of the network. The results indicated that, to make a meaningful contribution to marine biodiversity protection for Europe, the Scottish MPA network should: i) fully adopt best-practice ecological principles; ii) ensure effective protection; and iii) explicitly consider climate change in the management, monitoring and future iterations of the network. However, this review also highlighted the difficulties of incorporating considerations of climate change into an already complex process. A series of international case studies from British Columbia, Canada; central California, USA; the Great Barrier Reef, Australia; and the Hauraki Gulf, New Zealand, was then conducted to investigate perceptions of how climate change has been considered in the design, implementation, management and monitoring of MPAs. The key lessons from this study included: i) strictly protected marine reserves are considered essential for climate change resilience and will be necessary as scientific reference sites to understand climate change effects; ii) adaptive management of MPA networks is important but hard to implement; and iii) strictly protected reserves managed as ecosystems are the best option for an uncertain future. This work provides new insights into the policy and practical challenges MPA managers face under climate change scenarios. Based on the Scottish and international studies, the need to facilitate clear communication between academics, policy makers and stakeholders was recognised in order to progress MPA policy delivery and to ensure decisions were jointly formed and acceptable. A Delphi technique was used to develop a series of recommendations for considering climate change in Scotland's MPA process. The Delphi participant panel was selected for their knowledge of the Scottish MPA process and included stakeholders, policy makers and academics with expertise in MPA research. The results from the first round of the Delphi technique suggested that differing views of success would likely influence opinions regarding the required management of MPAs and, in turn, the data requirements to support management action decisions. The second round of the Delphi technique explored this further and indicated that there was a fundamental dichotomy in panellists' views of a successful MPA network, depending upon whether they believed the MPAs should be strictly protected or allow for sustainable use. A third, focus-group round of the Delphi technique developed a feature-based management scenario matrix to aid in deciding upon management actions in light of changes occurring in the MPA network. This thesis highlights that, if the Scottish MPA network is to fulfil its objectives of conservation and restoration, the implications of climate change for the design, management and monitoring of the network must be considered. In particular, there needs to be a greater focus on: i) incorporating ecological principles that directly address climate change; ii) effective protection that builds the resilience of the marine and linked social environment; iii) developing a focused, strong and adaptable monitoring framework; and iv) ensuring mechanisms for adaptive management.

Relevance:

100.00%

Publisher:

Abstract:

Energy-efficient policies are being applied to network protocols, devices and classical network management systems. Researchers have already studied each of these fields in depth, including, for instance, long monitoring processes of numerous individual ICT devices from which power models are constructed. With the development of smart meters and of protocols such as SNMP and NETCONF, there is now an opportunity to couple the power models, translated into expected behavior, with real-time energy measurements. The goal is to compare the power data from both processes in order to detect possible deviations from the expected results. The working assumption is that a fault in the use of a particular device will not only increase its own energy usage, but may also cause additional consumption on the other devices in the network. A platform is developed to monitor and analyze the retrieved power data of a simulated enterprise ICT infrastructure. Moreover, smart algorithms are developed that are aware of the different states each device passes through during its typical use phase, and that detect and isolate possible anomalies. The results are obtained and validated using Cisco switches and routers, Dell Precision workstations and a Raritan PDU as part of the monitored infrastructure.
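As a rough illustration of the deviation-detection idea in this abstract, the sketch below compares a per-state power model with measured readings and flags readings that stray too far from the expected behavior. All device names, state power values and the tolerance are assumed for the example; they are not from the thesis platform.

```python
# Illustrative sketch only: per-state power model vs. measured readings.
EXPECTED_POWER_W = {                 # hypothetical power model, watts
    "switch": {"idle": 55.0, "forwarding": 80.0},
    "router": {"idle": 70.0, "routing": 110.0},
}
TOLERANCE = 0.15                     # flag readings >15% away from the model

def is_anomalous(device: str, state: str, measured_w: float) -> bool:
    """Return True if the measurement deviates from the expected behavior."""
    expected = EXPECTED_POWER_W[device][state]
    return abs(measured_w - expected) / expected > TOLERANCE

# A faulty device may also raise consumption elsewhere in the network, so
# anomalies would be checked per device and then correlated across devices.
readings = [("switch", "idle", 54.2), ("router", "routing", 138.0)]
print([r for r in readings if is_anomalous(*r)])  # [('router', 'routing', 138.0)]
```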

Relevance:

100.00%

Publisher:

Abstract:

Citizens' everyday need to travel in order to carry out different activities, whatever their nature, has been greatly affected by recent changes. The benefits generated by including the bicycle as a transport mode, and by the spread of its use among the public, are innumerable and extend both to urban mobility and to sustainable development. There is currently a multitude of programs for introducing, promoting or increasing citizen participation in cycling in cities. Ultimately, however, each and every one of these initiatives has the same purpose: to create an effective and useful network of cycle paths, capable of allowing bicycle use on preferential routes with high safety guarantees and of incorporating the bicycle into the intermodal urban transport model. With the progressive introduction of cycle lanes, many people have started to use them to move around the city. But everything new needs a period of adaptation, and the reality is that the road network intended for these vehicles is full of obstacles for cyclists. This situation raises the questions of how many kilometers of cycle lanes are needed to meet the existing demand for this transport mode, and whether the works built and planned are correct and sufficient. This work presents a tool, based on a mathematical programming model, for the optimal design of a network for cyclists. Specifically, the system determines a bicycle infrastructure adapted to the characteristics of the existing road network, based on weighted graph theory criteria. As an application of the proposed model, the results of these experiments are reported, yielding a number of conclusions useful for planning and designing cycle lane networks from a social perspective. The methodology is applied to the real case of the municipality of Málaga (Spain). Finally, the proposed optimization model is validated, together with its impact on the final result and the importance, or weight, of all the variables capable of conditioning the final design of the cycling network. The result is therefore a tool for improving the planning, design and management of bicycle infrastructures, capable of interacting with the current road network model and with the other transport modes existing in the urban fabric of cities.
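A minimal sketch of the weighted-graph idea behind such a model: each road segment receives a composite weight, here its length inflated by an assumed "unsuitability" factor, and good candidate cycling routes follow cheap paths. The toy graph, weights and penalty factor are illustrative, not the Málaga data.

```python
import heapq

def dijkstra(graph, source):
    """Cheapest cost from source to every node on a weighted digraph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

alpha = 2.0                                 # assumed penalty on unsuitable segments
def weight(length_m, unsuitability):        # unsuitability in [0, 1]
    return length_m * (1.0 + alpha * unsuitability)

graph = {                                   # node: [(neighbor, composite weight), ...]
    "A": [("B", weight(400, 0.1)), ("C", weight(300, 0.8))],
    "B": [("D", weight(500, 0.2))],
    "C": [("D", weight(350, 0.9))],
}
print(dijkstra(graph, "A"))                 # cheapest cycling cost from A to each node
```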

Relevance:

100.00%

Publisher:

Abstract:

Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened, and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the capacity of the vehicle. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, as well as an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem. Furthermore, a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
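To make the latency objective concrete, the following small example computes the latency of a single route as the sum of arrival times at the customers (the travel times are made up; service times are ignored for brevity):

```python
def route_latency(travel_times):
    """travel_times[i] = time from stop i to stop i+1, starting at the depot."""
    latency, arrival = 0.0, 0.0
    for t in travel_times:
        arrival += t          # arrival time at the next customer
        latency += arrival    # each customer contributes its own arrival time
    return latency

# Depot -> c1 (3) -> c2 (2) -> c3 (4): arrivals are 3, 5, 9, so latency = 17.
print(route_latency([3, 2, 4]))   # 17.0
```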

Relevance:

70.00%

Publisher:

Abstract:

In this paper, a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. The CSAHLP consists of two subproblems: the first is choosing a set of hubs from all nodes in a network, while the other comprises finding the optimal allocation of non-hubs to hubs when the set of hubs is already known. The VNS algorithm was used for the first subproblem, while the CPLEX solver was used for the second. Computational results demonstrate that the proposed algorithm reaches optimal solutions on all 20 test instances for which optimal solutions are known, and does so in short computational time.
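For readers unfamiliar with VNS, the sketch below shows the generic shaking/local-search skeleton on a toy hub-selection problem. The actual neighborhoods, shaking moves and the CPLEX allocation step are specific to the paper; everything here is an illustrative stand-in.

```python
import random

def vns(initial, cost, shake, local_search, k_max=3, iters=100):
    """Generic VNS: perturb in neighborhood k, descend, restart at k=1 on improvement."""
    best = initial
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if cost(candidate) < cost(best):
                best, k = candidate, 1      # improvement: back to the smallest move
            else:
                k += 1                      # no luck: try a larger neighborhood
    return best

# Toy usage: choose hubs (a 0/1 vector) minimizing an assumed synthetic cost.
n = 8
cost = lambda s: (sum(s) - 3) ** 2 + 0.01 * sum(i * b for i, b in enumerate(s))
def shake(s, k):                            # flip k random hub decisions
    s = list(s)
    for i in random.sample(range(n), k):
        s[i] ^= 1
    return s
def local_search(s):                        # best-improvement bit-flip descent
    improved = True
    while improved:
        improved = False
        for i in range(n):
            t = s[:]; t[i] ^= 1
            if cost(t) < cost(s):
                s, improved = t, True
    return s
print(vns([0] * n, cost, shake, local_search))
```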

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for an MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For linear MSSS and planar single-source and single-sink (SSSS) topologies, we successfully use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions of the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression of the suboptimal NL is derived for a small-scale planar network. To deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds on the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
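A back-of-envelope illustration of the lifetime notion used here: for a fixed routing and per-node power draw, the network lifetime is set by the first exhausted node, NL = min_i E_i / P_i. The numbers below are invented; the paper instead optimizes rates and slots so that nodes deplete simultaneously.

```python
# Three nodes on a line toward the sink; the last node relays the most traffic.
battery_J = [200.0, 200.0, 200.0]      # assumed initial energy per node
power_W   = [0.004, 0.007, 0.010]      # assumed average power draw per node
lifetimes = [e / p for e, p in zip(battery_J, power_W)]
print(min(lifetimes))                  # 20000.0 s: the bottleneck (relay) node
```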

Relevance:

50.00%

Publisher:

Abstract:

Background. Several medical devices used during hemodynamic procedures, particularly angiographic diagnostic and therapeutic cardiac catheters, are manufactured for single use only. However, reprocessing and reuse of these devices have been reported. Objective. To determine the frequency of reuse and reprocessing of single-use medical devices used during hemodynamic procedures in Brazil and to evaluate how reprocessing is performed. Design. National survey, conducted from December 1999 to July 2001. Methods. Most of the institutions affiliated with the Brazilian Society of Hemodynamic and Interventional Cardiology were surveyed by means of a mailed questionnaire. Results. The questionnaire response rate was 50% (119 of 240 institutions). Of the 119 institutions that responded, 116 (97%) reported reuse of single-use devices used during hemodynamic procedures, and only 26 (22%) reported use of a standardized reprocessing protocol. Cleaning, flushing, rinsing, drying, sterilizing and packaging methods varied greatly and were mostly inadequate. Criteria for discarding reused devices also varied widely. Of the 119 institutions that responded, 80 (67%) reported having a surveillance system for adverse events associated with the reuse of medical devices, although most of these institutions did not routinely review the data, and only 38 (32%) described a training program for the personnel who reprocessed single-use devices. Conclusions. The reuse of single-use devices during hemodynamic procedures was very frequent in hospitals in Brazil. Basic guidance on how to reuse and reprocess single-use medical devices is urgently needed because, despite the lack of studies supporting these practices, such devices are needed in limited-resource areas where reuse and reprocessing are common.

Relevance:

50.00%

Publisher:

Abstract:

Mathematical Programming with Complementarity Constraints (MPCC) finds many applications in fields such as engineering design, economic equilibrium and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.
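As a hedged illustration of the hyperbolic penalty idea (not the paper's traffic model), the sketch below solves a toy constrained problem, min x^2 subject to x >= 1, by minimizing the objective plus a smooth hyperbolic penalty term; the slope and smoothing parameters are assumed values.

```python
import math
from scipy.optimize import minimize

lam, tau = 10.0, 1e-3       # assumed penalty slope and smoothing parameter

def hyperbolic_penalty(g):
    """Smooth penalty: ~0 when g >= 0 (feasible), ~ -2*lam*g when g < 0."""
    return -lam * g + math.sqrt((lam * g) ** 2 + tau ** 2)

def penalized(x):
    g = x[0] - 1.0          # the constraint x >= 1 written as g(x) >= 0
    return x[0] ** 2 + hyperbolic_penalty(g)

res = minimize(penalized, x0=[5.0], method="Nelder-Mead")
print(res.x)                # close to the constrained optimum x = 1
```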

Relevance:

50.00%

Publisher:

Abstract:

The release of chloroethene compounds into the environment often results in groundwater contamination, which puts people at risk of exposure through drinking contaminated water. The accumulation of cDCE (cis-1,2-dichloroethene) in subsurface environments is a common environmental problem caused by stagnation and partial degradation of precursor chloroethene species. Polaromonas sp. strain JS666 apparently requires no exotic growth factors to be used as a bioaugmentation agent for aerobic cDCE degradation. Although it is the only suitable microorganism found to be capable of this, further studies are needed to improve the intrinsic bioremediation rates and to fully understand the metabolic processes involved. To this end, a metabolic model, iJS666, was reconstructed from genome annotation and available bibliographic data. FVA (Flux Variability Analysis) and FBA (Flux Balance Analysis) techniques were used to satisfactorily validate the predictive capabilities of the iJS666 model. The iJS666 model was able to predict biomass growth under different previously tested conditions, allowed the design of key experiments for further model improvement and also produced viable predictions for the use of biostimulant metabolites in cDCE biodegradation.
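A toy flux balance analysis in the spirit described above: maximize a "biomass" flux subject to the steady-state condition S v = 0 and flux bounds. The two-metabolite network is purely illustrative; iJS666 is genome-scale.

```python
from scipy.optimize import linprog

# Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass
S = [[1, -1, 0],     # metabolite A balance
     [0, 1, -1]]     # metabolite B balance
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake limited to 10 units

# linprog minimizes, so negate the biomass flux (R3) to maximize it.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)         # [10, 10, 10]: all available uptake is routed to biomass
```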

Relevance:

50.00%

Publisher:

Abstract:

Background. The 'database search problem', that is, the strengthening of a case, in terms of probative value, against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and sharply opposing conclusions. This represents a challenging obstacle in teaching and also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity. Methods. As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results. This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions. The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architecture, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
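As a numeric companion to the variables listed above (database size n, population size N, profile rarity gamma), the snippet below evaluates a simplified, uniform-prior formula often used to illustrate the debate. It is an assumption-laden stand-in, far cruder than the paper's Bayesian networks.

```python
def prob_source_after_db_search(N, n, gamma):
    """P(the unique matcher is the source | one match, n-1 exclusions).

    Simplified uniform-prior reasoning: the matcher competes with the N - n
    untyped individuals, each of whom would match with probability gamma.
    """
    return 1.0 / (1.0 + gamma * (N - n))

print(prob_source_after_db_search(N=1_000_000, n=10_000, gamma=1e-6))  # ~0.50
```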

Relevance:

50.00%

Publisher:

Abstract:

In recent years, the vulnerability of power networks to natural hazards has received attention. Moreover, operating at the limits of the network's transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer when compared with the present system. To study the effect of a DC network on customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model describes the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved using only one output measurement when operating in a rigid network; similar performance can be achieved in a weak grid by means of a DC voltage measurement. A further improvement is achieved when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
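Purely as illustrative arithmetic for the capacitance guideline mentioned above: one common rule of thumb sizes a DC capacitor feeding a single-phase customer-end against the double-line-frequency power pulsation. All numbers below are assumed, and the thesis's own guidelines may differ.

```python
import math

P = 3000.0      # assumed customer-end power, W
V_dc = 750.0    # assumed nominal DC voltage, V
f = 50.0        # AC line frequency, Hz (ripple appears at 2f)
dV = 15.0       # allowed peak voltage ripple, V (2% of V_dc)

# C >= P / (2*pi * 2f * V_dc * dV): energy buffering of the 2f power pulsation.
C = P / (2 * math.pi * (2 * f) * V_dc * dV)
print(f"C >= {C * 1e6:.0f} uF")   # ~424 uF for these assumed numbers
```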

Relevance:

50.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for solving high-dimensional regression problems, in which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
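Two small, self-contained illustrations of ingredients named in this abstract, on toy data rather than the paper's algorithm: the A-optimality criterion trace((M^T M)^{-1}) of a rule's weighted regression matrix, and a classical (not the paper's extended) Gram-Schmidt orthogonalization.

```python
import numpy as np

def a_optimality(M):
    """A-optimality: trace of (M^T M)^{-1}; smaller means better identifiability."""
    return np.trace(np.linalg.inv(M.T @ M))

def gram_schmidt(A):
    """Columnwise classical Gram-Schmidt; returns an orthonormal basis Q."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]   # remove earlier directions
        Q[:, j] = v / np.linalg.norm(v)
    return Q

X = np.random.default_rng(0).normal(size=(50, 3))        # toy regression matrix
W = np.diag(np.random.default_rng(1).uniform(size=50))   # toy fuzzy memberships
print(a_optimality(W @ X))                                # rule identifiability score
print(np.round(gram_schmidt(X).T @ gram_schmidt(X), 6))   # ~ identity matrix
```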