869 results for NETWORK DESIGN PROBLEMS
Abstract:
Dry eye is a common yet complex condition. Intrinsic and extrinsic factors can cause dysfunction of the lids, lacrimal glands, meibomian glands, ocular surface cells, or neural network. These problems are ultimately expressed at the tear film-ocular surface interface. The manifestations of these problems are experienced as symptoms such as grittiness, discomfort, burning sensation, hyperemia, and secondary epiphora in some cases. Accurate investigation of dry eye is crucial to the correct management of the condition. Techniques can be classed according to whether they investigate tear production, tear stability, or surface damage (including histological tests). The application, validity, reliability, compatibility, protocols, and indications for these techniques are important. The use of a diagnostic algorithm may lead to more accurate diagnosis and management. The lack of correlation between signs and symptoms seems to favor tear film osmolarity, an objective biomarker, as the best current clue to a correct diagnosis.
Abstract:
Fibre-optic communications systems have traditionally carried data using binary (on-off) encoding of the light amplitude. However, next-generation systems will use both the amplitude and phase of the optical carrier to achieve higher spectral efficiencies and thus higher overall data capacities(1,2). Although this approach requires highly complex transmitters and receivers, the increased capacity and many further practical benefits that accrue from a full knowledge of the amplitude and phase of the optical field(3) more than outweigh this additional hardware complexity and can greatly simplify optical network design. However, use of the complex optical field gives rise to a new dominant limitation to system performance: nonlinear phase noise(4,5). Developing a device to remove this noise is therefore of great technical importance. Here, we report the development of the first practical ('black-box') all-optical regenerator capable of removing both phase and amplitude noise from binary phase-encoded optical communications signals.
Abstract:
Purpose – The purpose of this research is to develop a holistic approach to maximize the customer service level while minimizing the logistics cost by using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike prevalent optimization techniques, this paper proposes an integrated approach which considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, built on the existing literature and validated with an example case. The model integrates a fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights into how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two ways: optimizing cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalization of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, and the management of that network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously and that the viewpoints of both service deliverers and customers are taken into account. It is therefore believed to be useful and applicable for transshipment service network design.
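As a hedged illustration of the weighted-objective idea described in this abstract (not the paper's actual FAHP-based integer program), the sketch below solves a toy transshipment problem in which two plants ship through two hubs to two customers as a plain linear program. The weights w_cost and w_service stand in for weights that FAHP would produce; all costs, service penalties, supplies and demands are made-up assumptions.

```python
# A minimal sketch, assuming illustrative data: a toy transshipment LP whose
# objective mixes a monetary-cost score and a service-penalty score using
# weights that, in the paper's approach, would come from a fuzzy AHP step.
import numpy as np
from scipy.optimize import linprog

# Hypothetical criterion weights (would be produced by FAHP in the paper)
w_cost, w_service = 0.6, 0.4

# Arcs: (plant->hub) p1h1, p1h2, p2h1, p2h2, (hub->customer) h1c1, h1c2, h2c1, h2c2
cost    = np.array([4, 6, 5, 3, 7, 5, 6, 4])   # unit shipping cost per arc
service = np.array([2, 1, 3, 2, 1, 2, 2, 1])   # unit service penalty (e.g. lateness risk)
c = w_cost * cost + w_service * service        # weighted single objective

# Flow conservation at the two hubs: inflow - outflow = 0
A_eq = np.array([
    [1, 0, 1, 0, -1, -1,  0,  0],   # hub 1
    [0, 1, 0, 1,  0,  0, -1, -1],   # hub 2
])
b_eq = np.zeros(2)

# Supply limits at plants (<=) and customer demands (>=, written as <= with negated rows)
A_ub = np.array([
    [1, 1, 0, 0,  0,  0,  0,  0],   # plant 1 supply
    [0, 0, 1, 1,  0,  0,  0,  0],   # plant 2 supply
    [0, 0, 0, 0, -1,  0, -1,  0],   # customer 1 demand
    [0, 0, 0, 0,  0, -1,  0, -1],   # customer 2 demand
])
b_ub = np.array([60, 40, -50, -45])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("arc flows:", res.x, "weighted objective:", res.fun)
```

An integer-programming variant, as in the paper, would additionally restrict flows or hub-selection decisions to integer values.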
Abstract:
ATM network optimization problems formulated as combinatorial optimization problems are considered. Several approximate algorithms for solving such problems are developed. The results of an experimental comparison of these algorithms on a set of problems with random input data are presented.
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by “Pallet Networks” operating on a hub-and-spoke philosophy. Current literature relating to LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations, each consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, there is a shortcoming in the overall analysis when it comes to the “spoke-terminal” of hub-and-spoke freight distribution systems and its capability to handle the diverse and discrete customer freight profiles that multi-user LTL hub-and-spoke networks typically serve over the “last mile” of delivery, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by profile type (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies. The study also leverages key operator experiences to highlight the main practical challenges of integrating the observed simulation results into the real world. The study concludes that DES can be harnessed as an enabling device to develop a ‘guide policy’. This policy needs to be flexible and should be applied in stages, taking into account growing retail exposure.
Abstract:
Networked learning happens naturally within the social systems of which we are all part. However, in certain circumstances individuals may want to actively initiate interaction with others with whom they are not yet regularly in exchange. This may be the case when external influences and societal changes require innovation of existing practices. This paper proposes a framework with relevant dimensions providing insight into the precipitated characteristics of designed as well as ‘fostered or grown’ networked learning initiatives. Networked learning initiatives are characterized as “goal-directed, interest-, or needs-based activities of a group of (at least three) individuals that initiate interaction across the boundaries of their regular social systems”. The proposed framework is based on two existing research traditions, namely 'networked learning' and 'learning networks', comparing, integrating and building upon knowledge from both perspectives. We uncover some interesting differences between the definitions, but also similarities in the way they describe what ‘networked’ means and how learning is conceptualized. We think it is productive to combine both research perspectives, since they both study the process of learning in networks extensively, albeit from different points of view, and their combination can provide valuable insights into networked learning initiatives. We uncover important features of networked learning initiatives, characterize the actors and connections of which they are comprised, and identify conditions which facilitate and support them. The resulting framework could be used both for analytic purposes and (partly) as a design framework. The framework acknowledges that not all successful networks have the same characteristics: there is no standard ‘constellation’ of people, roles, rules, tools and artefacts, although there are indications that some network structures work better than others. Interactions of individuals can only be designed and fostered to a certain degree: the type of network and its ‘growth’ (e.g. in terms of the quantity of people involved, or the quality and relevance of co-created concepts, ideas, artefacts and solutions to its ‘inhabitants’) is in the hands of the people involved. Therefore, the framework consists of dimensions on a sliding scale. It introduces a structured and analytic way to look at the precipitation of networked learning initiatives: learning networks. Further research on the application of this framework, together with feedback from the networked learning community, is needed to validate its usability and value to both research and practice.
Abstract:
The Physical Internet (PI) is an initiative that identifies several symptoms of inefficiency and unsustainability in logistics systems and addresses them by proposing a new paradigm called hyperconnected logistics. Similar to the Digital Internet, which connects thousands of personal and local computer networks, the PI will connect today's fragmented logistics systems. The main goal is to improve the performance of logistics systems from economic, environmental and social points of view. Focusing specifically on distribution systems, this thesis questions the order of magnitude of the performance gains obtainable by exploiting PI-enabled hyperconnected distribution. It also addresses the characterization of hyperconnected distribution planning. To answer the first question, an exploratory research approach based on optimization modeling is applied, in which current and potential distribution systems are modeled. A set of realistic business samples is then created, and their economic and environmental performance is assessed while targeting multiple levels of social performance. A conceptual planning framework, including mathematical modeling, is proposed to support decision making in hyperconnected distribution systems. Based on the results of our study, we demonstrate that a substantial gain can be obtained by migrating to hyperconnected distribution. We also demonstrate that the magnitude of the gain varies with the characteristics of the business activities and the targeted social performance. Since the Physical Internet is a new topic, Chapter 1 briefly introduces the PI and hyperconnectivity. Chapter 2 discusses the foundations, objective and methodology of the research; the challenges addressed during this research are described and the type of contributions targeted is highlighted. Chapter 3 presents the optimization models: influenced by the characteristics of current and potential distribution systems, three distribution-system-based models are developed. Chapter 4 deals with the characterization of the business samples as well as the modeling and calibration of the parameters used in the models. The results of the exploratory research are presented in Chapter 5. Chapter 6 describes the conceptual planning framework for hyperconnected distribution. Chapter 7 summarizes the content of the thesis and highlights the main contributions; it also identifies the limitations of the research and potential avenues for future research.
Abstract:
Well-designed marine protected area (MPA) networks can deliver a range of ecological, economic and social benefits, and so a great deal of research has focused on developing spatial conservation prioritization tools to help identify important areas. However, whilst these software tools are designed to identify MPA networks that both represent biodiversity and minimize impacts on stakeholders, they do not consider complex ecological processes. Thus, it is difficult to determine the impacts that proposed MPAs could have on marine ecosystem health, fisheries and fisheries sustainability. Using the eastern English Channel as a case study, this paper explores an approach to address these issues by identifying a series of MPA networks using the Marxan and Marxan with Zones conservation planning software and linking them with a spatially explicit ecosystem model developed in Ecopath with Ecosim. We then use these to investigate potential trade-offs associated with adopting different MPA management strategies. Limited-take MPAs, which restrict the use of some fishing gears, could have positive benefits for conservation and fisheries in the eastern English Channel, even though they generally receive far less attention in research on MPA network design. Our findings, however, also clearly indicate that no-take MPAs should form an integral component of proposed MPA networks in the eastern English Channel, as they not only result in substantial increases in ecosystem biomass, fisheries catches and the biomass of commercially valuable target species, but are fundamental to maintaining the sustainability of the fisheries. Synthesis and applications. Using the existing software tools Marxan with Zones and Ecopath with Ecosim in combination provides a powerful policy-screening approach. This could help inform marine spatial planning by identifying potential conflicts and by designing new regulations that better balance conservation objectives and stakeholder interests. In addition, it highlights that appropriate combinations of no-take and limited-take marine protected areas might be the most effective when making trade-offs between long-term ecological benefits and short-term political acceptability.
Abstract:
Marine ecosystems are facing a diverse range of threats, including climate change, prompting international efforts to safeguard marine biodiversity through the use of spatial management measures. Marine Protected Areas (MPAs) have been implemented as a conservation tool throughout the world, but their usefulness and effectiveness are strongly related to climate change. However, few MPA programmes have directly considered climate change in the design, management or monitoring of an MPA network. Under international obligations and EU, UK and national targets, Scotland has developed an MPA network that aims to protect marine biodiversity and contribute to the vision of a clean, healthy and productive marine environment. This is the first study to critically analyse the Scottish MPA process and highlight areas which may be improved upon in further iterations of the network in the context of climate change. Initially, a critical review of the Scottish MPA process considered how ecological principles for MPA network design were incorporated into the process, how stakeholder perceptions were considered and, crucially, what consideration was given to the influence of climate change on the eventual effectiveness of the network. The results indicated that, to make a meaningful contribution to marine biodiversity protection for Europe, the Scottish MPA network should: i) fully adopt best-practice ecological principles; ii) ensure effective protection; and iii) explicitly consider climate change in the management, monitoring and future iterations of the network. However, this review also highlighted the difficulties of incorporating considerations of climate change into an already complex process. A series of international case studies from British Columbia, Canada; central California, USA; the Great Barrier Reef, Australia; and the Hauraki Gulf, New Zealand, were then conducted to investigate perceptions of how climate change has been considered in the design, implementation, management and monitoring of MPAs. The key lessons from this study included: i) strictly protected marine reserves are considered essential for climate change resilience and will be necessary as scientific reference sites to understand climate change effects; ii) adaptive management of MPA networks is important but hard to implement; iii) strictly protected reserves managed as ecosystems are the best option for an uncertain future. This work provides new insights into the policy and practical challenges MPA managers face under climate change scenarios. Based on the Scottish and international studies, the need to facilitate clear communication between academics, policy makers and stakeholders was recognised in order to progress MPA policy delivery and to ensure decisions were jointly formed and acceptable. A Delphi technique was used to develop a series of recommendations for considering climate change in Scotland’s MPA process. The Delphi participant panel was selected for their knowledge of the Scottish MPA process and included stakeholders, policy makers and academics with expertise in MPA research. The results from the first round of the Delphi technique suggested that differing views of success would likely influence opinions regarding the required management of MPAs and, in turn, the data requirements to support management action decisions.
The second round of the Delphi technique explored this further and indicated that there was a fundamental dichotomy in panellists’ views of a successful MPA network, depending upon whether they believed the MPAs should be strictly protected or should allow for sustainable use. A third, focus-group round of the Delphi technique developed a feature-based management scenario matrix to aid in deciding upon management actions in light of changes occurring in the MPA network. This thesis highlights that if the Scottish MPA network is to fulfil its objectives of conservation and restoration, the implications of climate change for the design, management and monitoring of the network must be considered. In particular, there needs to be a greater focus on: i) incorporating ecological principles that directly address climate change; ii) effective protection that builds resilience of the marine and linked social environment; iii) developing a focused, strong and adaptable monitoring framework; and iv) ensuring mechanisms for adaptive management.
Abstract:
Energy-efficient policies are being applied to network protocols, devices and classical network management systems. Researchers have already studied each of these fields in depth, including, for instance, long monitoring processes covering a large number of individual ICT devices from which power models are constructed. With the development of smart meters and protocols such as SNMP and NETCONF, there is now an open field to couple the power models, translated into expected behavior, with real-time energy measurements. The goal is to compare the power data from both processes in order to detect possible deviations from the expected results. The underlying assumption is that a fault in the usage of a particular device will not only increase its own energy usage but may also cause additional consumption on other devices that are part of the network. A platform is developed to monitor and analyze the power data retrieved from a simulated enterprise ICT infrastructure. Moreover, algorithms are developed that are aware of the different states each device passes through during its typical use phase and that detect and isolate possible anomalies. The results are obtained and validated using Cisco switches and routers, Dell Precision workstations and a Raritan PDU as part of the monitored infrastructure.
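The following sketch illustrates, under stated assumptions, the kind of comparison described in this abstract: real-time power readings for a device are checked against the expected draw of a simple state-based power model, and sustained deviations are flagged. The per-state wattages, tolerance and window size are illustrative, not values from the thesis.

```python
# A minimal sketch, assuming a hypothetical per-state power model: flag a
# possible anomaly when several consecutive readings deviate from the model.
from collections import deque

# Hypothetical expected power draw per device state, in watts
STATE_MODEL_W = {"idle": 45.0, "normal": 80.0, "peak": 130.0}

class DeviationDetector:
    def __init__(self, tolerance=0.15, window=5):
        self.tolerance = tolerance          # allowed relative error vs. the model
        self.window = deque(maxlen=window)  # recent over-threshold flags

    def update(self, state: str, measured_w: float) -> bool:
        """Return True when the last `window` samples all exceed the tolerance."""
        expected_w = STATE_MODEL_W[state]
        relative_error = abs(measured_w - expected_w) / expected_w
        self.window.append(relative_error > self.tolerance)
        return len(self.window) == self.window.maxlen and all(self.window)

detector = DeviationDetector()
samples = [("idle", 46.0), ("normal", 82.0), ("normal", 98.0),
           ("normal", 101.0), ("normal", 99.0), ("normal", 103.0), ("normal", 100.0)]
for state, watts in samples:
    if detector.update(state, watts):
        print(f"possible anomaly: {watts} W measured in state '{state}'")
```

In a monitored infrastructure, the measured samples would come from smart meters or a PDU, and the state labels from the management plane; here both are hard-coded for illustration.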
Abstract:
Automation technologies are widely acclaimed to have the potential to significantly reduce energy consumption and energy-related costs in buildings. However, despite the abundance of commercially available technologies, automation in domestic environments keeps meeting with commercial failure. The main reason for this is the development process used to build automation applications, which tends to focus more on technical aspects than on the needs and limitations of the users. An instance of this problem is the complex and poorly designed home automation front-ends that deter customers from investing in a home automation product. On the other hand, developing a usable and interactive interface is a complicated task for developers due to the multidisciplinary challenges that need to be identified and solved. In this context, the current research work investigates the different design problems associated with developing a home automation interface, as well as the existing design solutions applied to these problems. The Qualitative Data Analysis approach was used to collect data from research papers, and the open coding process was used to cluster the findings. From the analysis of the collected data, requirements for designing the interface were derived. A home energy management functionality for a Web-based home automation front-end was developed as a proof-of-concept, and a user evaluation was used to assess the usability of the interface. The results of the evaluation showed that this holistic approach to designing interfaces improved usability, which increases the chances of commercial success.
Abstract:
The daily need of citizens to travel in order to carry out different activities, whatever their nature, has been greatly affected by the changes that have taken place. The advantages generated by including the bicycle as a mode of transport, and by the spread of its use among citizens, are innumerable and extend both to urban mobility and to sustainable development. At present there is a multitude of programmes for the implementation, promotion or increase of citizen participation related to cycling in cities. Ultimately, however, each and every one of these initiatives has the same aim: to create an effective and useful network of cycle routes, capable of allowing the bicycle to be used on preferential routes with high safety guarantees and incorporating the bicycle into the intermodal model of urban transport. With the progressive implementation of cycle lanes, many people have started to use them to move around the city. But everything new needs a period of adaptation, and the reality is that the network of routes intended for these vehicles is full of obstacles for the cyclist. The current situation has led to questioning how many kilometres of cycle lanes are needed to meet the existing demand for this mode of transport, and whether the works already built and planned are correct and sufficient. This work presents a tool, based on a mathematical programming model, for the optimal design of a network intended for cyclists. Specifically, the system determines a bicycle infrastructure adapted to the characteristics of the existing road network, based on criteria from weighted graph theory. As an application of the proposed model, the results of these experiments are presented, yielding a number of conclusions useful for the planning and design of cycle-lane networks from a social perspective. The developed methodology is applied to the real case of the municipality of Málaga (Spain). Finally, the presented optimization model is validated, together with its impact on the final result and the importance, or weight, of the set of variables capable of conditioning the final outcome of the cycling network. The result is therefore a tool aimed at improving the planning, design and management of different bicycle infrastructures, capable of interacting with the current road-network model and with the rest of the transport modes existing in the urban fabric of cities.
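To make the weighted-graph criterion concrete, here is a hedged sketch (not the paper's mathematical programming model): candidate cycle-lane edges are ranked by how often they lie on best routes between illustrative origin-destination pairs, and edges are then selected greedily within a kilometre budget. The graph, demand pairs and budget are invented for illustration.

```python
# A minimal sketch, assuming a made-up road graph: rank candidate cycle-lane
# edges by their use on weighted shortest paths, then build within a budget.
import networkx as nx

G = nx.Graph()
# (u, v, length_km, suitability_penalty) -- the penalty grows with traffic, slope, etc.
edges = [("A", "B", 1.2, 0.3), ("B", "C", 0.8, 0.1), ("A", "D", 1.0, 0.6),
         ("D", "C", 1.1, 0.2), ("B", "D", 0.5, 0.4), ("C", "E", 0.9, 0.2)]
for u, v, km, penalty in edges:
    G.add_edge(u, v, km=km, weight=km * (1 + penalty))   # weighted-graph criterion

od_pairs = [("A", "C"), ("A", "E"), ("D", "E")]          # hypothetical demand pairs
usage = {}
for origin, dest in od_pairs:
    path = nx.shortest_path(G, origin, dest, weight="weight")
    for u, v in zip(path, path[1:]):
        usage[frozenset((u, v))] = usage.get(frozenset((u, v)), 0) + 1

budget_km, built, total = 3.0, [], 0.0
for edge, count in sorted(usage.items(), key=lambda kv: -kv[1]):
    u, v = tuple(edge)
    km = G[u][v]["km"]
    if total + km <= budget_km:
        built.append((u, v))
        total += km
print(f"selected lanes: {built} ({total:.1f} km of {budget_km} km budget)")
```

A full design model would instead optimise the selection jointly (for example as an integer program), but the greedy ranking conveys how graph weights and a length budget shape the resulting lane network.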
Design and analysis of an efficient neural network model for solving nonlinear optimization problems
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization problems. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles the optimization and constraint terms in different stages, with no interference between them. Moreover, the proposed approach does not require the specification of penalty or weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
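As a hedged illustration of the valid-subspace idea (not the paper's exact modified Hopfield formulation), the sketch below confines the state to the affine subspace satisfying an equality constraint via a projection matrix, while a gradient-like update minimizes a quadratic objective in a separate stage. The problem data, step size and iteration count are assumptions.

```python
# A minimal sketch, assuming a toy problem: minimize 0.5*x'Qx + c'x subject to
# Ax = b, alternating a descent step with a projection onto the valid subspace.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 4.0]])   # positive definite objective curvature
c = np.array([-2.0, -8.0])
A = np.array([[1.0, 1.0]])               # single equality constraint: x1 + x2 = 3
b = np.array([3.0])

# Projection onto the valid subspace: x -> P x + s keeps Ax = b exactly.
P = np.eye(2) - A.T @ np.linalg.inv(A @ A.T) @ A
s = A.T @ np.linalg.inv(A @ A.T) @ b

x = P @ np.zeros(2) + s                  # start from a feasible point
eta = 0.1                                # step size (assumption)
for _ in range(200):
    grad = Q @ x + c                     # optimization stage
    x = P @ (x - eta * grad) + s         # constraint stage: project back

print("solution:", x, "constraint residual:", A @ x - b)
```

Because the constraint is enforced by projection rather than by penalty terms in the objective, no penalty or weighting parameters have to be tuned, which mirrors the advantage claimed in the abstract.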
Abstract:
Distributed network utility maximization (NUM) is receiving increasing interest for cross-layer optimization problems in multihop wireless networks. Traditional distributed NUM algorithms rely heavily on feedback information between different network elements, such as traffic sources and routers. Because of the distinct features of multihop wireless networks, such as time-varying channels and dynamic network topology, the feedback information is usually inaccurate, which represents a major obstacle to applying distributed NUM to wireless networks. The questions to be answered include whether a distributed NUM algorithm can converge with inaccurate feedback and how to design an effective distributed NUM algorithm for wireless networks. In this paper, we first use the infinitesimal perturbation analysis technique to provide an unbiased gradient estimate of the aggregate rate of traffic sources at the routers based on locally available information. On that basis, we propose a stochastic approximation algorithm to solve the distributed NUM problem with inaccurate feedback. We then prove that, under certain conditions, the proposed algorithm converges to the optimum solution of distributed NUM with perfect feedback. The proposed algorithm is applied to the joint rate and media access control problem for wireless networks. Numerical results demonstrate the convergence of the proposed algorithm. © 2013 John Wiley & Sons, Ltd.
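A hedged sketch of the stochastic-approximation idea follows (not the paper's algorithm, which builds its gradient estimates with infinitesimal perturbation analysis): on a single bottleneck link, the link price is updated from a noisy measurement of the aggregate source rate using diminishing Robbins-Monro step sizes, and log-utility sources react to the price. The utilities, capacity, noise level and step-size constant are illustrative assumptions.

```python
# A minimal sketch, assuming a single shared link and made-up utilities:
# dual-decomposition NUM with noisy rate feedback and Robbins-Monro steps.
import random

weights = [1.0, 2.0, 3.0]      # source i maximizes w_i * log(x_i)
capacity = 10.0                # shared link capacity
price = 1.0                    # initial link price (dual variable)

random.seed(0)
for t in range(1, 20001):
    # Each source reacts to the current price: argmax_x w_i*log(x) - price*x = w_i/price
    rates = [w / price for w in weights]
    # The link only observes a noisy estimate of the aggregate rate (inaccurate feedback)
    noisy_aggregate = sum(rates) + random.gauss(0.0, 1.0)
    # Stochastic-approximation price update with diminishing step size a_t = a0 / t
    step = 0.05 / t
    price = max(0.05, price + step * (noisy_aggregate - capacity))

optimal_price = sum(weights) / capacity      # closed-form optimum for this toy case
print(f"learned price {price:.3f} vs optimal {optimal_price:.3f}")
print("rates:", [round(w / price, 3) for w in weights])
```

Because the step sizes are square-summable but not summable, the noise in the feedback averages out over time, which is the intuition behind convergence despite inaccurate measurements.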