977 results for resilience management
Abstract:
The rise of the "Internet of Things" (IoT) and its associated technologies has enabled its application in a wide range of domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart city services, building energy efficiency, intrusion detection, and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended: it is exposed, among other things, to changing weather conditions, natural disasters, software or hardware faults, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement for the nodes that make up an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as the "resilience" of the network. Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise during its operation. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network may become disconnected for several reasons, such as battery depletion or the destruction of a node. It therefore becomes necessary to manage the resilience of the network in order to maintain connectivity among its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, and more specifically the development of models for the analysis and management of resilience, put into practice through Wireless Sensor Networks (WSNs), with the aim of improving the fault tolerance of the nodes that compose the network. This challenge is addressed from two distinct perspectives: on the one hand, unlike other kinds of conventional device networks, the nodes of an IoT network are prone to losing connectivity because they are deployed in isolated environments or in environments with extreme conditions; on the other hand, the nodes are usually resource-constrained in terms of processing, storage, and battery, among others, which requires the design of their resilience management to be lightweight, distributed, and energy-efficient. In this regard, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used with the aim of improving the connectivity of the network, while taking into account that energy consumption must be preserved as much as possible.
Likewise, the control algorithm must be distributed because, in general, centralized approaches are not feasible for large-scale deployments. This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of a self-adaptive control system that responds to node failures, the optimization of the control system parameters, validation through an implementation following a software engineering approach, and finally evaluation in a real application. In response to these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in the neighborhood of a given node) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to node failures, taking energy consumption into account as part of the control objectives. This work also carries out an evaluation and comparison with other representative algorithms, showing that the developed approach tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has made it possible to optimize the control parameters of large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into the OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, in particular the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to the theoretical research and practical application of resilient topology control in large distributed networks. The presented designs and algorithms can be seen as novel trials of some of these techniques for the coming era of the IoT. The main contributions of this thesis are summarized as follows: (1) Properties related to network connectivity have been analyzed mathematically. For example, it is studied how the probability of network connectivity varies as the communication range of the nodes is modified, and what the minimum number of nodes is that must be added to a disconnected system to reconnect it. (2) Control systems based on fuzzy logic have been proposed to achieve the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulations, and the results have been compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated in greater depth, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy (CE) method, Particle Swarm Optimization (PSO), and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated through simulation; in addition, part of the work has been implemented and validated in a real application by combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA).
ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city services, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks. So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can further be broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for analysis and management of resilience, implemented through Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires the design of resilience management for the IoT to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems in response to node failures, control system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the power transmission of the IoT nodes in response to node failures.
The controllers also take energy consumption into consideration as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the programming model called OSGi (Open Services Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, in particular one that reconfigures software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerance probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of neighbors is that must be added for network re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulation, and the results demonstrate improved fault tolerance compared with some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining self-adaptive software techniques with the control algorithms presented in this thesis.
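The central control idea summarized above (each node runs a lightweight, distributed loop that steers its own transmission power toward a target node degree, raising power when it loses neighbours and lowering it to save energy) can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical illustration of such a per-node PID loop; the class name, gains, power limits, and sample degree readings are assumptions made for exposition and do not reproduce the thesis's actual fuzzy-logic or two-loop controllers, whose parameters are tuned with CE, PSO, or DE.

# Minimal sketch of a per-node degree controller in the spirit of the thesis:
# a PID loop adjusts a node's transmission power so that its locally observed
# node degree (number of neighbours) tracks a desired target, trading
# connectivity against energy. All names, gains and limits are illustrative
# assumptions, not the thesis's actual implementation.
class DegreePIDController:
    def __init__(self, target_degree, kp=0.6, ki=0.05, kd=0.1,
                 p_min=-10.0, p_max=4.0):
        self.target = target_degree             # desired number of neighbours
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max   # assumed radio limits (dBm)
        self._integral = 0.0
        self._prev_error = 0.0

    def update(self, observed_degree, current_power_dbm, dt=1.0):
        """Return a new transmission power given the locally observed degree."""
        error = self.target - observed_degree   # positive -> too few neighbours
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        correction = (self.kp * error
                      + self.ki * self._integral
                      + self.kd * derivative)
        # Raise power when under-connected, lower it to save energy otherwise,
        # always staying within the radio's feasible range.
        return min(self.p_max, max(self.p_min, current_power_dbm + correction))

if __name__ == "__main__":
    ctrl = DegreePIDController(target_degree=6)
    power = 0.0  # dBm
    # Simulated neighbour counts as some nodes fail and the topology recovers.
    for degree in [6, 5, 3, 2, 4, 6, 7]:
        power = ctrl.update(degree, power)
        print(f"observed degree={degree} -> new tx power={power:.2f} dBm")

In a fuzzy-logic variant, the same degree error and its derivative would be mapped through a rule base instead of the fixed gains used in this sketch.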
Abstract:
As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of a digital system and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike the common assumption in 2D ICs that shutdown gates are cheap and can therefore be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and the reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
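The dynamic resilience management idea can be pictured as a runtime feedback loop between the CPU's operating point and the stacked DRAM's thermal and voltage-noise condition. The sketch below is a deliberately simplified, hypothetical Python illustration of that loop: frequency is scaled down when an assumed DRAM soft-error budget is threatened and scaled back up ("borrowing" resilience margin) when there is headroom. The operating points, error model, and thresholds are illustrative assumptions, not the dissertation's models.

# Illustrative sketch (not the dissertation's implementation) of a dynamic
# resilience management (DRM) loop: CPU frequency is scaled at runtime so the
# stacked DRAM stays within an assumed soft-error budget, borrowing
# performance when the resilience margin is large and giving it back when the
# margin shrinks. Frequencies, the error model and thresholds are hypothetical.
FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed DVFS operating points

def estimated_dram_error_rate(cpu_freq_ghz, dram_temp_c):
    """Toy proxy: bit-cell leakage (and thus bit-flip likelihood) grows with
    the voltage noise and heat injected by the CPU layer, both of which rise
    with frequency and temperature."""
    return 1e-9 * (cpu_freq_ghz ** 2) * (1.0 + 0.03 * (dram_temp_c - 45.0))

def drm_step(current_idx, dram_temp_c, error_budget=2e-8):
    """Pick the next operating point: throttle if the budget is exceeded,
    otherwise borrow resilience margin to run one step faster."""
    rate = estimated_dram_error_rate(FREQ_STEPS_GHZ[current_idx], dram_temp_c)
    if rate > error_budget and current_idx > 0:
        return current_idx - 1                     # scale down to protect DRAM
    if rate < 0.5 * error_budget and current_idx < len(FREQ_STEPS_GHZ) - 1:
        return current_idx + 1                     # ample margin: scale up
    return current_idx                             # hold

if __name__ == "__main__":
    idx = 2
    for temp in [50, 58, 66, 72, 64, 55]:          # sampled DRAM temperatures (C)
        idx = drm_step(idx, temp)
        print(f"DRAM temp {temp} C -> CPU at {FREQ_STEPS_GHZ[idx]} GHz")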
Abstract:
Business longevity has been a recurring theme in the management literature. Despite the advances made, the number of companies going into liquidation keeps rising. In search of ways to improve this situation, this study examines the case of two forty-year-old firms providing consulting services in electrical and civil engineering which, under crisis conditions, implemented actions that allowed them not only to remain in the market but also to strengthen their financial structure. The results showed that a balanced approach, characterized by timely decision-making and the definition and implementation of effective business strategies, provides an optimal set of tools for ensuring a higher degree of organizational resilience.
Abstract:
This paper presents an approach to developing indicators for expressing the resilience of a generic water supply system. The system is contextualised as a meta-system consisting of three subsystems representing the water catchment and reservoir, the treatment plant, and the distribution system supplying the end-users. The level of final service delivery to end-users is taken as a surrogate measure of systemic resilience. A set of modelled relationships is used to explore the interactions between system components when placed under simulated stress. Conceptual system behaviour under specific types of simulated pressure is generated to illustrate parameters for indicator development. The approach is based on the hypothesis that an in-depth knowledge of resilience would enable the development of decision-support capability, which in turn will contribute towards enhanced management of a water supply system. In contrast to conventional water supply system management approaches, a resilience approach facilitates improvement in system efficiency by emphasising awareness of points of intervention where system managers can adjust operational control measures across the meta-system (and within subsystems), rather than expanding the system in its entirety in the form of new infrastructure development.
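To make the surrogate-measure idea concrete, the following Python sketch shows one possible way such an indicator could be computed for a three-subsystem meta-system under simulated stress: end-user delivery is bounded by the weakest subsystem, and resilience is read off as the average fraction of demand met over the event. The capacities, stress profile, and averaging choice are hypothetical illustrations, not the paper's modelled relationships.

# Hypothetical sketch of the indicator idea: the water system is treated as
# three subsystems in series (catchment/reservoir -> treatment -> distribution),
# delivery to end-users is limited by the weakest link, and a resilience
# indicator is the fraction of demand met, averaged over a simulated stress
# event. All numbers are illustrative assumptions.
def delivered_service(demand, catchment_cap, treatment_cap, distribution_cap):
    """End-user delivery is limited by demand and by the tightest subsystem."""
    return min(demand, catchment_cap, treatment_cap, distribution_cap)

def resilience_indicator(demand, capacity_series):
    """Mean fraction of demand met across the time steps of a stress event."""
    fractions = [
        delivered_service(demand, c, t, d) / demand
        for (c, t, d) in capacity_series
    ]
    return sum(fractions) / len(fractions)

if __name__ == "__main__":
    demand = 100.0  # arbitrary units per time step
    # Simulated stress: treatment capacity dips (e.g. a plant outage), then recovers.
    capacities = [
        (120, 110, 115),
        (120, 70, 115),   # onset of stress on the treatment subsystem
        (120, 55, 115),
        (120, 80, 115),
        (120, 110, 115),  # recovery
    ]
    print(f"Resilience indicator over the event: {resilience_indicator(demand, capacities):.2f}")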
Abstract:
Climate change is predicted to increase the frequency and severity of extreme weather events, which pose significant challenges to the ability of government and other relief agencies to plan for, cope with and respond to disasters. Consequently, it is important that communities in climate-sensitive and potentially disaster-prone areas strengthen their resilience to natural disasters in order to recover expeditiously from potential disruptions and damage caused by disasters. Building self-reliance, particularly in the immediate aftermath of a disaster, can facilitate both short-term and long-term community recovery. To build stronger and more resilient communities, it is essential to have a better understanding of their current resilience capabilities by assessing areas of strength, risks and vulnerabilities, so that strengths can be enhanced and risks and vulnerabilities appropriately addressed and mitigated through capacity-building programs. While a number of conceptual frameworks currently exist for assessing the resilience of communities to disasters, they tend to differ in their emphasis, scope and definition of what constitutes community resilience and of how community resilience can be most effectively and accurately assessed. These limitations are attributed to the common approach of viewing community resilience through a mono-disciplinary lens. To overcome this, this paper proposes an integrated conceptual framework that takes into account the complex interplay of environmental, social, governance, infrastructure and economic attributes associated with community resilience. The framework can be operationalised using a range of resilience indicators suited to the nature of a disaster and the specific characteristics of a study region.
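As one hypothetical illustration of how such a framework might be operationalised, the Python sketch below groups normalised indicator scores under the five attribute domains named above and combines them into domain scores and a weighted composite. The indicator names, scores, and equal weights are assumptions for exposition, not part of the paper's framework.

# Illustrative sketch (not the paper's framework): indicator scores are grouped
# under the five attribute domains, normalised to [0, 1], and aggregated into
# domain scores and a weighted composite resilience score.
DOMAIN_WEIGHTS = {
    "environmental": 0.2,
    "social": 0.2,
    "governance": 0.2,
    "infrastructure": 0.2,
    "economic": 0.2,
}

def domain_score(indicators):
    """Average of normalised indicator scores (each in [0, 1]) for one domain."""
    return sum(indicators.values()) / len(indicators)

def composite_resilience(assessment):
    """Weighted sum of domain scores; weights could be tuned to the hazard type
    and the study region, as the abstract suggests."""
    return sum(DOMAIN_WEIGHTS[d] * domain_score(ind) for d, ind in assessment.items())

if __name__ == "__main__":
    example = {
        "environmental": {"floodplain_exposure": 0.4, "ecosystem_condition": 0.7},
        "social": {"community_networks": 0.8, "vulnerable_population": 0.5},
        "governance": {"emergency_planning": 0.6, "agency_coordination": 0.7},
        "infrastructure": {"lifeline_redundancy": 0.5, "building_standards": 0.6},
        "economic": {"income_diversity": 0.6, "insurance_coverage": 0.4},
    }
    print(f"Composite resilience score: {composite_resilience(example):.2f}")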
Abstract:
It is only in recent years that the critical role spatial data can play in disaster management and in strengthening community resilience has been recognised. The recognition of this importance is singularly evident from the fact that in Australia spatial data is considered soft infrastructure. In the aftermath of every disaster this importance is further reinforced, with state agencies paying greater attention to ensuring the availability of accurate spatial data based on the lessons learnt. For example, the major flooding in Queensland during the summer of 2011 resulted in a comprehensive review of responsibilities and accountability for the provision of spatial information during such natural disasters. A high-level commission of enquiry completed a comprehensive investigation of the 2011 Brisbane flood inundation event and made specific recommendations concerning the collection of, and accessibility to, spatial information for disaster management and for strengthening community resilience during and after a natural disaster. The lessons learnt and the processes implemented were subsequently tested by natural disasters in the following years. This paper provides an overview of the practical implementation of the recommendations of the commission of enquiry. It focuses particularly on the measures adopted by the state agencies with the primary role of managing spatial data, and on the evolution of this role in Queensland State, Australia. The paper concludes with a review of the development of this role and the increasing importance of spatial data as an infrastructure for disaster planning and management that promotes the strengthening of community resilience.
Abstract:
Understanding the dynamics of interactions between community groups and government agencies is crucial to improving community resilience for flood risk reduction through effective community engagement strategies. Overall, a variety of approaches are available; however, they are limited in their application. Based on research into a case study in Kampung Melayu Village in Jakarta, further complexity in engaging the community emerges in planning policy that requires the relocation of households living in floodplains. This complexity arises in decision-making processes due to barriers to communication. This obstacle highlights the need for a simplified approach to effective flood risk management, which will be further explored in this paper. Qualitative analyses will be undertaken following semi-structured interviews conducted with key actors within government agencies, non-governmental organisations (NGOs), and representatives of communities. The analyses involve investigation of the barriers and constraints on community engagement in flood risk management, particularly those relevant to collaboration mechanisms, risk perception, and technical literacy regarding flood risk. These analyses result in a potential redirection of community consultation strategies to lead to more effective collaboration among stakeholders in decision-making processes. As a result, greater effectiveness in the implementation of flood risk management plans can potentially improve disaster resilience in the future.
Abstract:
In today’s world, supply chains are becoming more complex and more vulnerable due to the increased interdependency of multiple threats. This paper investigates the sources of vulnerability in the context of the sustainable supply chain in order to minimize the impact of uncertain events. A capability-based perspective is discussed in this paper to understand the strategies that improve the resilience of the supply chain. The paper argues that organisations must think beyond their boundaries to accumulate or integrate network resources and develop critical collaborative capabilities across the supply chain in order to successfully cope with future disruptions.
Abstract:
Indigenous communities have actively managed their environments for millennia using a diversity of resource use and conservation strategies. Clam gardens, ancient rock-walled intertidal beach terraces, represent one example of an early mariculture technology that may have been used to improve food security and confer resilience to coupled human-ocean systems. We surveyed a coastal landscape for evidence of past resource use and management to gain insight into ancient resource stewardship practices on the central coast of British Columbia, Canada. We found that clam gardens are embedded within a diverse portfolio of resource use and management strategies and were likely one component of a larger, complex resource management system. We compared clam diversity, density, recruitment, and biomass in three clam gardens and three unmodified nonwalled beaches. Evidence suggests that butter clams (Saxidomus gigantea) had 1.96 times the biomass and 2.44 times the density in clam gardens relative to unmodified beaches. This was due to a reduction in beach slope and thus an increase in the optimal tidal range where clams grow and survive best. The most pronounced differences in butter clam density between nonwalled beaches and clam gardens were found at high tidal elevations at the top of the beach. Finally, clam recruits (0.5-2 mm in length) tended to be more abundant in clam gardens than in nonwalled beaches, a difference that may be attributed to the addition of shell hash by ancient people, which remains on the landscape today. As part of a broader social-ecological system, clam garden sites were among several modifications made by humans that collectively may have conferred resilience to past communities by providing reliable and diverse access to food resources.
Abstract:
Prior resilience research typically focuses on either the individual or the organisational level of analysis, emphasises resilience in relation to day-to-day stressors rather than extreme events and is empirically under-developed. In response, our study inductively theorises about the relationships between individual and organisational resilience, drawing upon a large-scale study of resilience work in UK and French organisations. Our first-hand accounts of resilience work reveal the micro-processes involved in producing resilient organisations, and highlight the challenges experienced in doing resilience work in large organisations. We show that these micro-processes have significant implications for resilience at both individual and organisational levels, and draw implications for how HRM interventions can help to promote individual, and thus organisational, resilience.
Abstract:
Public and private sector organisations worldwide are putting strategies in place to manage the commercial and operational risks of climate change. However, community organisations are lagging behind in their understanding and preparedness, despite them being among the most exposed to the effects of climate change impacts and regulation. This poster presents a proposal for a multidisciplinary study that addresses this issue by developing, testing and applying a novel climate risk assessment methodology that is tailored to the needs of Australia’s community sector and its clients. Strategies to mitigate risks and build resilience and adaptive capacity will be identified including new opportunities afforded by urban informatics, social media, and technologies of scale making.