930 results for Event Management
Abstract:
The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How can the impact of disturbances in (food) SC processes on the robustness of (food) SC performances be systematically assessed?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and define the state of the art of knowledge on the related topics. For the second and third RQ, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs, and to provide a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature we identified the main elements needed to achieve robust performances and structured them into a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessment, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific KPI in the case of a disturbance, how often this happens and how long it lasts. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
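The abstract does not spell out the underlying VULA metrics, but the idea of quantifying how much, how often and how long a KPI underperforms lends itself to a simple illustration. The Python sketch below is not the thesis's implementation; the names `kpi_series` and `target`, and the use of a single fixed threshold, are assumptions made only for this example.

```python
from typing import List, Dict

def assess_kpi_robustness(kpi_series: List[float], target: float) -> Dict[str, float]:
    """Summarise how much, how often and how long a simulated KPI
    falls below a target level (illustrative only)."""
    episodes = []                  # (length, total shortfall) per violation episode
    length, shortfall = 0, 0.0
    for value in kpi_series:
        if value < target:         # KPI is underperforming
            length += 1
            shortfall += target - value
        elif length:               # an episode just ended
            episodes.append((length, shortfall))
            length, shortfall = 0, 0.0
    if length:                     # the series ended inside an episode
        episodes.append((length, shortfall))

    if not episodes:
        return {"episodes": 0, "avg_duration": 0.0, "avg_shortfall": 0.0}
    return {
        "episodes": len(episodes),                                    # how often
        "avg_duration": sum(l for l, _ in episodes) / len(episodes),  # how long
        "avg_shortfall": sum(s for _, s in episodes) / len(episodes), # how much
    }

# Example: daily service level (%) taken from a discrete-event simulation run
print(assess_kpi_robustness([98, 97, 91, 89, 95, 99, 90, 96], target=95))
```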
To sum up the project, all findings were incorporated into an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance-impact-reducing redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies and repetition of the VULA method for the KPIs defined in Step 3.
Contributions of this research are as follows. First, we have identified the emerging research areas of SC robustness and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and the specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs, while processes and SC networks are influenced by the DMPs. Fourth, we developed specific metrics for vulnerability assessment, which serve as the basis of the VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.
Abstract:
One of the crucial aspects of disaster management in emergency situations is the early assessment of needs and damages. In most disaster situations, higher fatality and casualty rates result from a lack of access to timely emergency services rather than from the initial disaster itself. This is usually caused by a lack of access to the affected area, which prevents proper assessment of the situation and the delivery of relevant and urgent measures. Cognitive wireless sensor networks provide an opportunity to overcome this situation, especially through interconnection via mobile systems. This paper presents a cognitive wireless sensor mobile networks-based framework (CoWiSMoN), designed to offer real-time emergency services to victims and rescue personnel in the event of disasters. Critical issues underlying the implementation of such a system are discussed and analyzed.
Abstract:
To provide timely reactions to a large volume of surveillance data, uncertainty-enabled event reasoning frameworks for CCTV- and sensor-based intelligent surveillance systems have been integrated to model and infer events of interest. However, most of the existing works do not consider decision making under uncertainty, which is important for surveillance operators. In this paper, we extend an event reasoning framework for decision support, which enables our framework to predict, rank and raise alarms on threats from multiple heterogeneous sources.
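The abstract does not describe the ranking model itself; purely as an illustrative sketch (not the authors' framework), threats inferred from multiple sources could be ordered by expected impact, i.e. the inferred probability weighted by a severity score, with an alarm raised above a threshold. All class names, fields and numbers below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    source: str         # e.g. "CCTV-3" or "door-sensor-12"
    event: str          # inferred event of interest
    probability: float  # belief produced by the event-reasoning layer (0..1)
    severity: float     # operator-defined impact score

def rank_threats(threats, alarm_threshold=2.0):
    """Rank threats by expected impact and flag those above a threshold."""
    ranked = sorted(threats, key=lambda t: t.probability * t.severity, reverse=True)
    return [(t, t.probability * t.severity >= alarm_threshold) for t in ranked]

observations = [
    Threat("CCTV-3", "loitering", 0.7, 2.0),
    Threat("door-sensor-12", "forced entry", 0.4, 8.0),
]
for threat, alarm in rank_threats(observations):
    print(f"{threat.event:12s} score={threat.probability * threat.severity:.1f} alarm={alarm}")
```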
Abstract:
On June 27th 2012, the Deputy First Minister of Northern Ireland and former IRA commander, Martin McGuinness, shook hands with Queen Elizabeth II for the first time at an event in Belfast. For many, the gesture symbolised the consolidation of Northern Ireland's transition to peace, the meeting of cultures and traditions, and hope for the future. Only a few weeks later, however, violence spilled onto the streets of north and west Belfast following a series of commemorative parades, marking a summer of hostilities. Those hostilities spread into a winter of protest, riot and discontent around flags and emblems, and into a year of tensions and commemorative-related violence marked again by a summer of rioting and protest in 2013. Outwardly these examples present two very different pictures of the 'new' Northern Ireland: the former of a society moving forward and putting the past behind it, the latter of one apparently divided over and wedded to different constructions of the past. Furthermore, they reveal two very different 'places': the public handshake in the arena of public space, and the rioting and fighting in spaces distanced from the public sphere. This paper also illustrates the difficulties around the 'public management' of conflict and transition, as many within public agencies struggle with duties to uphold good relations and promote good governance within an environment of political strife, hostility and continuing violence.
This paper presents the key findings and implications of an exploratory project, funded by the Arts and Humanities Research Council, that explored the phenomenon of commemorative-related violence in Northern Ireland. We focus on 1) why the performance or celebration of the past can sometimes lead to violence in specific places; 2) mapping and analysing the levels of commemorative-related violence over the past 15 years; and 3) the public management implications of both conflict and transition at a strategic level within the public sector.
Abstract:
The UK’s transport infrastructure is one of the most heavily used in the world. The performance of these networks is critically dependent on the performance of cutting and embankment slopes, which make up £20B of the £60B asset value of major highway infrastructure alone. The rail network in particular is also one of the oldest in the world, and many of its slopes are suffering a high incidence of instability (increasing with time). This paper describes the development of a fundamental understanding of earthwork material and system behaviour through the systematic integration of research across a range of spatial and temporal scales. Spatially these range from microscopic studies of soil fabric, through elemental material behaviour, to whole-slope modelling and monitoring, and scaling up to transport networks. Temporally, historical and current weather event sequences are being used to understand and model soil deterioration processes, and climate change scenarios are used to examine their potential effects on slope performance in futures up to and including the 2080s. The outputs of this research are being mapped onto the different spatial and temporal scales of infrastructure slope asset management, to inform everything from the design of new slopes to changing the way in which investment is made in ageing assets. The ultimate aim is to help create a more reliable, cost-effective, safer and more resilient transport system.
Abstract:
Without human beings and human activities, hazards can strike but disasters cannot occur; disasters are not just natural phenomena but social events (Van Der Zon, 2005). The rapid demand for reconstruction after disastrous events can result in the impacts of projects not being carefully considered from the outset, and in the opportunity to improve long-term physical and social community structures being neglected. The events that struck Banda Aceh in 2004 have been described as a story of 'two tsunamis', the first being the natural hazard that struck and the second being the destruction of social structures that occurred as a result of an unplanned, unregulated and uncoordinated response (Syukrizal et al, 2009). Measures must be in place to ensure that, while reconstruction needs are met as rapidly as possible, the risk of recurring disaster impacts is reduced through both the physical structures and the capacity of the community who inhabit them. The paper explores issues facing reconstruction in a post-disaster scenario, drawing on the connections between physical and social reconstruction in order to address long-term recovery solutions. It draws on a study of relevant literature and a six-week pilot study in Haiti exploring the progress of recovery in the Haitian capital and the limitations still restricting reconstruction efforts. The study highlights the need for recovery management strategies that recognise the link between social and physical reconstruction and the significance of community-based initiatives in which local residents drive recovery in terms of debris handling and rebuilding. It demonstrates how a community-driven approach to physical reconstruction could also address the social impacts of events that, in places such as Haiti, are still dramatically restricting recovery efforts.
Abstract:
Distributed Embedded Systems (DES) have been used over recent years in many application domains, from robotics and industrial process control to avionics and vehicular applications, and this trend is expected to continue in the coming years. Dependability is an important property in these domains, since services must be delivered in a timely and predictable manner; otherwise, economic losses may occur or human lives may be put at risk. At design time it is impossible to foresee all failure scenarios, due to the non-determinism of the surrounding environment, which makes the inclusion of fault-tolerance mechanisms necessary. Additionally, some of these applications require high bandwidth, which may also be used to evolve the systems by adding new functionalities. Flexibility is an important system property, as it allows adaptation to the surrounding conditions and requirements and also contributes to simpler maintenance and repair. In embedded systems, flexibility is also important because it enables better use of the often scarce available resources. An obvious way to increase both the bandwidth and the fault tolerance of distributed embedded systems is to replicate the system buses. Some existing solutions, both commercial and academic, propose bus replication either to increase bandwidth or to increase fault tolerance. However, almost invariably, the purpose is only one of the two, and solutions that provide both higher bandwidth and increased fault tolerance are rare. One of these rare examples is FlexRay, with the limitation that only two buses may be used. This thesis presents and discusses a proposal to use bus replication in a flexible way with the dual objective of increasing bandwidth and fault tolerance. The flexibility of the proposed protocols also allows dynamic management of the network topology, with the number of buses limited only by the hardware/software. The proposals of this thesis were validated using the CAN (Controller Area Network) fieldbus, chosen because of its wide market adoption. More specifically, the proposed solutions were implemented and validated using a paradigm that combines flexibility with event-triggered and time-triggered communications: FTT (Flexible Time-Triggered). A generalisation to native CAN is also presented and discussed. The inclusion of bus replication mechanisms requires changes to the previous master-node replication and replacement protocols, as well as the definition of new protocols for this purpose. This work takes advantage of the centralised architecture and of master-node replication to support bus replication in an efficient and flexible way. If a fault occurs in one or more buses that could cause a system failure, the protocols and components proposed in this thesis make the system react by switching to a degraded operating mode. Messages that were being transmitted on the faulty buses are rerouted to the remaining buses.
Master-node replication is based on a leader-followers strategy, in which the leader controls the whole system while the followers act as backup nodes. If an error occurs in the leader node, one of the follower nodes takes over control of the system transparently, maintaining the same functionality. The proposals of this thesis were also generalised to native CAN, for which two additional components were proposed. In this way, the same bus-level fault-tolerance capabilities can be obtained together with dynamic management of the network topology. All the proposals of this thesis were implemented and evaluated. An initial implementation with a single bus was evaluated using a real application, a robotic soccer team in which the FTT-CAN protocol was used for motion control and odometry. The evaluation of the multi-bus system was carried out on a laboratory test platform. For this purpose, a fault-injection system was developed that allows faults to be imposed on the buses and on the master nodes, together with a delay-measurement system intended to measure the response time after the occurrence of a fault.
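As a rough illustration of the leader-followers takeover and the rerouting of messages away from a faulty bus described above, the following Python sketch models the control flow only; it is not the FTT-CAN protocol implementation, and all names (`ReplicatedMaster`, `CAN-A`, `CAN-B`, the message identifiers) are invented for the example.

```python
class ReplicatedMaster:
    """Toy model of leader-followers master replication and bus rerouting
    (illustrative only, not the FTT-CAN protocol)."""

    def __init__(self, masters, buses):
        self.masters = list(masters)      # masters[0] is the current leader
        self.healthy_buses = set(buses)

    @property
    def leader(self):
        return self.masters[0]

    def master_failed(self, master):
        """Promote the next follower when the leader fails."""
        self.masters.remove(master)
        if not self.masters:
            raise RuntimeError("no master replica left")

    def bus_failed(self, bus, pending_messages):
        """Enter degraded mode: reroute traffic from the faulty bus."""
        self.healthy_buses.discard(bus)
        if not self.healthy_buses:
            raise RuntimeError("all buses failed")
        target = sorted(self.healthy_buses)[0]
        return {msg: target for msg in pending_messages}

system = ReplicatedMaster(masters=["M1", "M2", "M3"], buses=["CAN-A", "CAN-B"])
system.master_failed("M1")                           # follower M2 becomes leader
print(system.leader)                                 # -> M2
print(system.bus_failed("CAN-A", ["msg7", "msg9"]))  # both rerouted to CAN-B
```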
Abstract:
Variability management is one of the main activities in the Software Product Line Engineering process. Common and varied features of related products are modelled along with the dependencies and relationships among them. With the increase in size and complexity of product lines and the more holistic systems approach to the design process, managing the ever-growing variability models has become a challenge. In this paper, we present MUSA, a tool for managing variability and features in large-scale models. MUSA adopts the Separation of Concerns design principle by providing multiple perspectives on the model, each conveying a different set of information. The demonstration is conducted using a real-life model (comprising 1000+ features), showing in particular the Structural View, which is displayed using a mind-mapping visualisation technique (hyperbolic trees), and the Dependency View, which is displayed graphically using logic gates.
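MUSA's internal model is not detailed in the abstract; the sketch below only illustrates the kind of information a dependency view conveys, by checking a feature selection against "requires" and "excludes" constraints. The feature names and rules are invented for illustration and do not come from the real-life model mentioned above.

```python
def valid_configuration(selected, requires, excludes):
    """Check a feature selection against requires/excludes dependencies."""
    for feature, needed in requires:
        if feature in selected and needed not in selected:
            return False, f"{feature} requires {needed}"
    for a, b in excludes:
        if a in selected and b in selected:
            return False, f"{a} excludes {b}"
    return True, "ok"

selected = {"bluetooth", "basic_ui"}
requires = [("bluetooth", "radio_stack")]   # bluetooth -> radio_stack
excludes = [("basic_ui", "touch_ui")]       # mutually exclusive UIs
print(valid_configuration(selected, requires, excludes))
# -> (False, 'bluetooth requires radio_stack')
```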
Abstract:
The large penetration of intermittent resources, such as solar and wind generation, calls for the use of storage systems in order to improve power system operation. Electric Vehicles (EVs) with gridable (V2G) capability can operate as a means of storing energy. This paper proposes an algorithm to be included in a SCADA (Supervisory Control and Data Acquisition) system, which performs intelligent management of three types of consumers, domestic, commercial and industrial, including the joint management of loads and the charging/discharging of EV batteries. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper: the SCADA House Intelligent Management (SHIM) system. Any event in the system, such as a Demand Response (DR) event, triggers an optimization algorithm that performs the optimal scheduling of energy resources (including loads and EVs), taking into account the priorities of each load defined by the installation users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
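SHIM's optimization algorithm is not given in the abstract; as a minimal sketch of priority-aware load management during a DR event (a greedy stand-in, not the authors' optimal scheduling), lower-priority loads could be shed first until the requested reduction is reached. The load names, powers and priorities below are assumptions.

```python
def curtail_for_dr(loads, reduction_target_kw):
    """Greedy sketch: shed lowest-priority loads until the DR target is met.

    `loads` is a list of (name, power_kw, priority); higher priority means
    more important to the user, so low-priority loads are shed first.
    """
    shed, achieved = [], 0.0
    for name, power, priority in sorted(loads, key=lambda l: l[2]):
        if achieved >= reduction_target_kw:
            break
        shed.append(name)
        achieved += power
    return shed, achieved

household = [
    ("water_heater", 1.5, 1),   # lowest priority: shed first
    ("ev_charger",   3.0, 2),   # charging can be deferred
    ("hvac",         2.0, 4),
    ("fridge",       0.2, 5),   # highest priority: shed last
]
print(curtail_for_dr(household, reduction_target_kw=4.0))
# -> (['water_heater', 'ev_charger'], 4.5)
```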
Abstract:
In future power systems, under the smart grid and microgrid operation paradigms, consumers can be seen as energy resources with decentralized and autonomous decisions in energy management. It is expected that each consumer will manage not only loads, but also small generation units, heating systems, storage systems, and electric vehicles. Each consumer can participate in different demand response events promoted by system operators or aggregation entities. This paper proposes an innovative method to manage the appliances in a house during a demand response event. The main contribution of this work is the inclusion of time constraints in resource management and of context evaluation, in order to ensure the required comfort levels. The dynamic resource management methodology allows better management of resources during a demand response event, especially events of long duration, by changing the priorities of loads during the event. A case study with two scenarios is presented, considering a demand response event with a duration of 30 min and another of 240 min (4 h). In both simulations, the demand response event requires a reduction of power consumption during the event. A total of 18 loads are used, including real and virtual ones, controlled by the presented house management system.
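The paper's context evaluation is not detailed here; the following is a minimal sketch, assuming a simple linear weighting, of how a load's effective priority might rise the longer it stays curtailed, so that long DR events rotate which loads are shed. The tolerance values and the weighting factor are invented for illustration and are not the paper's methodology.

```python
def effective_priority(base_priority, minutes_off, tolerance_minutes):
    """Raise a load's priority the longer it has been curtailed, so that a
    long demand-response event rotates which loads are shed (illustrative)."""
    pressure = minutes_off / tolerance_minutes   # 0 when just shed, 1 at the user's limit
    return base_priority + pressure * 10         # weighting factor chosen arbitrarily

# The water heater starts as the least important load, but after 40 of its
# 45 tolerated minutes off it outranks the HVAC, which would then be shed instead.
print(effective_priority(base_priority=1, minutes_off=40, tolerance_minutes=45))  # ~9.9
print(effective_priority(base_priority=4, minutes_off=0,  tolerance_minutes=60))  # 4.0
```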
Abstract:
Multi-agent approaches have been widely used to model complex systems of a distributed nature with a large number of interactions between the involved entities. Power systems are a reference case, mainly due to the increasing use of distributed energy sources, largely based on renewables, which have driven huge changes in the power sector. Dealing with such a large-scale integration of intermittent generation sources has led to the emergence of several new players, as well as the development of new paradigms, such as the microgrid concept, and the evolution of demand response programs, which encourage the active participation of consumers. This paper presents a multi-agent based simulation platform that models a microgrid environment, considering several different types of simulated players. These players interact with real physical installations, creating a realistic simulation environment whose results can be observed directly in reality. A case study is presented considering players' responses to a demand response event, resulting in an intelligent increase of consumption in order to absorb the wind generation surplus.
Abstract:
This study examined the operational planning, implementation and execution issues of major sport events, as well as the mitigation and management strategies used to address these issues, with the aim of determining best practices in sport event operational planning. The three Research Questions were: 1) What can previous major sport events provide to guide the operational management of future events? 2) What are the operational issues that arise in the planning and execution of a major sport event, how are they mitigated and what are the strategies used to deal with these issues? 3) What are the best practices for sport event operational planning and how can these practices aid future events? Data collection involved a modified Delphi technique that consisted of one round of in-depth interviews followed by two rounds of questionnaires. Both data collection and analysis were guided by an adaptation of the work of Parent, Rouillard & Leopkey (2011) with a focus on previously established issue and strategy categories. The results provided a list of Top 26 Prominent Issues and Top 17 Prominent Strategies with additional issue-strategy links that can be used to aid event managers producing future major sport events. The following issue categories emerged as having had the highest impact on previous major sport events that participants had managed: timing, funding and knowledge management. In addition, participants used strategies from the following categories most frequently: other, formalized agreements and communication.