908 results for Discrete Events Simulation
Abstract:
At the mid-latitudes of Utopia Planitia (UP), Mars, a suite of spatially-associated landforms exhibits geomorphological traits that, on Earth, would be consistent with periglacial processes and the possible freeze-thaw cycling of water. The suite comprises small-scale polygonally-patterned ground, polygon-junction and -margin pits, and scalloped, rimless depressions. Typically, the landforms incise a dark-toned terrain that is thought to be ice-rich. Here, we investigate the dark-toned terrain using high-resolution images from HiRISE and near-infrared spectral data from OMEGA and CRISM. The terrain displays erosional characteristics consistent with a sedimentary nature, and its near-infrared spectra are characterised by a blue slope similar to that of weathered basaltic tephra. We also describe dark-toned and periglacially-modified volcanic terrain in the Kamchatka mountain range of eastern Russia. That terrain is characterised by weathered tephra interbedded with snow, ice-wedge polygons and near-surface excess ice. The excess ice forms in the pore space of the tephra as the result of snow-melt infiltration and subsequent in-situ freezing. Based on this possible analogue, we construct a three-stage mechanism that explains the possible ice-enrichment of a broad expanse of dark-toned terrain at the mid-latitudes of UP: (1) the dark-toned terrain accumulates and forms via the regional deposition of sediments sourced from explosive volcanism; (2) the volcanic sediments are blanketed by atmospherically-precipitated (H2O) snow, ice or an admixture of the two, either concurrently with the volcanic events or between discrete events; and (3) under the influence of high obliquity or explosive volcanism, boundary conditions tolerant of thaw evolve, and this in turn permits the migration, cycling and eventual formation of excess ice in the volcanic sediments. Over time, and through episodic iterations of this scenario, excess ice forms to decametres of depth. © 2015 Elsevier B.V. All rights reserved.
Abstract:
The Ampère Seamount, 600 km west of Gibraltar, is one of nine inactive volcanoes along a bent chain, the so-called Horseshoe Seamounts. All of them rise from an abyssal plain 4000 to 4800 m deep to within a few hundred metres of the sea surface, except two which nearly reach it: the Ampère massif on the southern flank of the group and the summit of the Gorringe Bank in the north. The horseshoe, serrated like a crown, opens towards Gibraltar and stands in the way of its outflow. These seamounts are part of the Azores-Gibraltar structure, which marks the boundary between two major tectonic plates, the Eurasian and the African. The submarine volcanism which formed the Horseshoe Seamounts belongs to the seafloor-spreading regime of the Mid-Atlantic Ridge. Volcanic activity peaked between 17 and 10 million years ago and terminated thereafter. The volcanoes consist of basalts and tuffs. Most of their flanks, and the abyssal plain around them, are covered by sediments of micro-organic origin. These sediments, and in particular their partial absence on the upper flanks, are circumstantial proof, a kind of diary, of the initial rise and subsequent subsidence of these seamounts by about 600 m. The horizons of erosion where the basalt substrate is laid bare indicate rise above sea level in the past. Since the Ampère summit lies at 60 m depth today, this volcano must once have been an island 500 m high. The stratification of the sediments covering the surrounding abyssal plain reveals discrete events of downslope suspension flows, called turbidites, separated by tens of thousands of years and perhaps induced by changes in climate conditions. The Ampère seamount, 4800 m high with a base diameter of 50 km, exceeds the size of the Mont Blanc massif. Its southern and eastern flanks are steep, with basalts cropping out, in parts as nearly vertical walls of some hundred metres. The west and north sides consist of terraces and plateaus covered with sediments at 140 m, 400 m, 2000 m, and 3500 m. The Horseshoe Seamount area is also remarkable as a kind of disturbed crossing of three major oceanic flow systems at different depths and directions, with forced upwelling and partial mixing of the water masses. Most prominent is the Mediterranean Outflow Water (MOW), with its higher temperature and salinity, between 900 and 1500 m depth. It enters the horseshoe unimpeded from the open eastern side but penetrates the seamount chain through its valleys on the west, thereafter diverging and crossing the entire Atlantic Ocean. Below the MOW is the North Atlantic Deep Water (NADW), flowing southward between 2000 and 3000 m depth; finally there is the Antarctic Bottom Water (AABW), flowing northward below the two other systems.
Abstract:
Agent-Based Modelling and simulation (ABM) is a rather new approach for studying complex systems with interacting autonomous agents that has lately undergone great growth in various fields such as biology, physics, social science, economics and business. Efforts to model and simulate the highly complex cement hydration process have been made over the past 40 years, with the aim of predicting the performance of concrete and designing innovative and enhanced cementitious materials. The ABM presented here, based on previous work, focuses on the early stages of cement hydration by modelling the physical-chemical processes at the particle level. The model treats cement hydration as a system evolving in time and 3D space, involving multiple diffusing and reacting species of spherical particles. Chemical reactions are simulated by adaptively selecting discrete stochastic simulation for the appropriate reactions whenever necessary. Interactions between particles are also considered. The model has been inspired by the reported cellular-automata approach, which provides detailed predictions of cement microstructure at the expense of significant computational difficulty. The ABM approach herein seeks to strike an optimal balance between accuracy and computational efficiency.
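The "discrete stochastic simulation" the abstract alludes to is in the spirit of Gillespie's stochastic simulation algorithm (SSA). Below is a minimal, self-contained sketch of one SSA step in Python; the two hydration-flavoured reactions and their rate constants are illustrative assumptions, not the model's actual chemistry.

import math
import random

def ssa_step(propensities, t, rng=random):
    # Gillespie SSA: draw the waiting time to the next reaction event and
    # select which reaction fires, proportionally to its propensity.
    a0 = sum(propensities)
    if a0 == 0.0:
        return None, t                        # no reaction can occur
    tau = -math.log(rng.random()) / a0        # exponential waiting time
    threshold = rng.random() * a0
    acc = 0.0
    for i, a in enumerate(propensities):
        acc += a
        if threshold < acc:
            return i, t + tau
    return len(propensities) - 1, t + tau     # guard against rounding

# Illustrative two-reaction system (hypothetical rate constants):
# R0: dissolution of cement grains, R1: precipitation of hydration product.
counts = {"grain": 1000, "ion": 0, "hydrate": 0}
k = [0.05, 0.01]
t = 0.0
for _ in range(10000):
    props = [k[0] * counts["grain"], k[1] * counts["ion"]]
    reaction, t = ssa_step(props, t)
    if reaction is None:
        break
    if reaction == 0:
        counts["grain"] -= 1; counts["ion"] += 1
    else:
        counts["ion"] -= 1; counts["hydrate"] += 1
print(t, counts)

In a full ABM, a step like this would be invoked adaptively only for the particles whose local chemistry requires stochastic treatment, which is where the claimed accuracy/efficiency balance comes from.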
Abstract:
Purpose – The purpose of this paper is to present a simulation-based evaluation method for the comparison of different organizational forms and software support levels in the field of supply chain management (SCM). Design/methodology/approach – Apart from widely known logistic performance indicators, the discrete event simulation model explicitly considers coordination cost as stemming from iterative administration procedures. Findings – The method is applied to an exemplary supply chain configuration considering various parameter settings. Curiously, additional coordination cost does not always result in improved logistic performance. Influence factor variations lead to different organizational recommendations. The results confirm the high importance of (up to now) disregarded dimensions when evaluating SCM concepts and IT tools. Research limitations/implications – The model is based on simplified product and network structures. Future research shall include more complex, real-world configurations. Practical implications – The developed method is designed for the identification of improvement potential when SCM software is employed. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant IT investment can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and the method provides a comprehensive tool for strategic IT decision making. Originality/value – Reviewed literature is mostly focused on the benefits of SCM software implementations. However, ERP system based supply chain coordination is still widespread industrial practice, but the associated coordination cost has not been addressed by researchers.
Abstract:
Transactional systems such as Enterprise Resource Planning (ERP) systems have been implemented widely, while analytical software like Supply Chain Management (SCM) add-ons is adopted less by manufacturing companies. Although significant benefits are reported stemming from SCM software implementations, companies are reluctant to invest in such systems. On the one hand this is due to the lack of methods that are able to detect benefits from the use of SCM software; on the other hand, the associated costs are not identified, detailed and quantified sufficiently.
Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant investment in IT can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and corresponding methods are comprehensive tools for strategic IT decision making. The purpose of this research is to provide evaluation methods that allow the comparison of different organizational forms and software support levels. The research begins with a comprehensive introduction dealing with the business environment that industrial networks are facing and concludes by highlighting the challenges for the supply chain software industry. Afterwards, the central terminology is addressed, focusing on organization theory, IT investment peculiarities and supply chain management software typology. The literature review classifies recent supply chain management research referring to organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management researchers carry out cost-benefit analyses of IT software implementations. The literature review reveals that the success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. The reviewed literature is mostly centred on the benefits of SCM software implementations. However, ERP system based supply chain coordination is still widespread industrial practice, and the associated coordination cost has not been addressed by researchers. Fundamentals of efficient organizational design are explained in detail as far as required for understanding the synthesis of different organizational forms. Several coordination schemes have been generated by varying the following design parameters: organizational structure, coordination mechanisms and software support. The different organizational proposals are evaluated using a heuristic approach and a simulation-based method. In both cases, the principles of organization theory are respected. A lack of performance is due to dependencies between activities which are not managed properly. Therefore, within the heuristic method, dependencies are classified and their intensity is measured based on contextual factors. Afterwards, the suitability of each organizational design element for the management of a specific dependency is determined. Finally, each organizational form is evaluated based on the contribution of the sum of its design elements to coordination benefit and to coordination cost. Coordination benefit refers to improvement in logistic performance; this is the core concept of most supply chain evaluation models. Unfortunately, the coordination cost which must be incurred to achieve benefits is usually not considered in detail. Iterative processes are costly when manually executed. This is the case when SCM software is not implemented and the ERP system is the only available coordination instrument.
The heuristic model provides a simplified procedure for the classification of dependencies, the quantification of influence factors and the systematic search for adequate organizational forms and IT support. Discrete event simulation is applied in the second evaluation model using the software package 'Plant Simulation'. On the one hand, logistic performance is measured by manufacturing, inventory and transportation cost and by penalties for lost sales. On the other hand, coordination cost is explicitly considered, taking into account iterative coordination cycles. The method is applied to an exemplary supply chain configuration considering various parameter settings. The simulation results confirm that, in most cases, benefit increases when coordination is intensified. However, in some situations where manual, iterative planning cycles are applied, additional coordination cost does not always lead to improved logistic performance. These unexpected results cannot be attributed to any particular parameter. The research confirms the great importance of hitherto disregarded dimensions when evaluating SCM concepts and IT tools. The heuristic method provides a quick, but only approximate, comparison of coordination efficiency for different organizational forms. In contrast, the more complex simulation method delivers detailed results, taking into consideration specific parameter settings of network context and organizational design.
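As a rough illustration of the evaluation logic described above (the function, parameter names and figures are invented for the example, not taken from the thesis), total cost splits into the logistic cost components measured in the simulation plus an explicitly counted coordination cost that grows with the number of manual, iterative planning cycles:

def evaluate(manufacturing, inventory, transport, lost_sales,
             planning_cycles, cost_per_cycle):
    # Logistic performance: the cost components measured in the simulation.
    logistic_cost = manufacturing + inventory + transport + lost_sales
    # Coordination cost: each manual, iterative planning cycle costs effort.
    coordination_cost = planning_cycles * cost_per_cycle
    return logistic_cost, coordination_cost, logistic_cost + coordination_cost

# ERP-only coordination: no SCM software investment, but many manual cycles.
erp_only = evaluate(100_000, 40_000, 25_000, 15_000,
                    planning_cycles=12, cost_per_cycle=2_000)
# SCM software: fewer iterations and (often) better logistic performance.
scm = evaluate(100_000, 32_000, 24_000, 8_000,
               planning_cycles=2, cost_per_cycle=2_000)
print(erp_only, scm)

Comparing the two tuples makes the thesis's central point visible: an ERP-only scheme can win or lose overall depending on how heavily iterative coordination weighs against the SCM software investment.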
Abstract:
Two sediment cores retrieved from the continental slope in the Benguela Upwelling System, GeoB 1706 (19°33.7'S, 11°10.5'E) and GeoB 1711 (23°18.9'S, 12°22.6'E), reveal striking variations in planktonic foraminiferal abundances during the last 160,000 years. These fluctuations are investigated to assess changes in the intensity and position of the upwelling centres off Namibia. Four species make up over 95% of the variation within the cores, and enable the record to be divided into episodes characterized by particular planktonic foraminiferal assemblages. The fossil assemblages acquire ecological significance when compared with modern-day assemblages and their relationship to the environment. The cold-water planktonic foraminifer Neogloboquadrina pachyderma sinistral [N. pachyderma (s)] dominates the modern-day coastal upwelling centres, while Neogloboquadrina pachyderma dextral and Globigerina bulloides characterize the fringes of the upwelling cells. Globorotalia inflata is representative of the offshore boundary between newly upwelled waters and the transitional, reduced nutrient levels of the subtropical waters. In the fossil record, episodes of high N. pachyderma (s) abundance are interpreted as evidence of increased upwelling intensity and the associated increase in nutrients. The N. pachyderma (s) record suggests temporal shifts in the intensity of upwelling, and in the corresponding trophic domains, that do not follow the typical glacial-interglacial pattern. Periods of high N. pachyderma (s) abundance describe rapid, discrete events dominating isotope stages 3 and 2. The timing of these events correlates with the temporal shifts of the Angola-Benguela Front (Jansen et al., 1997) situated to the north of the Walvis Ridge. The absence of high abundances of N. pachyderma (s) from the continental slope of the southern Cape Basin indicates that Southern Ocean surface water advection has not exerted a major influence on the Benguela Current System. The coincidence of increased upwelling intensity with the movement of the Angola-Benguela Front can be attributed mainly to changes in the strength and zonality of the trade wind system.
Abstract:
In this paper we propose a fast adaptive Importance Sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a starting value to estimate the optimal tilting parameter for the actual (large) buffer level; finally, the tilting parameter just found is used to estimate the overflow probability of interest. We recognize three distinct properties of the method which together explain why it works well; we conjecture that they hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
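The paper treats general queueing networks; as a single-node stand-in that still shows the three-stage structure (tilt for a small level, warm-start the tilt for the large level, then estimate), here is a Python sketch for the probability that a batch of n exponential workloads overflows a buffer level gamma. The closed-form cross-entropy update for exponential samples is standard; everything else (n, the levels, the sample sizes) is an arbitrary choice for the example, not the paper's algorithm.

import math
import random

def sum_of_exponentials(n, mean, rng):
    return sum(rng.expovariate(1.0 / mean) for _ in range(n))

def likelihood_ratio(s, n, u, v):
    # Joint density ratio f_u / f_v for n i.i.d. Exp samples summing to s.
    return (v / u) ** n * math.exp(-s * (1.0 / u - 1.0 / v))

def ce_update(n, u, v, gamma, samples, rng):
    # One cross-entropy step: returns the tilted per-component mean that
    # minimises the cross-entropy to the zero-variance density for gamma.
    num = den = 0.0
    for _ in range(samples):
        s = sum_of_exponentials(n, v, rng)
        if s >= gamma:
            w = likelihood_ratio(s, n, u, v)
            num += w * s
            den += w
    return num / (n * den) if den > 0.0 else v

rng = random.Random(42)
n, u = 10, 1.0                                      # ten Exp(1) workloads
gamma_small, gamma = 15.0, 25.0
v1 = ce_update(n, u, u, gamma_small, 10_000, rng)   # stage 1: small level
v2 = ce_update(n, u, v1, gamma, 10_000, rng)        # stage 2: warm start
# Stage 3: importance-sampling estimate of the overflow probability.
N = 100_000
est = sum(likelihood_ratio(s, n, u, v2)
          for s in (sum_of_exponentials(n, v2, rng) for _ in range(N))
          if s >= gamma) / N
print(v1, v2, est)

The warm start matters: sampling directly at the large level under the original parameter would yield almost no overflow events, so the stage-2 update would be built on noise.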
Abstract:
The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisation structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as subsequent academic and operational studies have shown that orders and/or inventories can exhibit significant variability even if customer demand and lead time are deterministic. This increase in the range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system they were intended to measure. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete event simulation software Simul8. During the simulation, the five SCOR Supply Chain Performance Attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared to established targets. Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) were made for three independent supply chain roles, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand relative to the standard deviation of the downstream demand. Factors employed to build the detailed model include: variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles, and operational settings were adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. The project makes its primary contribution to knowledge by measuring the impact of applying a representative performance measurement system on supply chain dynamics.
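As a minimal illustration of the measurement used here (the order-up-to policy, forecast window and demand process below are stand-ins, not the project's Simul8 model), the bullwhip ratio is the standard deviation of the orders a role places upstream divided by the standard deviation of the demand it sees downstream:

import random
import statistics

def bullwhip_ratio(periods=1000, lead_time=2, window=4, seed=1):
    rng = random.Random(seed)
    demand_hist, orders = [], []
    inventory, on_order = 50.0, 0.0
    for _ in range(periods):
        demand = max(0.0, rng.gauss(20.0, 4.0))    # noisy retail demand
        demand_hist.append(demand)
        inventory -= demand
        forecast = statistics.fmean(demand_hist[-window:])
        # Order-up-to policy: cover forecast demand over lead time plus review.
        target = forecast * (lead_time + 1)
        order = max(0.0, target - inventory - on_order)
        orders.append(order)
        on_order += order
        # Crude pipeline: a constant fraction of outstanding orders arrives.
        arriving = on_order / lead_time
        on_order -= arriving
        inventory += arriving
    return statistics.stdev(orders) / statistics.stdev(demand_hist)

print(bullwhip_ratio())   # a value above 1 indicates amplification

In the project's model the same ratio is computed per role while SCOR-driven operational adjustments perturb the policy, which is exactly how a performance measurement system can itself inject dynamics.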
Abstract:
Presents a simulation study of the costing of police custody operations at a UK police force. The custody operation incorporates the arrest, booking-in, interview, detention and court appearance activities. The Activity Based Costing (ABC) approach is used as a framework to show how costs are generated by the three 'drivers' of cost, activity and resource. These relate, respectively, to the design efficiency of the process, the timing and mix of demand on the process, and the cost of resources used to undertake the process. The use of discrete-event simulation allows the incorporation of dynamic (time-dependent) and stochastic (variability) elements in the cost analysis. This enables both the amount and the timing of capacity use and cost generation to be established. The concept of committed and flexible resources directs management decisions to the redeployment of unused capacity or, alternatively, the identification of additional capacity requirements.
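A minimal sketch of the modelling idea, written with the simpy package (the arrival rate, activity times and hourly resource cost are invented placeholders, not figures from the study): because a discrete-event simulation records exactly when capacity is in use, cost can be traced to activities as ABC requires.

import random
import simpy

COST_PER_OFFICER_HOUR = 30.0            # hypothetical resource cost
activity_cost = {"booking": 0.0, "interview": 0.0}

def detainee(env, officers, rng):
    for activity, mean_hours in (("booking", 0.5), ("interview", 1.0)):
        with officers.request() as req:
            yield req                    # queue for a custody officer
            duration = rng.expovariate(1.0 / mean_hours)
            yield env.timeout(duration)  # officer busy on this activity
            activity_cost[activity] += duration * COST_PER_OFFICER_HOUR

def arrivals(env, officers, rng):
    while True:
        yield env.timeout(rng.expovariate(1.0))   # roughly 1 arrest per hour
        env.process(detainee(env, officers, rng))

rng = random.Random(7)
env = simpy.Environment()
officers = simpy.Resource(env, capacity=2)
env.process(arrivals(env, officers, rng))
env.run(until=24 * 7)                    # one simulated week, in hours
print(activity_cost)

Extending the dictionary to per-hour buckets would expose the timing of cost generation, which is the element a static ABC spreadsheet cannot capture.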
Abstract:
A local area network that can support both voice and data packets offers economic advantages due to the use of only a single network for both types of traffic, offers greater flexibility in meeting changing user demands, and also enables efficient use to be made of the transmission capacity. The latter aspect is very important in local broadcast networks where capacity is a scarce resource, for example mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. In such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, due to the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice. Rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance of the protocols has been evaluated using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency dependent on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved, firstly, by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with packet transmission to increase throughput. The second protocol uses variable-length packets to reduce the contention time between transmissions on a single channel. A third protocol developed is based on contention for explicit reservations: once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station within the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
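The deadline-based measure proposed in place of mean delay is simple to compute from a simulation trace; a sketch (the packet sizes and the 50 ms bound are arbitrary example values):

def fraction_of_bits_in_time(packets, bound):
    # packets: iterable of (delay_seconds, bits) pairs from the trace.
    # Returns the share of voice bits delivered within the delay bound;
    # late packets count as lost for voice purposes.
    delivered = sum(bits for delay, bits in packets if delay <= bound)
    total = sum(bits for _, bits in packets)
    return delivered / total if total else 0.0

trace = [(0.012, 512), (0.034, 512), (0.071, 512), (0.018, 512)]
print(fraction_of_bits_in_time(trace, bound=0.050))   # prints 0.75

Unlike an average, this metric penalises a protocol whose delay distribution has a long tail even when its mean delay looks acceptable.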
Abstract:
The process of manufacturing system design frequently includes modelling, and usually this means applying a technique such as discrete event simulation (DES). However, the computer tools currently available to apply this technique enable only a superficial representation of the people that operate within the systems. This is a serious limitation because the performance of people remains central to the competitiveness of many manufacturing enterprises. Therefore, this paper explores the use of probability density functions to represent the variation of worker activity times within DES models.
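The idea reduces to replacing a fixed activity time with a draw from a fitted density inside the DES model; a sketch (the lognormal shape and its parameters are assumptions for illustration, not fitted values from the paper):

import random

rng = random.Random(3)

def worker_activity_time(mean_log=1.0, sigma_log=0.35):
    # A lognormal captures the skewed, strictly positive spread typically
    # observed in manual task times; a constant would hide this variation.
    return rng.lognormvariate(mean_log, sigma_log)

# Inside a DES event routine, the sampled time replaces a constant duration:
samples = [worker_activity_time() for _ in range(5)]
print(samples)   # varying task durations for the same nominal activity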
Abstract:
Discrete event simulation is a popular aid for manufacturing system design; however, in application this technique can sometimes be unnecessarily complex. This paper is concerned with applying an alternative technique to manufacturing system design which may well provide an efficient form of rough-cut analysis. This technique is System Dynamics, and the work described in this paper has set about incorporating the principles of this technique into a computer-based modelling tool that is tailored to manufacturing system design. The paper is structured to first explore the principles of System Dynamics and how they differ from Discrete Event Simulation. The opportunity for System Dynamics is then explored, and this leads to defining the capabilities that a suitable tool would need. This specification is then transformed into a computer modelling tool, which is assessed by applying it to model an engine production facility.
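To make the contrast with DES concrete, here is a minimal System Dynamics-style stock-and-flow model integrated with Euler steps (the production facility, delays and targets are invented for the illustration): state changes continuously through rates rather than at discrete events.

def simulate_sd(steps=200, dt=0.25):
    inventory = 100.0          # stock: finished engines
    production = 20.0          # flow in: units per period
    demand = 20.0              # flow out: units per period
    target_inventory = 100.0
    adjust_time = 4.0          # periods to correct an inventory gap
    history = []
    for step in range(steps):
        if step == 40:
            demand = 25.0      # step change in demand excites the feedback
        gap = target_inventory - inventory
        # First-order feedback: production chases demand plus a correction.
        production += ((demand + gap / adjust_time) - production) * dt / 2.0
        inventory += (production - demand) * dt
        history.append((step * dt, inventory, production))
    return history

for t, inv, prod in simulate_sd()[::40]:
    print(f"t={t:6.1f}  inventory={inv:7.2f}  production={prod:6.2f}")

A handful of such aggregate stocks and feedback loops is often enough for rough-cut design questions that would require a far more detailed entity-level DES model.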
Abstract:
Manufacturing system design is an ongoing activity within industry. Modelling tools based on Discrete Event Simulation are often used by practitioners during this design cycle. However, such tools do not adequately model the behaviour of 'direct' workers in manufacturing environments. There is an important need to expand the capability of modelling to include the relationships between human-centred factors (demography, attitudes, beliefs, etc.), their working environment (physical and organizational), and the subsequent performance of workers in terms of productive routines. Therefore, this paper describes research that has formed a pilot modelling methodology, an important first step in providing such a capability.
Abstract:
The computer simulation of manufacturing systems is commonly carried out using discrete event simulation (DES). Indeed, there appears to be a lack of applications of continuous simulation methods, particularly system dynamics (SD), despite evidence that this technique is suitable for industrial modelling. This paper investigates whether this is due to a decline in the general popularity of SD, or whether the modelling of manufacturing systems represents a missed opportunity for SD. On this basis, the paper first reviews the concept of SD and fully describes the modelling technique. A survey of the published applications of SD in the 1990s is then made by developing and using a structured classification approach. From this review, observations are made about the application of the SD method and opportunities for future research are suggested.
Abstract:
Purpose: Short product life cycles and/or mass customization necessitate reconfiguration of the operational enablers of a supply chain (SC) from time to time in order to harness high levels of performance. The purpose of this paper is to identify the key operational enablers under a stochastic environment on which practitioners should focus while reconfiguring a SC network. Design/methodology/approach: The paper uses an interpretive structural modeling (ISM) approach that yields a hierarchy-based model and the mutual relationships among the enablers. The contextual relationships needed for developing the structural self-interaction matrix (SSIM) among the various enablers are realized by conducting experiments through simulation of a hypothetical SC network. Findings: The research identifies various operational enablers having a high driving power towards the assumed performance measures. These enablers require maximum attention and are of strategic importance while reconfiguring the SC. Practical implications: ISM provides a useful tool for SC managers to strategically adopt and focus on the key enablers which have comparatively greater potential to enhance SC performance under given operational settings. Originality/value: The present research recognizes the importance of SC flexibility under the premise of reconfiguration of the operational units in order to harness high SC performance. Given the digraph resulting from ISM, the decision maker can focus on the key enablers for effective reconfiguration. The study is one of the first efforts to develop contextual relations among operational enablers for the SSIM matrix through integration of discrete event simulation with ISM. © Emerald Group Publishing Limited.
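The ISM computation behind the 'driving power' finding is mechanical once the SSIM has been translated into a binary adjacency matrix; a sketch with a hypothetical 4-enabler matrix (the enabler names and entries are invented for illustration):

def transitive_closure(m):
    # Warshall's algorithm: enabler i reaches j directly or through others.
    n = len(m)
    r = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Hypothetical SSIM-derived adjacency: m[i][j] = 1 if enabler i drives j.
enablers = ["capacity flexibility", "routing flexibility",
            "buffer size", "dispatch rule"]
m = [[1, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
reach = transitive_closure(m)
for name, row, col in zip(enablers, reach, zip(*reach)):
    driving, dependence = sum(row), sum(col)
    print(f"{name:22s} driving={driving} dependence={dependence}")

Row sums of the reachability matrix give each enabler's driving power and column sums its dependence; enablers with high driving power and low dependence sit at the base of the ISM hierarchy and are the ones flagged as strategically important.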