Abstract:
The in vivo faecal egg count reduction test (FECRT) is the most commonly used test to detect anthelmintic resistance (AR) in gastrointestinal nematodes (GIN) of ruminants in pasture-based systems. However, there are several variations on the method, some more appropriate than others in specific circumstances. While in some cases labour and time can be saved by collecting only post-drench faecal worm egg counts (FEC) of treatment and control groups, or pre- and post-drench FEC of a treatment group with no controls, there are circumstances in which pre- and post-drench FEC of an untreated control group as well as of the treatment groups are necessary. Computer simulation techniques were used to determine the most appropriate of several methods for calculating AR when there is continuing larval development during the testing period, as often occurs when anthelmintic treatments against genera of GIN with high biotic potential or high re-infection rates, such as Haemonchus contortus of sheep and Cooperia punctata of cattle, are less than 100% efficacious. Three field FECRT experimental designs were investigated: (I) post-drench FEC of treatment and control groups, (II) pre- and post-drench FEC of a treatment group only and (III) pre- and post-drench FEC of treatment and control groups. To investigate the performance of methods of indicating AR for each of these designs, simulated animal FEC were generated from negative binomial distributions, with subsequent sampling from binomial distributions to account for drench effect, with varying parameters for worm burden, larval development and drench resistance. Calculations of percent reductions and confidence limits were based on those of the Standing Committee for Agriculture (SCA) guidelines. For the two field methods with pre-drench FEC, confidence limits were also determined from cumulative inverse Beta distributions of FEC, for eggs per gram (epg) and the number of eggs counted at detection levels of 50 and 25. Two rules for determining AR were also assessed: (1) %reduction (%R) < 95% and lower confidence limit < 90%; and (2) upper confidence limit < 95%. For each combination of worm burden, larval development and drench resistance parameters, 1000 simulations were run to determine the number of times the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. When continuing larval development occurs during the testing period of the FECRT, the simulations showed that AR should be calculated from pre- and post-drench worm egg counts of an untreated control group as well as of the treatment group. If the widely used resistance rule 1 is used to assess resistance, rule 2 should also be applied, especially when %R is in the range 90 to 95% and resistance is suspected.
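A minimal sketch of the simulation core described above, assuming negative binomial pre-drench counts (mean mu, dispersion k), a multiplicative larval-rise factor, and binomial thinning for the drench effect; the design-III control-adjusted reduction formula and all parameter values are illustrative stand-ins, not the SCA-specified procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def nb_counts(mean, k, n):
    # Negative binomial with mean `mean` and dispersion k
    # (numpy parametrisation: n = k, p = k / (k + mean)).
    return rng.negative_binomial(k, k / (k + mean), size=n)

def fecrt_design_iii(mu=500, k=0.7, efficacy=0.90, larval_rise=1.5,
                     n_animals=10, n_sims=1000):
    """Design III: pre- and post-drench FEC of treated and control groups.
    `larval_rise` mimics continuing larval development during the test."""
    pct_r = np.empty(n_sims)
    for i in range(n_sims):
        t_pre = nb_counts(mu, k, n_animals)
        c_pre = nb_counts(mu, k, n_animals)
        # Controls rise with larval development; in treated animals only
        # the resistant fraction of the (risen) burden survives the drench.
        c_post = np.round(c_pre * larval_rise).astype(int)
        t_post = rng.binomial(np.round(t_pre * larval_rise).astype(int),
                              1.0 - efficacy)
        # Control-adjusted percent reduction (Dash-type formula).
        pct_r[i] = 100 * (1 - (t_post.mean() / t_pre.mean())
                              / (c_post.mean() / c_pre.mean()))
    return pct_r

r = fecrt_design_iii()
print(f"median %R = {np.median(r):.1f}; %R < 95 in {100 * np.mean(r < 95):.0f}% of runs")
```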
Abstract:
The use of maize simulation models to determine the optimum plant population for rainfed environments allows plant populations to be evaluated over multiple years and locations at a lower cost than traditional field experimentation. However, the APSIM maize model that has been used to conduct some of these 'virtual' experiments assumes that the maximum rate of soil water extraction by the crop root system is constant across plant populations. This untested assumption may cause grain yield to be overestimated at lower plant populations. A field experiment was conducted to determine whether maximum rates of water extraction vary with plant population, and the maximum rate of soil water extraction was estimated for three plant populations (2.4, 3.5 and 5.5 plants m^-2) under water-limited conditions. Maximum soil water extraction rates in the field experiment decreased linearly with plant population, and no difference was detected between plant populations for the crop lower limit of soil water extraction. Re-analysis of previous maize simulation experiments demonstrated that the use of inappropriately high extraction-rate parameters at low plant populations inflated predictions of grain yield, and could cause erroneous recommendations to be made for plant population. The results demonstrate the importance of validating crop simulation models across the range of intended treatments.
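To see why a population-independent extraction rate inflates simulated uptake at low populations, here is a toy version of APSIM-style layer extraction, where daily supply is kl times the plant-available water above the crop lower limit; the kl values and the linear scaling with population are invented for illustration.

```python
def cumulative_extraction(kl, paw0=150.0, days=30):
    """Toy APSIM-style extraction: each day the crop can take kl * PAW
    from the layer, where PAW is water above the crop lower limit (mm)."""
    paw, taken = paw0, 0.0
    for _ in range(days):
        supply = kl * paw
        paw -= supply
        taken += supply
    return taken

kl_fixed = 0.08  # population-independent kl: the assumption under test
for plants_m2 in (2.4, 3.5, 5.5):
    kl_scaled = kl_fixed * plants_m2 / 5.5  # kl declining with population
    print(f"{plants_m2} plants/m2: fixed kl -> {cumulative_extraction(kl_fixed):5.1f} mm, "
          f"scaled kl -> {cumulative_extraction(kl_scaled):5.1f} mm")
```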
Abstract:
Farming systems frameworks such as the Agricultural Production Systems simulator (APSIM) represent fluxes through the soil, plant and atmosphere of the system well, but do not generally consider the biotic constraints that operate within the system. We designed a method that allows population models built in DYMEX to interact with APSIM. The simulator engine component of the DYMEX population-modelling platform was wrapped within an APSIM module, allowing it to get and set variable values in other APSIM models running in the simulation. A rust model developed in DYMEX is used to demonstrate how a developing rust population reduces the crop's green leaf area. The success of the linking is seen in the interaction of the two models: changes in the rust population on the crop's leaves feed back to the APSIM crop, modifying the growth and development of its leaf area. Linking population models that simulate pest populations with biophysical models that simulate crop growth and development increases the complexity of the simulation, but provides a tool to investigate biotic constraints within farming systems and moves APSIM further towards being an agro-ecological framework.
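The get/set coupling pattern can be mimicked with a stripped-down stand-in for the APSIM-DYMEX link: at each daily step the pest model reads the crop's green leaf area, updates its population, and writes the damage back. Class names, variables and rate constants here are invented for illustration, not the APSIM or DYMEX APIs.

```python
class CropModel:
    """Stand-in for an APSIM crop exposing a green leaf area variable."""
    def __init__(self):
        self.lai = 0.5
    def step(self):
        self.lai += 0.12 * self.lai * (1 - self.lai / 6.0)  # logistic growth

class RustModel:
    """Stand-in for a DYMEX population model wrapped as a module."""
    def __init__(self):
        self.pop = 1.0
    def step(self, host_lai):
        self.pop *= 1.0 + 0.25 * min(host_lai / 6.0, 1.0)  # needs green leaf
        return 0.002 * self.pop  # leaf area destroyed this step

crop, rust = CropModel(), RustModel()
for day in range(60):
    crop.step()
    damage = rust.step(crop.lai)              # "get" LAI from the crop model
    crop.lai = max(crop.lai - damage, 0.0)    # "set" the damaged LAI back
print(f"day 60: LAI = {crop.lai:.2f}, rust population = {rust.pop:.0f}")
```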
Abstract:
Assessing the impacts of climate variability on agricultural productivity at regional, national or global scale is essential for defining adaptation and mitigation strategies. In this study we explore the potential changes in spring wheat yields at Swift Current and Melfort, Canada, for different sowing windows under projected climate scenarios (the representative concentration pathways RCP4.5 and RCP8.5). First, the APSIM model was calibrated and evaluated at the study sites using data from long-term experimental field plots. Then, the impacts of changes in sowing date on final yield were assessed over the 2030-2099 period against a 1990-2009 baseline period of observed yield data, assuming that other crop management practices remained unchanged. Results showed that the performance of APSIM was satisfactory, with an index of agreement of 0.80, an R2 of 0.54, and a mean absolute error (MAE) and root mean square error (RMSE) of 529 kg/ha and 1023 kg/ha, respectively (MAE = 476 kg/ha and RMSE = 684 kg/ha in the calibration phase). Under the projected climate conditions, a general trend of yield loss was observed regardless of the sowing window, ranging from -24% to -94% depending on the site and the RCP, with noticeable losses during the 2060s and beyond (increasing CO2 effects being excluded). The smallest yield losses were obtained with the earliest possible sowing date (mid-April) under the projected future climate, suggesting that this option might be explored for mitigating possible adverse impacts of climate variability. Our findings could therefore serve as a basis for using APSIM as a decision support tool for adaptation/mitigation options under potential climate variability within Western Canada.
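The goodness-of-fit statistics quoted above (index of agreement, R2, MAE, RMSE) are standard and easy to recompute from paired observed and simulated yields; a minimal sketch with invented numbers:

```python
import numpy as np

def fit_stats(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    # Willmott's index of agreement.
    d = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return {"d": d, "R2": r2, "MAE": mae, "RMSE": rmse}

# Observed vs simulated yields (kg/ha), invented numbers:
print(fit_stats([2100, 3400, 1800, 2900], [2300, 3100, 2000, 3050]))
```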
Abstract:
Much of our understanding and management of ecological processes requires knowledge of the distribution and abundance of species. Reliable abundance or density estimates are essential for managing both threatened and invasive populations, yet are often challenging to obtain. Recent and emerging technological advances, particularly in unmanned aerial vehicles (UAVs), provide exciting opportunities to overcome these challenges in ecological surveillance. UAVs can provide automated, cost-effective surveillance and offer repeat surveys for pest incursions at an invasion front. They can capitalise on manoeuvrability and advanced imagery options to detect species that are cryptic due to behaviour, life history or inaccessible habitat. UAVs may also cause less disturbance, in magnitude and duration, for sensitive fauna than other survey methods such as transect counting by humans or sniffer dogs. The surveillance approach depends upon the particular ecological context and the objective. For example, animal, plant and microbial target species differ in their movement, spread and observability. Lag-times may exist between a pest species' presence at a site and its detectability, prompting a need for repeat surveys. Operationally, however, the frequency and coverage of UAV surveys may be limited by financial and other constraints, leading to errors in estimating species occurrence or density. We use simulation modelling to investigate how movement ecology should influence fine-scale decisions regarding ecological surveillance using UAVs. Movement and dispersal parameter choices allow contrasts between locally mobile but slow-dispersing populations, and species that are locally more static but invasive at the landscape scale. We find that low and slow UAV flights may offer the best monitoring strategy to predict local population densities in transects, but that the consequent reduction in overall area sampled may sacrifice the ability to reliably predict regional population density. Alternative flight plans may perform better, but this is also dependent on movement ecology and the magnitude of relative detection errors for different flight choices. Simulated investigations such as this will become increasingly useful to reveal how the spatio-temporal extent and resolution of UAV monitoring should be adjusted to reduce observation errors and thus provide better population estimates, maximising the efficacy and efficiency of unmanned aerial surveys.
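As a flavour of the kind of simulation involved, the sketch below drops animals in a square region, lets them make a small random movement, and surveys a few UAV strip transects with imperfect detection; every parameter (density, strip width, detection probability) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def uav_strip_survey(n_animals=200, side=1000.0, strip_w=20.0, n_strips=5,
                     p_detect=0.7, move_sd=5.0):
    """Toy strip-transect survey of a side x side region (m)."""
    xy = rng.uniform(0, side, size=(n_animals, 2))
    xy = np.clip(xy + rng.normal(0, move_sd, xy.shape), 0, side)  # movement
    edges = np.linspace(0, side - strip_w, n_strips)  # strip left edges
    in_strip = np.zeros(n_animals, dtype=bool)
    for x0 in edges:
        in_strip |= (xy[:, 0] >= x0) & (xy[:, 0] < x0 + strip_w)
    detected = in_strip & (rng.random(n_animals) < p_detect)
    sampled_area = n_strips * strip_w * side
    est_density = detected.sum() / (sampled_area * p_detect)  # per m^2
    return est_density, n_animals / side**2

est, true = uav_strip_survey()
print(f"estimated density {est:.2e} vs true {true:.2e} animals per m^2")
```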
Abstract:
The induction motor is a typical example of a multi-domain, non-linear, high-order dynamic system. For speed control, a three-phase induction motor is usually modelled as a d–q model in which linearity is assumed and non-idealities are ignored. This approximation of the physical characteristics yields simulated behaviour that departs from the machine's natural behaviour. This paper proposes a bond graph model of an induction motor that can incorporate the non-linearities and non-idealities, thereby resembling the physical system more closely. The model is validated by applying the linearity and ideality constraints, which shows that the conventional 'abc' model is a special case of the proposed generalised model.
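For reference, the conventional linear d–q model that the paper generalises can be written, in a reference frame rotating at speed ω and with the (squirrel-cage) rotor windings short-circuited, as:

```latex
\begin{aligned}
v_{ds} &= R_s i_{ds} + \frac{d\lambda_{ds}}{dt} - \omega\,\lambda_{qs}, &
v_{qs} &= R_s i_{qs} + \frac{d\lambda_{qs}}{dt} + \omega\,\lambda_{ds},\\
0 &= R_r i_{dr} + \frac{d\lambda_{dr}}{dt} - (\omega-\omega_r)\,\lambda_{qr}, &
0 &= R_r i_{qr} + \frac{d\lambda_{qr}}{dt} + (\omega-\omega_r)\,\lambda_{dr},
\end{aligned}
```

where the flux linkages λ are linear in the currents through constant inductances; it is precisely this linearity (constant parameters, ideal magnetics) that the proposed bond graph model relaxes.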
Abstract:
In this study we analyze how the ion concentrations in forest soil solution are determined by hydrological and biogeochemical processes. A dynamic model, ACIDIC, was developed, including processes common to dynamic soil acidification models. The model treats up to eight interacting layers and simulates soil hydrology, transpiration, root water and nutrient uptake, cation exchange, dissolution and reactions of Al hydroxides in solution, and the formation of carbonic acid and its dissociation products. It also allows the simultaneous use of preferential and matrix flow paths, enabling throughfall water to enter the deeper soil layers in macropores without first reacting with the upper layers. Three different combinations of routing the throughfall water via macro- and micropores through the soil profile are presented. The large vertical gradient in the observed total charge was simulated successfully. According to the simulations, the gradient is mostly caused by differences in the intensity of water uptake, sulfate adsorption and organic anion retention at the various depths. The temporal variations in Ca and Mg concentrations were simulated fairly well in all soil layers. For H+, Al and K there was much more variation in the observed than in the simulated concentrations. Flow in macropores is a possible explanation for the apparent disequilibrium of the cation exchange for H+ and K, as the solution H+ and K concentrations have steep vertical gradients in soil. The amount of exchangeable H+ increased in the O and E horizons and decreased in the Bs1 and Bs2 horizons, the net change in the whole soil profile being a decrease. A large part of the decrease of exchangeable H+ in the illuvial B horizon was caused by sulfate adsorption. The model produces soil water amounts and solution ion concentrations that are comparable to the measured values, and it can be used in both hydrological and chemical studies of soils.
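One of the listed processes, carbonic acid formation and dissociation, makes a compact worked example: for a solution equilibrated with soil-air CO2 and no other acids or bases, charge balance gives [H+] = [HCO3-], so [H+]^2 = K1·KH·pCO2. The constants below are the usual approximate 25 °C values; the pCO2 figures are illustrative.

```python
import math

K_H = 10 ** -1.47   # Henry's constant for CO2, mol L^-1 atm^-1 (approx., 25 C)
K_1 = 10 ** -6.35   # first dissociation constant of carbonic acid (approx., 25 C)

def ph_from_pco2(pco2_atm):
    # Pure CO2-water system: [H+] = [HCO3-], hence [H+]^2 = K1 * K_H * pCO2.
    h = math.sqrt(K_1 * K_H * pco2_atm)
    return -math.log10(h)

for pco2 in (0.0004, 0.01):   # ambient air vs CO2-enriched soil air
    print(f"pCO2 = {pco2} atm -> pH = {ph_from_pco2(pco2):.2f}")
```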
Abstract:
The Jansen mechanism is a one-degree-of-freedom, planar, 12-link leg mechanism that can be used in mobile robotic applications and in gait analysis. This paper presents the kinematics and dynamics of the Jansen leg mechanism. The forward kinematics, accomplished using the circle-intersection method, determines the trajectories of various points on the mechanism in the chassis (stationary link) reference frame. From the foot point trajectory, the step length is shown to vary linearly, while the step height varies non-linearly, with change in crank radius. A dynamic model for the Jansen leg mechanism is proposed using the bond graph approach with modulated multiport transformers. For a given ground reaction force pattern and crank angular speed, this model helps determine the motor torque profile as well as the link and joint stresses. The model can therefore be used to rate the actuator torque and to design the hardware and controller for such a system. The kinematics of the mechanism can also be obtained from this dynamic model. The proposed model is thus a useful tool for analysis and design of systems based on the Jansen leg mechanism.
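The circle-intersection step at the heart of the forward kinematics is easy to state: each unknown joint lies at one of the two intersections of circles centred on already-known joints, with radii equal to the connecting link lengths. A generic sketch (not the authors' code):

```python
import math

def circle_intersection(p0, r0, p1, r1, sign=+1):
    """One intersection point of circles (p0, r0) and (p1, r1);
    `sign` selects which of the two solutions."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        raise ValueError("circles do not intersect")
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # p0 -> chord midpoint distance
    h = math.sqrt(max(r0**2 - a**2, 0.0))  # half chord length
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    return (xm + sign * h * (y1 - y0) / d,
            ym - sign * h * (x1 - x0) / d)

# A joint on links of length 5 from pivots at (0,0) and (6,0):
print(circle_intersection((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))      # (3.0, -4.0)
print(circle_intersection((0.0, 0.0), 5.0, (6.0, 0.0), 5.0, -1))  # (3.0, 4.0)
```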
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated when estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the catch sorted. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg; the number of fish species per trawl ranged from 60 to 138, and the number of invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample, or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species recorded in each catch as the percentage of the catch sorted increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weight) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and just under 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample, or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
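In outline, the simulation the abstract describes can be re-created by drawing a catch from a strongly skewed species-abundance distribution, splitting it into roughly 10 kg subsample units, and tracking the percentage of the catch's species list recovered as more units are sorted; the abundance structure below is invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def accumulation(n_species=100, n_individuals=20000, n_units=30):
    # Skewed abundances: most species rare, a few very abundant.
    w = rng.lognormal(mean=0.0, sigma=2.0, size=n_species)
    catch = rng.choice(n_species, size=n_individuals, p=w / w.sum())
    rng.shuffle(catch)
    units = np.array_split(catch, n_units)   # ~equal subsample units
    total = len(set(catch.tolist()))         # species actually in the catch
    seen = set()
    for i, u in enumerate(units, 1):
        seen.update(u.tolist())
        yield 100 * i / n_units, 100 * len(seen) / total

for frac, pct in accumulation():
    if round(frac) in (10, 50, 100):
        print(f"sorted {frac:3.0f}% of catch -> {pct:3.0f}% of species recorded")
```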
Abstract:
The mobile cloud computing model promises to address the resource limitations of mobile devices, but effectively implementing this model is difficult. Previous work on mobile cloud computing has required the user to have a continuous, high-quality connection to the cloud infrastructure. This is undesirable and possibly infeasible: the energy required on the mobile device to maintain a connection and transfer sizeable amounts of data is large, and the bandwidth tends to be variable and low on cellular networks. The cloud deployment itself also needs to allocate scalable resources to the user efficiently. In this paper, we formulate best practices for efficiently managing the resources required for the mobile cloud model, namely energy, bandwidth and cloud computing resources. These practices can be realised with our mobile cloud middleware project, featuring the Cloud Personal Assistant (CPA). We compare this with other approaches in the area to highlight the importance of minimising the usage of these resources and thereby ensuring successful adoption of the model by end users. Based on results from experiments performed with mobile devices, we develop a no-overhead decision model for task and data offloading to a user's CPA, which provides efficient management of mobile cloud resources.
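The flavour of the offloading decision can be shown with a back-of-envelope energy comparison: offload a task to the CPA only when the radio energy needed to ship its data is lower than the CPU energy needed to run it locally. All constants below are invented device parameters, and the model in the paper is richer (it also weighs bandwidth variability and cloud resources).

```python
def should_offload(cycles, data_mb, bandwidth_mbps,
                   cpu_j_per_gcycle=0.9, radio_watts=1.3, overhead_j=0.5):
    """Toy offloading rule: compare local CPU energy with transfer energy.
    All constants are illustrative, not measured device values."""
    e_local = cycles / 1e9 * cpu_j_per_gcycle
    transfer_s = data_mb * 8 / bandwidth_mbps
    e_offload = radio_watts * transfer_s + overhead_j
    return e_offload < e_local

# Heavy computation, small payload: offload. Light task, big payload: don't.
print(should_offload(cycles=20e9, data_mb=1, bandwidth_mbps=10))   # True
print(should_offload(cycles=1e9,  data_mb=20, bandwidth_mbps=5))   # False
```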
Abstract:
Summary
1: Managing populations of predators and their prey to achieve conservation or resource management goals is usually technically challenging and frequently socially controversial. This is true even in the simplest ecosystems but can be made much worse when predator–prey relationships are influenced by complex interactions, such as biological invasions, population trends or animal movements.
2: Lough Neagh in Northern Ireland is a European stronghold for pollan Coregonus autumnalis, a coregonine fish, and for river lamprey Lampetra fluviatilis, which feeds parasitically as an adult. Both species are of high conservation importance. Lampreys are known to consume pollan, but detailed knowledge of their interactions is scant. While pollan is well known to be a landlocked species in Ireland, the life cycle of the normally anadromous river lamprey in Lough Neagh has been unclear. The Lough is also a highly perturbed ecosystem, supporting several invasive, non-native fish species that have the potential to influence lamprey–pollan interactions.
3: We applied stable isotope techniques to resolve both the movement patterns of lamprey and trophic interactions in this complex community. Recognizing that stable isotope studies are often hampered by high levels of variability and uncertainty in the systems of interest, we employed novel Bayesian mixing models, which incorporate that variability and uncertainty (a minimal deterministic sketch of the underlying mixing calculation follows this abstract).
4: Stable isotope analyses identified trout Salmo trutta and non-native bream Abramis brama as the main items in lamprey diet. Pollan only represented a major food source for lamprey between May and July.
5: Stable isotope ratios of carbon in tissues from 71 adult lamprey showed no evidence of marine carbon sources, strongly suggesting that Lough Neagh is host to a highly unusual, non-anadromous freshwater population. This finding marks out the Lough's lamprey population as of particular scientific interest and enhances the conservation significance of this feature of the Lough.
6: Synthesis and applications. Our Bayesian isotopic mixing models illustrate an unusual pattern of animal movement, enhancing conservation interest in an already threatened population. We have also revealed a complex relationship between lamprey and their food species that is suggestive of hyperpredation, whereby non-native species may sustain high lamprey populations that may in turn be detrimental to native pollan. Long-term conservation of lamprey and pollan in this system is likely to require management intervention, but in light of this exceptional complexity, no simple management options are currently supported. Conservation plans will require better characterization of population-level interactions and simulation modelling of interventions. More generally, our study demonstrates the importance of considering a full range of possible trophic interactions, particularly in complex ecosystems, and highlights Bayesian isotopic mixing models as powerful tools in resolving trophic relationships.
Key-words: Bayesian, conservation dilemma, Coregonus autumnalis, hyperpredation, Lampetra fluviatilis, pollan, potamodromous, river lamprey, stable isotope analysis in R, stable isotope
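For readers unfamiliar with mixing models, the deterministic two-source case that the Bayesian models generalise (by adding priors and propagating variability) is a one-line linear calculation; the δ13C values and trophic shift below are invented.

```python
def two_source_mixing(d_consumer, d_src1, d_src2, trophic_shift=0.0):
    """Proportion of source 1 in the diet from one isotope tracer,
    assuming linear mixing: d_consumer - shift = p*d1 + (1-p)*d2."""
    d = d_consumer - trophic_shift
    return (d - d_src2) / (d_src1 - d_src2)

# Invented delta-13C values (per mil): consumer tissue vs two candidate prey.
p = two_source_mixing(d_consumer=-26.0, d_src1=-28.0, d_src2=-22.0,
                      trophic_shift=0.5)
print(f"estimated diet proportion of source 1: {p:.2f}")  # 0.75
```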
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lesort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of "synchronized flow", Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow '99, Springer, Berlin, 2000, pp. 383–388] highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938], whereas the microscopic part is a Follow-the-Leader type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model we define precisely the translation of boundary conditions at the interfaces, and for the second we explain the synchronization processes. Furthermore, through numerical simulations we show that wave propagation is not disturbed and mass is accurately conserved when passing from one traffic representation to the other.
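On the microscopic side, the simplest linear member of the Gazis–Herman–Rothery follow-the-leader family has each driver accelerating in proportion to the speed difference with the vehicle ahead; a minimal Euler-integrated sketch with illustrative parameters:

```python
import numpy as np

def follow_the_leader(n=10, kappa=0.6, dt=0.1, steps=600):
    """Linear follow-the-leader: dv_i/dt = kappa * (v_{i-1} - v_i).
    The lead vehicle (index 0) brakes briefly; the wave propagates back."""
    x = -np.arange(n) * 20.0   # initial positions, 20 m headways
    v = np.full(n, 15.0)       # initial speeds, m/s
    for t in range(steps):
        a = np.zeros(n)
        a[1:] = kappa * (v[:-1] - v[1:])
        if 50 <= t < 100:
            v[0] = max(v[0] - 3.0 * dt, 5.0)  # leader brakes at 3 m/s^2
        v[1:] += a[1:] * dt
        x += v * dt
    return x, v

x, v = follow_the_leader()
print("final speeds:", np.round(v, 2))  # followers relax toward the leader
```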
Abstract:
The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How to systematically assess the impact of disturbances in (food) SC processes on the robustness of (food) SC performances?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and define the state of the art of knowledge on the related topics. For the second and third RQ, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs and to make a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature, we identified the main elements needed to achieve robust performances and structured them into a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessments, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific Key Performance Indicator (KPI) in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
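The kind of output the VULA method produces can be illustrated on a KPI time series: count the episodes in which the KPI falls below its performance norm, and report how often they occur, how long they last, and how deep they go. This sketch is our illustrative re-creation, not the published method, and the service-level data are invented.

```python
import numpy as np

def vulnerability_stats(kpi, norm):
    """Frequency, mean duration and mean shortfall of below-norm episodes."""
    kpi = np.asarray(kpi, float)
    below = kpi < norm
    episodes, run = [], 0
    for b in below:
        if b:
            run += 1
        elif run:
            episodes.append(run)
            run = 0
    if run:
        episodes.append(run)
    shortfall = (norm - kpi)[below]
    return {"episodes": len(episodes),
            "mean_duration": float(np.mean(episodes)) if episodes else 0.0,
            "mean_shortfall": float(shortfall.mean()) if episodes else 0.0}

# Daily service level (%) against a 95% norm, invented data:
kpi = [97, 96, 93, 91, 96, 97, 94, 96, 92, 90, 89, 96]
print(vulnerability_stats(kpi, norm=95))
# -> {'episodes': 3, 'mean_duration': 2.0, 'mean_shortfall': 3.5}
```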
To sum up the project, all findings were incorporated into an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance-impact-reducing redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies, repeating the VULA method for the KPIs defined in Step 3.
Contributions of this research are listed as follows. First, we have identified an emerging research area: SC robustness, and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and the specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs, while processes and SC networks are influenced by DMPs. Fourth, we developed specific metrics for vulnerability assessments, which serve as the basis of the VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.
Abstract:
Coordination among supply chain members is essential for better supply chain performance. An effective way to improve supply chain coordination is to implement proper coordination mechanisms. The primary objective of this research is to study the performance of a multi-level supply chain while using selected coordination mechanisms separately, and in combination, under lost-sale and backorder cases. The coordination mechanisms used in this study are price discount, delay in payment and different types of information sharing. Mathematical modelling and simulation modelling are used to analyse the performance of the supply chain under these mechanisms. Initially, a three-level supply chain consisting of a supplier, a manufacturer and a retailer was used to study the combined effect of price discount and delay in payment on the performance (profit) of the supply chain using mathematical modelling. This study showed that implementing individual mechanisms improves the performance of the supply chain compared with no coordination, and that when more than one mechanism is used in combination, performance in most cases improves further. The three-level supply chain considered in the mathematical modelling was then extended to a three-level network supply chain consisting of four retailers, two wholesalers, and a manufacturer with an infinite part supplier. The performance of this network supply chain was analysed under both lost-sale and backorder cases using simulation modelling with the same mechanisms (price discount and delay in payment) used in the mathematical modelling. This study also showed that the performance of the supply chain improves significantly when combinations of mechanisms are used, as obtained earlier. The effect (increase in profit) of delay in payment, and of the combination of price discount and delay in payment, on SC profit is relatively high in the lost-sale case. Sensitivity analysis showed that the retailer's order cost plays a major role in the performance of the supply chain, as it determines the order quantities of the other players, and that supply chain profit changes proportionally with the rate of return of any player. In the case of price discount, elasticity of demand is an important factor in improving the performance of the supply chain. It was also found that a change in the permissible delay in payment given by the seller to the buyer affects SC profit more than the delay in payment availed by the buyer from the seller. In continuation of the above, a study of the performance of a four-level supply chain consisting of a manufacturer, a wholesaler, a distributor and a retailer, with information sharing as the coordination mechanism, under lost-sale and backorder cases, was conducted using a simulation game with live players. In this study, the best performance was obtained when sharing 'demand and supply chain performance' information, compared with the other seven types of information sharing, including the traditional method. The study also revealed that the effect of information sharing on supply chain performance is higher in the lost-sale case than in the backorder case. In-depth analysis showed that lack of information sharing need not always result in the bullwhip effect; instead, it produced a large increase in lost-sales or backorder costs, which is also unfavourable for the supply chain. The overall analysis quantified the extent of improvement in supply chain performance under the different cases, and the sensitivity analysis revealed useful insights about the decision variables that will help supply chain management practitioners take appropriate decisions.
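As a worked illustration of why demand elasticity governs whether a price discount pays off, consider a single retailer under constant-elasticity demand; all numbers are invented.

```python
def retailer_profit(price, base_price=10.0, base_demand=100.0,
                    elasticity=1.5, unit_cost=6.0):
    """Toy constant-elasticity demand: Q = Q0 * (p0 / p) ** e."""
    demand = base_demand * (base_price / price) ** elasticity
    return (price - unit_cost) * demand

for e in (0.8, 1.5, 3.0):
    full = retailer_profit(10.0, elasticity=e)
    disc = retailer_profit(9.0, elasticity=e)  # a 10% price discount
    print(f"elasticity {e:3.1f}: profit {full:6.1f} without discount, "
          f"{disc:6.1f} with discount")
```

Only at sufficiently high elasticity does the extra demand outweigh the margin given away, which is consistent with the sensitivity result reported above.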