743 results for Research performance
Abstract:
It is estimated that between 15 and 40% of gifted students perform below their potential (Seeley, 1994). Recently, several researchers have investigated this phenomenon and attempted to explain it. However, only one research team has examined the impact of socio-educational climates on the performance of gifted students, even though this link is well established among non-gifted youth. This study aims to explore the possible relationship between socio-educational climates, the family environment, and academic performance among gifted youth, in order to fill this gap in the literature. To this end, a sample of 1,885 Secondary 3 students drawn from the evaluation database of the SIAA program was studied in order to build a model of the effect of these environments on the performance of gifted youth, mediated by motivation. Our results suggest that the impact of socio-educational and family environments on the performance of gifted youth is minimal, even though it is substantial among non-gifted youth.
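The abstract above describes a model in which motivation mediates the effect of socio-educational and family environments on achievement. As a rough, hypothetical illustration of how such a mediation can be estimated with two regressions (this is not the SIAA analysis; the data file and column names below are invented), a minimal sketch in Python:

```python
# Minimal sketch of the kind of mediation model described above
# (environment -> motivation -> achievement). The file and column names
# are hypothetical; the SIAA evaluation data are not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("siaa_sample.csv")  # hypothetical extract: one row per student

# Path a: socio-educational climate and family environment predicting motivation
path_a = smf.ols("motivation ~ climate + family_env", data=df).fit()

# Paths b and c': motivation and environments predicting achievement
path_b = smf.ols("achievement ~ motivation + climate + family_env", data=df).fit()

# Indirect (mediated) effect of climate through motivation
indirect = path_a.params["climate"] * path_b.params["motivation"]
print("indirect effect of climate via motivation:", round(indirect, 3))
print("direct effect of climate:", round(path_b.params["climate"], 3))
```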
Abstract:
Several studies have shown that consuming a low glycemic index (LGI) meal before exercise promotes lipid oxidation, reduces episodes of hypoglycemia, and improves performance. However, other studies have not observed these benefits. The aim of this study was to determine the impact of ingesting breakfasts with different glycemic indices (GI) on the endurance performance of elite cyclists. Ten male cyclists completed three randomly assigned protocols, separated by a minimum interval of seven days. The three protocols consisted of a time trial performed either three hours after consuming a low or high glycemic index breakfast containing 3 grams of carbohydrate per kg of body weight, or in the fasted state. The race-time results showed no significant difference between protocols. However, cadence (revolutions per minute, RPM) was significantly higher with the high glycemic index (HGI) protocol (94.3 ± 9.9 RPM) than with the fasted protocol (87.7 ± 8.9 RPM) (p < 0.005). For capillary blood glucose, at 30 minutes of exercise, glycemia was significantly higher with the fasted protocol (5.47 ± 0.76 mmol/L) than with HGI (4.99 ± 0.91 mmol/L) (p < 0.002). Our study did not confirm that consuming an LGI meal before exercise significantly improves physical performance. However, it did show that the diversity of protocols used to assess the impact of glycemic index on physical performance can produce positive variations in race times. Further research reproducing the athlete's competitive situation, namely the time trial, is therefore needed.
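Since the same ten cyclists completed every protocol, the reported differences (for example in cadence) correspond to within-subject comparisons. A minimal, purely illustrative sketch of such a paired comparison, using made-up numbers rather than the study's data:

```python
# Illustrative sketch only: a within-subject (paired) comparison of cadence
# between two breakfast protocols, as in the study above. The values below
# are invented; the study's raw data are not available here.
import numpy as np
from scipy import stats

rpm_hgi = np.array([95, 102, 88, 97, 110, 90, 84, 99, 93, 85], dtype=float)
rpm_fasted = np.array([88, 95, 83, 90, 101, 85, 78, 92, 88, 77], dtype=float)

t_stat, p_value = stats.ttest_rel(rpm_hgi, rpm_fasted)  # paired t-test
print(f"mean difference = {np.mean(rpm_hgi - rpm_fasted):.1f} RPM, p = {p_value:.4f}")
```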
Abstract:
Directed study submitted to the Faculté des sciences infirmières in partial fulfilment of the requirements for the degree of Master of Science (M.Sc.) in Nursing, option Nursing Services Administration
Abstract:
Physical activity is an integral part of medical recommendations for preventing and treating coronary artery disease. By following a structured training program, would it be possible to improve the exercise response while also offering cardiac protection to the patient? This is what some studies on ischemic preconditioning (IPC) induced by a maximal exercise test appear to show. The same physiological mechanisms induced by IPC are also observed when a cuff is used to create ischemia/reperfusion cycles on a skeletal muscle, a method known as remote ischemic preconditioning (RIPC). At the other end of the physical activity spectrum, athletes have used RIPC during their warm-up to improve performance. The following research projects were built with the aim of examining these premises. The first study examines the effects of RIPC on short supramaximal efforts. Subjects (N = 16) performed an alactic test (6 × 6-second supramaximal sprints) followed by a lactic test (30-second supramaximal effort) on a cycle ergometer, and were randomly assigned to an RIPC intervention or a control intervention (CON) before the efforts. The RIPC procedure consisted of four five-minute ischemia cycles using an arm cuff inflated to 50 mm Hg above systolic blood pressure. The results show that the RIPC intervention had no significant effect on performance classically attributed to the "anaerobic system", despite a slight increase in peak power in favour of RIPC on the 30-second Wingate test (795 W vs. 777 W) and on the six-second force-velocity test (856 W vs. 847 W). The second clinical trial examined the effects of RIPC, using the method developed in the first project, on oxygen uptake kinetics during an eight-minute moderate effort (75% of the ventilatory threshold) and an eight-minute intense effort (115% of the ventilatory threshold). Our results show a significant acceleration of oxygen uptake kinetics with the RIPC intervention compared with CON at both exercise intensities (τ1 at moderate intensity: 27.2 ± 4.6 seconds vs. 33.7 ± 6.2, p < 0.01; intense: 29.9 ± 4.9 seconds vs. 33.5 ± 4.1, p < 0.001) in amateur athletes (N = 15). This translates into a reduced oxygen deficit at the onset of exercise and a faster attainment of steady state. The third project was a systematic review and meta-analysis of exercise-induced ischemic preconditioning (IPC) in coronary patients, using variables derived from the electrocardiogram and exercise test parameters. Our literature search identified 309 articles, of which 34 were included in the meta-analysis, representing 1,053 patients. Our statistical analyses show that, in a subsequent effort, patients increase their time to 1 mm of ST-segment depression by 91 seconds (p < 0.001); maximal ST depression decreases by 0.38 mm (p < 0.01); the rate-pressure product at 1 mm of ST-segment depression increases by 1.80 × 10³ mm Hg (p < 0.001); and total exercise time increases by 50 seconds (p < 0.001).
Our research projects have advanced knowledge in exercise science regarding the use of a cuff as an RIPC stimulus before physical effort. We evaluated the effect of RIPC on different metabolic pathways during exercise and concluded that the method may accelerate oxygen uptake kinetics and thereby reduce the oxygen deficit. Our findings thus shed light on the improvements in time-trial performance reported by other authors. In addition, we established clinical parameters for evaluating exercise-induced IPC in coronary patients.
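The τ1 values reported in the second study come from fitting the oxygen-uptake response with a time constant. A hedged sketch of the mono-exponential (phase II) model commonly used for this purpose, applied to synthetic data rather than the study's measurements:

```python
# Hedged sketch: fitting the phase-II mono-exponential model commonly used
# for oxygen-uptake kinetics, from which a time constant (tau) such as the
# tau1 values above is obtained. The breath-by-breath data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def vo2_model(t, baseline, amplitude, delay, tau):
    """VO2(t) = baseline + A * (1 - exp(-(t - delay) / tau)) for t >= delay."""
    return baseline + amplitude * (1.0 - np.exp(-np.maximum(t - delay, 0.0) / tau))

t = np.arange(0, 480, 5.0)                         # 8-minute bout, 5-s bins
true = vo2_model(t, 0.8, 1.6, 15.0, 30.0)          # synthetic "true" response (L/min)
vo2 = true + np.random.default_rng(0).normal(0, 0.05, t.size)  # add breath noise

popt, _ = curve_fit(vo2_model, t, vo2, p0=[0.8, 1.5, 10.0, 25.0])
print(f"estimated tau = {popt[3]:.1f} s")           # compare RIPC vs. control values
```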
Abstract:
The present study describes the design and development of a performance evaluation prototype for IT organizations in the context of outsourcing. The main objective of this research is to help an IT organization in an outsourcing context assess its current standing, so that it can take corrective steps wherever necessary and strive for continuous improvement. The service level management (SLM) process plays a crucial role in controlling the quality of IT service provision. Outsourcing is the process of entrusting the responsibility of providing certain goods and services to an external party. We have identified as many as twenty complexities and categorized them under four headings: complexities associated with contracts and SLAs, with the SLM process, with the SLM organization, and complexities due to intrinsic characteristics. This study shows that it is possible to measure the quality of the performance of an IT organization in an outsourcing environment effectively.
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Although much has been published in research journals, the practice of PPC lags behind and makes little use of this research. PPC practice in SMEs lags behind for many reasons, which need to be explored. This research work deals with the effect of identified variables, such as the forecasting, planning and control methods adopted, the demographics of the key person, the standardization practices followed, and the effect of training, learning and IT usage, on firm performance. A model and framework have been developed based on the literature. The model was tested empirically using data collected through a questionnaire administered to selected respondents from Small and Medium Enterprises (SMEs) in India; the final data set comprised 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formulated and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model, and a confirmatory factor analysis was used to study the structural relationship between firm performance and the identified variables.
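As a rough illustration of the classification step described above (separating high- and low-performing firms with logistic regression), the following sketch uses invented feature names and a hypothetical data file, not the study's actual questionnaire:

```python
# Minimal sketch of classifying high- vs. low-performing firms with logistic
# regression. Feature names and the survey file are placeholders, not the
# study's instrument or data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("sme_survey.csv")  # hypothetical: 382 responses, one per firm
features = ["forecasting", "planning", "control", "training", "it_usage"]
X, y = df[features], df["high_performer"]  # y: 1 = high performing, 0 = low

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```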
Abstract:
This research compares Residue Number System (RNS) based Finite Impulse Response (FIR) digital filters with traditional FIR filters, motivated by the importance of efficient filter implementations for digital signal processing. The comparison is made in terms of speed and area requirements for various filter specifications. RNS-based FIR filters operate more than three times faster and consume only about 60% of the area of a traditional filter when the number of filter taps exceeds 32. The area of the RNS filter also increases at a lower rate than that of the traditional filter, resulting in lower power consumption. RNS is a non-weighted number system without carry propagation between different residue digits; this enables simultaneous parallel processing of all the digits, resulting in high-speed addition and multiplication in the RNS domain.
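To make the carry-free property concrete, here is a small sketch of RNS arithmetic: operands are stored as residues modulo a set of pairwise-coprime moduli, addition and multiplication act on each residue channel independently, and the result is recovered via the Chinese Remainder Theorem. The moduli are illustrative, not the filter design studied in this work:

```python
# Sketch of the carry-free arithmetic that motivates RNS-based filters.
from math import prod

MODULI = (7, 11, 13)          # pairwise coprime; dynamic range = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_op(a, b, op):
    # Add or multiply digit-by-digit with no carries between residue channels.
    return tuple(op(ai, bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = to_rns(37), to_rns(21)
print(from_rns(rns_op(a, b, lambda p, q: p * q)))   # 777 == 37 * 21
print(from_rns(rns_op(a, b, lambda p, q: p + q)))   # 58  == 37 + 21
```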
Abstract:
Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker-independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone, and the proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, they are first cleaned using wavelet denoising based on soft thresholding. Features are then extracted using Discrete Wavelet Transforms (DWT), which are well suited to processing non-stationary signals like speech because of their multi-resolution, multi-scale analysis characteristics. Speech recognition is a multiclass classification problem, so the resulting feature vectors are classified using three classifiers capable of handling multiple classes: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes. During the classification stage, the classifiers are trained on feature vectors from known patterns and then evaluated on a test data set. The performance of each classifier is evaluated in terms of recognition accuracy, and all three methods produce good results: DWT with ANN gives an accuracy of 89%, DWT with SVM gives 86.6% and DWT with Naive Bayes gives 83.5%. ANN is found to be the best of the three methods.
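A compact sketch of the kind of pipeline described above: soft-threshold wavelet denoising, DWT sub-band energies as features, and a neural-network classifier. The signals and labels below are synthetic; this is not the Malayalam digit corpus or the exact feature set used in the work:

```python
# Hedged sketch of a DWT-feature + classifier pipeline on synthetic audio.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Soft-threshold denoise, then summarize each DWT sub-band by its energy."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [pywt.threshold(c, thresh, mode="soft") for c in coeffs]
    return np.array([np.sum(c ** 2) for c in coeffs])          # one energy per band

rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.normal(size=8000)) for _ in range(100)])  # fake utterances
y = rng.integers(0, 10, size=100)                                        # fake digit labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
```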
Abstract:
Eight hundred and eighty-five strains of bacterial isolates from various samples associated with the natural habitat of Macrobrachium rosenbergii were screened for their probiotic potential. Two putative probionts, namely Bacillus NL110 and Vibrio NE17, isolated from the larvae and egg samples respectively, were selected for experimental studies and were introduced to the juveniles of M. rosenbergii (0.080 ± 0.001 g) through different modes: through feed, through water, and both. The probiotic potential of the above bacteria in terms of improvements in water quality, growth, survival, specific growth rate (SGR), feed conversion ratio and immune parameters was evaluated. The treatment groups showed a significant improvement in SGR and weight gain (P<0.001). Survival among the different treatment groups was better than that in the control group. There were also significant improvements in water quality parameters such as the concentration of nitrate and ammonia in the treatment groups (P<0.05). Improvements in immune parameters such as the total haemocyte count (P<0.05), phenoloxidase activity and respiratory burst were also significant (P<0.001). It is concluded that screening of the natural microflora of cultured fish and shellfish for putative probionts might yield probiotic strains of bacteria that could be utilized for an environment-friendly and organic mode of aquaculture.
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviour in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
A sandwich construction is a special form of laminated composite consisting of a lightweight core sandwiched between two stiff, thin face sheets. Due to its high stiffness-to-weight ratio, sandwich construction is widely adopted in the aerospace industry. As a process-dependent bonded structure, the most severe defects associated with sandwich construction are debond (skin-core bond failure) and dent (locally deformed skin associated with core crushing). Debonds may be attributed to initial manufacturing flaws or in-service loads, while dents can be caused by tool drops or impacts by foreign objects. This paper presents an evaluation of the performance of a honeycomb sandwich cantilever beam in the presence of a debond or a dent, using layered finite element models. The dent is idealized by accounting for core crushing in the core thickness along with the eccentricity of the skin. The debond is idealized using multilaminate modeling at the debond location with contact elements between the laminates. Vibration and buckling analyses of the metallic honeycomb sandwich beam with and without damage are carried out. The buckling load factor, natural frequencies, mode shapes and modal strain energy are evaluated using the finite element package ANSYS 13.0. The study shows that a debond affects the performance of the structure more severely than a dent. The reduction in the fundamental frequencies due to the presence of a dent or debond is not significant for the cases considered, but the debond reduces the buckling load factor significantly. A dent of size 8-20% of the core thickness shows a 13% reduction in the buckling load capacity of the sandwich column, whereas a debond of the same size reduces the buckling load capacity by about 90%. This underscores the importance of detecting these damages at the initiation stage to avoid catastrophic failures. The influence of the damages on fundamental frequencies, mode shapes and modal strain energy is examined, and the effectiveness of these parameters as damage detection tools for sandwich structures is also assessed.
Abstract:
The country has witnessed a tremendous increase in vehicle population and in axle loading during the last decade, leaving its road network overstressed and prone to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure; inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaving, etc. Functional condition determines the level of service provided by the facility to its users at a particular time, and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration helps assess the remaining effective service life (RSL) of the pavement structure on the basis of the reduction in performance levels, and supports the application of alternative designs and rehabilitation strategies together with long-range funding requirements for pavement preservation. In addition, prediction models can estimate the impact of a treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Recognizing the need for a detailed study involving all types of roads in the state, with varying traffic and soil conditions, the present study was undertaken. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suitable to Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and also from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work is a study of the road condition of Kerala with respect to varying soil, traffic and climatic conditions, a periodic performance evaluation of selected roads of representative types, and the development of distress prediction models for the roads of Kerala. In order to achieve this aim, the study is organized in two parts. The first part deals with the pavement condition and subgrade soil properties of urban roads distributed across five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was determined using the Sand Replacement Method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and the 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting traffic volume count and axle load surveys. From the data thus collected, the strength of the pavement, a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct value method developed by the U.S. Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. This relationship will help practicing engineers design the overlay thickness required for the pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between rebound deflection and CBR, as well as other soil properties, for the gravel, sand, silt and clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH) and Major District Roads (MDR), located in different geographical conditions and carrying varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method, and rebound deflection using the Benkelman Beam. Baseline data for the study stretches were collected as one-time data, and pavement history was obtained as secondary data. Pavement drainage characteristics were recorded in terms of camber or cross slope, measured with a camber board (slope meter) for the carriageway and shoulders, availability of longitudinal side drains, presence of valleys, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and the cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
The progression of deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon influenced by all the above factors, pavement deterioration models were developed as non-linear regression models using SPSS with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS, and a model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was carried out using the data for cracking, raveling, potholes and roughness, and validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were the base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was carried out considering the Life Cycle Cost (LCC), and the average speeds obtainable under each option were also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for estimating the maximum wheel load stresses for different slab thicknesses under different soil conditions; the charts show the maximum stress for a particular slab thickness and different soil conditions incorporating different k-values, and can be handy for a design engineer. Fuzzy rule-based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship, and relationships were also developed between the Skid Number and the macro texture of the pavement. The effort made through this research work will help highway engineers understand the behaviour of flexible pavements under Kerala conditions and arrive at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
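As a small illustration of the structural indices used throughout the study, the sketch below computes a Structural Number from layer coefficients and thicknesses and a Modified Structural Number from the subgrade CBR. The layer coefficients are generic textbook-style values and the MSN expression is one commonly used HDM-style formulation; both should be read as assumptions, not the calibrations developed in this work:

```python
# Illustrative sketch of SN and MSN. Layer coefficients and the MSN formula
# are assumed, generic values/forms, not the thesis calibration.
import math

def structural_number(layers):
    """layers: iterable of (layer_coefficient, thickness_in_inches)."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    # MSN = SN + 3.51*log10(CBR) - 0.85*(log10(CBR))**2 - 1.43  (HDM-style form)
    log_cbr = math.log10(cbr)
    return sn + 3.51 * log_cbr - 0.85 * log_cbr ** 2 - 1.43

layers = [(0.40, 2.0), (0.14, 10.0)]   # e.g. bituminous surfacing over granular base
sn = structural_number(layers)
print(f"SN = {sn:.2f}, MSN = {modified_structural_number(sn, cbr=6.0):.2f}")
```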
Abstract:
Coordination among supply chain members is essential for better supply chain performance. An effective way to improve supply chain coordination is to implement proper coordination mechanisms. The primary objective of this research is to study the performance of a multi-level supply chain while using selected coordination mechanisms separately, and in combination, under lost sale and backorder cases. The coordination mechanisms used in this study are price discount, delay in payment and different types of information sharing. Mathematical modelling and simulation modelling are used to analyse the performance of the supply chain under these mechanisms. Initially, a three-level supply chain consisting of a supplier, a manufacturer and a retailer was used to study the combined effect of price discount and delay in payment on the performance (profit) of the supply chain using mathematical modelling. This study showed that implementing individual mechanisms improves the performance of the supply chain compared with no coordination, and when more than one mechanism is used in combination, performance in most cases improves further. The three-level supply chain considered in the mathematical modelling was then extended to a three-level network supply chain consisting of four retailers, two wholesalers, and a manufacturer with an infinite part supplier. The performance of this network supply chain was analysed under both lost sale and backorder cases using simulation modelling with the same mechanisms ('price discount' and 'delay in payment') used in the mathematical modelling. This study also showed that the performance of the supply chain is significantly improved when combinations of mechanisms are used, as obtained earlier. It was found that the effect (increase in profit) of 'delay in payment', and of the combination of 'price discount' and 'delay in payment', on supply chain profit is relatively high in the lost sale case. Sensitivity analysis showed that the order cost of the retailer plays a major role in the performance of the supply chain, as it decides the order quantities of the other players in the supply chain in this study. Sensitivity analysis also showed that supply chain profit changes proportionally with the rate of return of any player. In the case of price discount, the elasticity of demand is an important factor in improving the performance of the supply chain. It was also found that a change in the permissible delay in payment given by the seller to the buyer affects supply chain profit more than the delay in payment availed by the buyer from the seller. Building on the above, the performance of a four-level supply chain consisting of a manufacturer, a wholesaler, a distributor and a retailer, with 'information sharing' as the coordination mechanism, was studied under lost sale and backorder cases using a simulation game with live players. In this study, the best performance was obtained when 'demand and supply chain performance' was shared, compared with the seven other types of information sharing, including the traditional method. The study also revealed that the effect of information sharing on supply chain performance is higher in the lost sale case than in the backorder case. The in-depth analysis in this part of the study showed that a lack of information sharing need not always result in the bullwhip effect.

Instead of the bullwhip effect, the lack of information sharing produced a large increase in lost sales cost or backorder cost in this study, which is also unfavourable for the supply chain. The overall analysis established the extent of improvement in supply chain performance under the different cases. Sensitivity analysis revealed useful insights about the decision variables of the supply chain, which will help supply chain management practitioners take appropriate decisions.
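A toy sketch of the kind of comparison made in the simulation studies above: a two-echelon chain run with and without sharing of end-customer demand, counting lost sales. All parameters (batch size, base-stock levels, demand range) are arbitrary assumptions; this is not the thesis model or the simulation game itself:

```python
# Toy retailer-wholesaler simulation comparing planning on shared end-customer
# demand versus planning only on the retailer's lumpy orders.
import random
from collections import deque

def simulate(share_demand, periods=300, seed=7):
    rng = random.Random(seed)
    retailer_inv, wholesaler_inv = 60, 40
    signal_history = deque([20] * 4, maxlen=4)   # wholesaler's demand estimate
    lost_sales = 0
    for _ in range(periods):
        demand = rng.randint(10, 30)
        sold = min(demand, retailer_inv)
        lost_sales += demand - sold
        retailer_inv -= sold
        # Retailer orders in fixed batches of 40 once stock drops below 30.
        order = 40 if retailer_inv < 30 else 0
        shipped = min(order, wholesaler_inv)
        retailer_inv += shipped
        wholesaler_inv -= shipped
        # With sharing, the wholesaler plans on true demand; without it, only
        # on the lumpy retailer orders (a bullwhip-like distortion).
        signal_history.append(demand if share_demand else order)
        target = 2 * sum(signal_history) / len(signal_history)
        wholesaler_inv += max(0, int(target) - wholesaler_inv)
    return lost_sales

print("lost sales without sharing:", simulate(share_demand=False))
print("lost sales with sharing   :", simulate(share_demand=True))
```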