944 results for Discrete event simulation


Relevance:

100.00%

Publisher:

Abstract:

This paper reports on an attempt to apply Genetic Algorithms to the problem of optimising a complex system through discrete event simulation (Simulation Optimisation), with a view to reducing the noise associated with such a procedure. We apply this proposed solution approach to our application test bed, a Crossdocking distribution centre, because it is a good representative of the random and unpredictable behaviour of complex systems, e.g. random failures of automated machinery and variability in the skill of manual order pickers. It is known that there is noise in the output of discrete event simulation modelling. However, our interest focuses on the effect of this noise on the evaluation of the fitness of candidate solutions within the search space, and on the development of techniques to handle it. The unique quality of our proposed solution approach is that we intend to embed a noise reduction technique in our Genetic Algorithm based optimisation procedure, so that it is robust enough to handle noise, efficiently estimates a suitable fitness function, and produces good quality solutions with minimal computational effort.
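
As a purely illustrative sketch of the noise-handling idea (not the authors' implementation), the following Python fragment averages several replications of a hypothetical noisy simulation before the usual Genetic Algorithm selection step; the toy objective, the `simulate_crossdock` stand-in and all parameter values are assumptions.

```python
import random

def simulate_crossdock(solution, seed):
    """Hypothetical stand-in for one replication of a noisy
    discrete event simulation: returns a cost plus random noise."""
    rng = random.Random(seed)
    base_cost = sum((x - 3) ** 2 for x in solution)      # toy objective
    return base_cost + rng.gauss(0, 5)                   # simulation noise

def fitness(solution, replications=10):
    """Noise reduction: average several independent replications
    so the GA ranks candidates on a steadier estimate."""
    return sum(simulate_crossdock(solution, r) for r in range(replications)) / replications

def genetic_algorithm(pop_size=20, genes=4, generations=30):
    pop = [[random.randint(0, 6) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)                # lower cost is better
        parents = scored[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < 0.2:                    # mutation
                child[random.randrange(genes)] = random.randint(0, 6)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    print("best solution found:", genetic_algorithm())
```

Increasing `replications` steadies the fitness estimate at the cost of more simulation runs, which is the trade-off between noise handling and computational effort that the paper targets.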

Relevance:

100.00%

Publisher:

Abstract:

Automotive producers are aiming to make their order fulfilment processes more flexible. Opening the pipeline of planned products for dynamic allocation to dealers/customers is a significant step towards greater flexibility, but the behaviour of such Virtual-Build-To-Order systems is complex to predict and their performance varies significantly as product variety levels change. This study investigates the potential for intelligent control of the pipeline feed, taking into account the current status of inventory (level and mix), the volume and mix of unsold products in the planning pipeline, and the demand profile. Five ‘intelligent’ methods for selecting the next product to be planned into the production pipeline are analysed using a discrete event simulation model and compared to an unintelligent random feed. The methods are tested under two conditions: first, when customers must be fulfilled with the exact product they request, and second, when customers trade off a shorter waiting time against a compromise in specification. The two forms of customer behaviour have a substantial impact on the performance of the methods, and there are also significant differences between the methods themselves. When the producer has an accurate model of customer demand, methods that attempt to harmonise the mix in the system with the demand distribution are superior.
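
As a minimal, hypothetical illustration of a rule that harmonises the mix in the system with the demand distribution (not one of the study's five actual methods), the sketch below picks the variant whose share of the pipeline-plus-inventory mix falls furthest below its demand share; the variant names and demand shares are invented.

```python
from collections import Counter

# Hypothetical demand profile: share of customers wanting each variant.
DEMAND_SHARE = {"base": 0.5, "sport": 0.3, "luxury": 0.2}

def next_product_to_plan(unsold_pipeline, inventory):
    """Choose the variant whose current share in the system (pipeline +
    inventory) is furthest below its share in the demand profile."""
    mix = Counter(unsold_pipeline) + Counter(inventory)
    total = sum(mix.values()) or 1
    def shortfall(variant):
        return DEMAND_SHARE[variant] - mix[variant] / total
    return max(DEMAND_SHARE, key=shortfall)

if __name__ == "__main__":
    pipeline = ["base"] * 5 + ["sport"] * 2
    stock = ["base"] * 3
    print(next_product_to_plan(pipeline, stock))   # "luxury" is most under-represented here
```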

Relevance:

100.00%

Publisher:

Abstract:

This paper reports on continuing research into the modelling of an order picking process within a Crossdocking distribution centre using Simulation Optimisation. The aim of this project is to optimise a discrete event simulation model and to understand the factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure and the number of replications required to evaluate the optimisation objective function through simulation influence the effectiveness of the optimisation technique. We experimented with Common Random Numbers in order to improve the precision of our simulation output performance measure, and intended to use the number of replications utilised for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of the selected simulation output performance measure using Common Random Numbers at various levels of replication. Furthermore, after optimising the Crossdocking distribution centre simulation model, we are able to achieve optimal performance using fewer simulation runs for the model that uses Common Random Numbers than for the model that does not.
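
The following sketch illustrates the general Common Random Numbers idea on a toy single-server queue rather than the Crossdocking model: both configurations reuse the same seeds, so their outputs are positively correlated and the difference between them is estimated with less variance. All model details and parameter values are assumptions.

```python
import random
import statistics

def simulate_queue(service_rate, seed, n_customers=500):
    """Toy single-server queue; returns the mean waiting time.
    The same seed reproduces the same arrival/service randomness."""
    rng = random.Random(seed)
    clock = finish = 0.0
    waits = []
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)                  # inter-arrival time
        start = max(clock, finish)
        waits.append(start - clock)
        finish = start + rng.expovariate(service_rate)
    return statistics.mean(waits)

def compare(service_a, service_b, replications=10, common=True):
    """Estimate the performance difference A - B across replications,
    using Common Random Numbers when `common` is True."""
    diffs = []
    for r in range(replications):
        seed_a = r
        seed_b = r if common else r + 10_000           # CRN: identical seeds for both systems
        diffs.append(simulate_queue(service_a, seed_a) -
                     simulate_queue(service_b, seed_b))
    return statistics.mean(diffs), statistics.stdev(diffs)

if __name__ == "__main__":
    for common in (False, True):
        mean_d, sd_d = compare(1.2, 1.4, common=common)
        print(f"CRN={common}: mean diff={mean_d:.3f}, std dev={sd_d:.3f}")
```

Running the script typically shows a noticeably smaller standard deviation of the estimated difference when CRN is switched on, which is the precision gain the paper exploits.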

Relevance:

100.00%

Publisher:

Abstract:

Data flow computers are high-speed machines in which an instruction is executed as soon as all its operands are available. This paper describes the EXtended MANchester (EXMAN) data flow computer, which incorporates three major extensions to the basic Manchester machine: a multiple matching units scheme, an efficient implementation of the array data structure, and a facility to execute reentrant routines concurrently. A simulator for the EXMAN computer has been coded in the discrete event simulation language SIMULA 67 on the DEC 1090 system. Performance analysis studies have been conducted on the simulated EXMAN computer to study the effectiveness of the proposed extensions. The performance experiments were carried out using three sample problems: matrix multiplication, Bresenham's line drawing algorithm, and the polygon scan-conversion algorithm.
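
The EXMAN simulator itself was written in SIMULA 67 and is not reproduced here; purely to illustrate the discrete event mechanism such a simulator rests on, the Python sketch below drives a tiny, hypothetical dataflow fragment from a time-ordered event queue, firing an instruction as soon as its last operand token arrives.

```python
import heapq

# Hypothetical two-instruction dataflow fragment computing (a + b) * a.
INSTRUCTIONS = {
    "add": {"needs": {"a", "b"}, "produces": [("mul", "x")], "latency": 2},
    "mul": {"needs": {"x", "a2"}, "produces": [], "latency": 3},
}

def simulate(initial_tokens):
    """Event-driven simulation: an instruction is scheduled for execution
    as soon as its last operand token arrives (the dataflow firing rule)."""
    events = []                                     # (time, kind, payload) min-heap
    operands = {name: {} for name in INSTRUCTIONS}
    for instr, port, value in initial_tokens:
        heapq.heappush(events, (0, "token", (instr, port, value)))
    while events:
        time, kind, payload = heapq.heappop(events)
        if kind == "token":
            instr, port, value = payload
            operands[instr][port] = value
            if set(operands[instr]) >= INSTRUCTIONS[instr]["needs"]:
                done = time + INSTRUCTIONS[instr]["latency"]
                heapq.heappush(events, (done, "fire", instr))
        else:                                       # "fire": the instruction completes
            instr = payload
            print(f"t={time}: {instr} fired with operands {operands[instr]}")
            for target, port in INSTRUCTIONS[instr]["produces"]:
                heapq.heappush(events, (time, "token", (target, port, "result")))

if __name__ == "__main__":
    simulate([("add", "a", 1), ("add", "b", 2), ("mul", "a2", 1)])
```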

Relevance:

100.00%

Publisher:

Abstract:

This report describes a project developed over several months. The project focuses on the simulation of concurrent systems; in addition to covering Petri nets, which are widely used in this field, it also provides enough information to program a simulator. The development is based on a statistical discrete event simulator, the goal being to build software for simulating systems formalised as Petri nets. The aim of the project is to simplify the programming of the simulator by using an object-oriented language, namely Java, and to exploit the resources of that environment, particularly those related to XML technology. The project can be divided into two main parts. The first part introduces simulation in computing and provides the necessary background; this is essential for understanding the scope of the simulator to be programmed and for implementing the different classes. In addition, random variables and their simulation are covered, with the aim of making the simulation process as realistic as possible. Petri nets are then presented, highlighting their properties and the different ways of classifying them. Moreover, since XML is used to define the Petri nets, the corresponding documents and schemas are analysed, as they form the basis of the application to be developed. The design and implementation of the classes that form the core of the application are collected in the penultimate chapter: on the one hand, information about the DOM structure used is presented, and on the other, the code of the Java classes needed to handle the PetriNet instances obtained from the XML is shown. Finally, in addition to the author's conclusions, the bibliography used during the development of the project is given.
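
The project described above is implemented in Java on top of XML/DOM, which is not reproduced here; as a language-neutral illustration of the core firing rule such a simulator needs, the Python sketch below repeatedly fires a randomly chosen enabled transition of a small, invented Petri net and updates the marking.

```python
import random

# Hypothetical Petri net: places with token counts, transitions with
# input and output arcs (all arc weights equal to 1).
marking = {"free_machine": 1, "queue": 3, "busy": 0, "done": 0}
transitions = {
    "start_job": {"in": ["free_machine", "queue"], "out": ["busy"]},
    "end_job":   {"in": ["busy"], "out": ["free_machine", "done"]},
}

def enabled(name):
    """A transition is enabled when every input place holds a token."""
    return all(marking[p] >= 1 for p in transitions[name]["in"])

def fire(name):
    """Consume one token from each input place, add one to each output place."""
    for p in transitions[name]["in"]:
        marking[p] -= 1
    for p in transitions[name]["out"]:
        marking[p] += 1

def simulate(steps=20, seed=1):
    rng = random.Random(seed)
    for step in range(steps):
        choices = [t for t in transitions if enabled(t)]
        if not choices:                       # deadlock: nothing can fire
            break
        t = rng.choice(choices)
        fire(t)
        print(f"step {step}: fired {t}, marking = {marking}")

if __name__ == "__main__":
    simulate()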

Relevance:

100.00%

Publisher:

Abstract:

Time management in civil construction projects usually relies on deterministic algorithms to calculate completion dates and on PERT algorithms to assess the probability of the project finishing by a given date. The results produced by the traditional algorithms show gaps relative to the deadlines observed in practice, which is why simulation has become an increasingly common tool in project management. The objective of this dissertation is to study the problem of project completion times by developing new calculation techniques that better reflect the durations found in real life. On this basis, a practical, spreadsheet-based tool for managing the duration of lean construction project activities is built, using discrete event simulation techniques based on probability distributions such as, for example, the beta distribution.
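
The dissertation builds its tool in a spreadsheet; the same calculation can be sketched in a few lines of code, assuming (hypothetically) a serial chain of activities with three-point estimates and the classic PERT beta shape parameters: sample each duration from the scaled beta distribution, repeat many times, and read the completion probability off the empirical distribution.

```python
import random

# Hypothetical serial activities: (optimistic, most likely, pessimistic) durations in days.
ACTIVITIES = [(4, 6, 10), (3, 5, 9), (7, 9, 15), (2, 3, 6)]

def pert_beta_sample(rng, a, m, b):
    """Sample a duration from a beta distribution stretched over [a, b],
    with the classic PERT shape parameters derived from the mode m."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + rng.betavariate(alpha, beta) * (b - a)

def simulate_completion(target_days, replications=10_000, seed=42):
    """Estimate P(project finishes within target_days) for serial activities."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(replications):
        total = sum(pert_beta_sample(rng, *act) for act in ACTIVITIES)
        if total <= target_days:
            hits += 1
    return hits / replications

if __name__ == "__main__":
    for target in (22, 24, 26, 28):
        print(f"P(finish <= {target} days) ~ {simulate_completion(target):.2f}")
```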

Relevance:

100.00%

Publisher:

Abstract:

Taking the manufacturing system of an automotive gearbox assembly line as the background, multi-agent and holonic manufacturing paradigms are applied to transform the traditional assembly line in order to improve its flexibility and reconfigurability. To provide an environment for analysing and validating the underlying framework and implementation of a reconfigurable assembly line based on the hybrid agent/holon concept, a digital simulation and verification platform for reconfigurable assembly lines, built with digital manufacturing technology, is proposed. Based on an analysis of the functional characteristics of a simulation platform oriented towards reconfigurable assembly lines, the framework of the digital simulation and verification platform is constructed. Key technologies in the development of the platform, including system integration, visual simulation and performance analysis of the reconfigurable assembly line, are studied, and finally an example system of the simulation platform is presented.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. METHODS: Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. RESULTS: Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. CONCLUSIONS: This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.

Relevance:

100.00%

Publisher:

Abstract:

Objective: Within the framework of a health technology assessment and using an economic model, to determine the most clinically and cost-effective policy of scanning and screening for fetal abnormalities in early pregnancy.

Design: A discrete event simulation model of 50,000 singleton pregnancies.

Setting: Maternity services in Scotland.

Population: Women during the first 24 weeks of their pregnancy.

Methods: The mathematical model was populated with data on uptake of screening, prevalence, detection and false positive rates for eight fetal abnormalities, and with costs for ultrasound scanning and serum screening. Inclusion of abnormalities was based on the relative prevalence and clinical importance of conditions and the availability of data. Six strategies for the prenatal identification of abnormalities, including combinations of first and second trimester ultrasound scanning and first and second trimester screening for chromosomal abnormalities, were compared.

Main outcome measures: The number of abnormalities detected and missed, the number of iatrogenic losses resulting from invasive tests, the total cost of strategies and the cost per abnormality detected were compared between strategies.

Results: First trimester screening for chromosomal abnormalities costs more than second trimester screening but results in fewer iatrogenic losses. Strategies which include a second trimester ultrasound scan result in more abnormalities being detected and have lower costs per anomaly detected.

Conclusions: The preferred strategy includes both first and second trimester ultrasound scans and a first trimester screening test for chromosomal abnormalities. It has been recommended that this policy be offered to all women in Scotland.

Relevance:

100.00%

Publisher:

Abstract:

High effectiveness and leanness of modern supply chains (SCs) increase their vulnerability, i.e. susceptibility to disturbances reflected in non-robust SC performances. Both the SC management literature and SC professionals indicate the need for the development of SC vulnerability assessment tools. In this article, a new method for vulnerability assessment, the VULA method, is presented. The VULA method helps to identify how much a company would underperform on a specific Key Performance Indicator in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision about whether process redesign is appropriate and what kind of redesign strategies should be used in order to increase the SC's robustness. The applicability of the VULA method is demonstrated in the context of a meat SC using discrete-event simulation to conduct the performance analysis.
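
The article defines the VULA method in full; the sketch below only illustrates the three questions it asks of a Key Performance Indicator series, namely by how much performance falls below its norm, how often such episodes occur and how long they last. The KPI values, the norm and the data layout are invented for the example.

```python
# Hypothetical weekly KPI values (e.g. service level, %) from a simulation run,
# and a performance norm below which the supply chain counts as underperforming.
kpi_series = [97, 96, 95, 88, 84, 90, 96, 97, 82, 80, 85, 95, 96, 97, 94]
NORM = 93

def vulnerability_profile(series, norm):
    """Summarise underperformance: magnitude, frequency and duration of
    episodes in which the KPI stays below the norm."""
    episodes, current = [], []
    for value in series:
        if value < norm:
            current.append(norm - value)          # shortfall in this period
        elif current:
            episodes.append(current)
            current = []
    if current:
        episodes.append(current)
    return {
        "how_often": len(episodes),
        "how_long_avg": sum(len(e) for e in episodes) / len(episodes) if episodes else 0,
        "how_much_max": max((max(e) for e in episodes), default=0),
    }

if __name__ == "__main__":
    print(vulnerability_profile(kpi_series, NORM))
```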

Relevance:

100.00%

Publisher:

Abstract:

The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How to systematically assess the impact of disturbances in (food) SC processes on the robustness of (food) SC performances?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and define the state of the art of knowledge on the related topics. For the second and third RQ, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs and to make a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature, we identified the main elements needed to achieve robust performances and structured them into a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessments, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific Key Performance Indicator (KPI) in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
To sum up the project, all findings were incorporated into an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of the SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance impact reductive redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies and repetition of the VULA method for the KPIs defined in Step 3.
Contributions of this research are listed as follows. First, we have identified emerging research areas: SC robustness and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and the specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs, while processes and SC networks are influenced by DMPs. Fourth, we developed specific metrics for vulnerability assessments, which serve as the basis of the VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.

DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.

SETTING: Primary and secondary care.

PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).

INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.

MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).

RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.

LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.

CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.

FUNDING: The National Institute for Health Research Health Technology Assessment Programme.

Relevance:

100.00%

Publisher:

Abstract:

The design of efficient assembly systems can significantly contribute to the profitability of products and the competitiveness of manufacturing industries. The configuration of an efficient assembly line can be supported by suitable methodologies and techniques, such as design for manufacture and assembly, assembly sequence planning, assembly line balancing, lean manufacturing and optimization techniques. In this paper, these methods are applied with reference to the industrial case study of the assembly line of a Skycar light aircraft. The assembly process sequence is identified taking into account the analysis of the assembly structure and the required precedence constraints, and diverse techniques are applied to optimize the assembly line performance. Different line configurations are verified through discrete event simulation to assess the potential increase of efficiency and throughput in a digital environment and to propose the most suitable configuration of the assembly line.
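
As a minimal illustration of the kind of discrete event check described above (not the Skycar study's actual model), the sketch below pushes jobs through a serial, unbuffered line with assumed station cycle times and blocking, and reports throughput so that alternative line configurations can be compared.

```python
def line_throughput(cycle_times, n_jobs=200):
    """Simulate a serial, unbuffered assembly line: a station can release a job
    only when the next station is free (blocking). Returns jobs per time unit."""
    n = len(cycle_times)
    free_at = [0.0] * n                      # time at which each station next becomes free
    finish = 0.0
    for _ in range(n_jobs):
        t = 0.0                              # job is ready to enter the line
        for i, ct in enumerate(cycle_times):
            t = max(t, free_at[i]) + ct      # wait for station i, then process
            if i + 1 < n:
                t = max(t, free_at[i + 1])   # blocked until the downstream station frees
            free_at[i] = t                   # station i is occupied until the job leaves
        finish = t
    return n_jobs / finish

if __name__ == "__main__":
    # Hypothetical station cycle times (minutes) for two candidate line balances.
    for config in ([12, 15, 11, 14], [13, 13, 13, 13]):
        print(config, "->", round(line_throughput(config) * 60, 2), "jobs/hour")
```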

Relevance:

100.00%

Publisher:

Abstract:

This Integration Insight provides a brief overview of the most popular modelling techniques used to analyse complex real-world problems, as well as some less popular but highly relevant techniques. The modelling methods are divided into three categories, with each encompassing a number of methods, as follows: 1) Qualitative Aggregate Models (Soft Systems Methodology, Concept Maps and Mind Mapping, Scenario Planning, Causal (Loop) Diagrams), 2) Quantitative Aggregate Models (Function fitting and Regression, Bayesian Nets, System of differential equations / Dynamical systems, System Dynamics, Evolutionary Algorithms) and 3) Individual Oriented Models (Cellular Automata, Microsimulation, Agent Based Models, Discrete Event Simulation, Social Network Analysis). Each technique is broadly described with example uses, key attributes and reference material.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To assess the efficiency of alternative monitoring services for people with ocular hypertension (OHT), a glaucoma risk factor.

DESIGN: Discrete event simulation model comparing five alternative care pathways: treatment at OHT diagnosis with minimal monitoring; biennial monitoring (primary and secondary care) with treatment if baseline predicted 5-year glaucoma risk is ≥6%; monitoring and treatment aligned to National Institute for Health and Care Excellence (NICE) glaucoma guidance (conservative and intensive).

SETTING: UK health services perspective.

PARTICIPANTS: Simulated cohort of 10 000 adults with OHT (mean intraocular pressure (IOP) 24.9 mm Hg (SD 2.4)).

MAIN OUTCOME MEASURES: Costs, glaucoma detected, quality-adjusted life years (QALYs).

RESULTS: Treating at diagnosis was the least costly and least effective in avoiding glaucoma and progression. Intensive monitoring following NICE guidance was the most costly and effective. However, considering a wider cost-utility perspective, biennial monitoring was less costly and provided more QALYs than NICE pathways, but was unlikely to be cost-effective compared with treating at diagnosis (£86 717 per additional QALY gained). The findings were robust to risk thresholds for initiating monitoring but were sensitive to treatment threshold, National Health Service costs and treatment adherence.

CONCLUSIONS: For confirmed OHT, glaucoma monitoring more frequently than every 2 years is unlikely to be efficient. Primary treatment and minimal monitoring (assessing treatment responsiveness (IOP)) could be considered; however, further data to refine glaucoma risk prediction models and to value patient preferences for treatment are needed. Consideration of innovative and affordable service redesign focused on treatment responsiveness rather than more glaucoma testing is recommended.