906 results for lot sizing and scheduling


Relevance:

100.00%

Abstract:

To achieve balanced production on a cold-rolling line, a feeding mix-ratio algorithm for unit production scheduling is proposed. First, the optimal buffer inventory over the planning horizon, averaged by probability, is computed from the production capacity of the units at each process stage, the product types, failures, and random disturbances during production; holding this inventory level allows the units to produce at a balanced rate. Second, given the current work-in-process inventory and taking into account unit capacities and product types, an online method is proposed for generating unit production schedules that balance the load of the units at the current stage. Finally, the two methods are combined and solved by iterative programmatic search, yielding an optimized feeding mix ratio that both balances the load of the units at the current stage and maintains the optimal inventory for the next stage, thus ensuring that balanced production of the cold-rolling line is feasible.
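
As an illustrative companion only, here is a minimal Python sketch of an iterative mix-ratio search of the kind described; all names and data are hypothetical, and the paper's probabilistic buffer-inventory model is not reproduced:

    # Toy iterative search for a feeding mix ratio (hypothetical names
    # throughout; the paper's probabilistic buffer model is not reproduced).
    def feeding_mix_ratio(capacities, target_buffer, current_buffer,
                          total_feed, iters=100, step=0.05):
        """Return per-product feed fractions for one planning period."""
        n = len(capacities)
        ratio = [1.0 / n] * n                       # start from a uniform mix
        for _ in range(iters):
            # Feed more of the products whose downstream buffer is short of
            # its target, less of those whose buffer already exceeds it.
            for i in range(n):
                deficit = target_buffer[i] - current_buffer[i]
                ratio[i] = max(0.0, ratio[i] + step * deficit / max(target_buffer[i], 1))
            # Cap each share by the producing unit's capacity, then renormalize.
            ratio = [min(r, capacities[i] / total_feed) for i, r in enumerate(ratio)]
            total = sum(ratio)
            ratio = [r / total for r in ratio]
        return ratio

    # Hypothetical data: three product types on one process stage.
    print(feeding_mix_ratio([120, 80, 60], [30, 20, 10], [35, 12, 8], 100))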

Relevance:

100.00%

Abstract:

As global market competition intensifies, upper-level planning and management systems (MRPII/ERP) are increasingly affected by the market, the adaptability of plans has become a prominent problem, and plans visibly fail to keep up with change. Improving the adaptability of production plans, increasing the flow of information from the underlying production process, and enhancing the timeliness and flexibility of planning have therefore become important research topics. The Manufacturing Execution System (MES), as a bridge between upper-level planning and lower-level control, can optimize the entire production activity from order release to product completion by conveying information, relying on accurate data to guide, initiate, respond to, and report on plant activities. Within MES, the shop-floor planning and scheduling system is the core technology and one of the key problems of modern production management, and researchers at home and abroad have studied planning and scheduling methods extensively.

During my graduate studies I participated in the research and development of the National 863 High-Tech Program project "Visual Monitoring and Execution Management System for Final Assembly Processes in the Automotive Industry". Taking the assembly MES project of Xi'an Fast Gear Co., Ltd. as the application background, my main work was to design the production event model and develop software for the SIA-MES runtime platform at the core of the project. On top of the SIA-MES platform, I studied the scheduling problem of mixed-model assembly lines, proposed a launch-sequencing algorithm that shortens the production cycle, designed the planning and scheduling system for the mixed-model assembly line, and independently completed the design and development of its software, which passed product testing by the Liaoning Provincial Supervision and Inspection Institute of Electronic Information Products and received their certification.

The main contents of this thesis are as follows. First, the origins and current state of MES and of mixed-model production in manufacturing are described. Second, the SIA-MES runtime platform is briefly introduced, with emphasis on its event-triggered communication mechanism and on how events support the planning and scheduling system; on this basis, the design goals of the planning and scheduling system are analyzed and set, the functional design of the system is completed accordingly, the mixed-model assembly line scheduling problem is studied, and different optimization methods are given for different stages of plan execution. Finally, the software implementation of the planning and scheduling system is presented, including the SOA-based implementation technology, the overall structure of the software system, and brief descriptions of the implementation of each module, covering class design and database design.
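
The abstract does not spell out the thesis' own sequencing algorithm; as a stand-in illustration of mixed-model launch sequencing, here is the classic "goal chasing" heuristic in Python, which keeps the cumulative production mix close to the overall demand ratio at every launch position:

    # Classic goal-chasing heuristic for mixed-model launch sequencing
    # (a well-known stand-in, not the thesis' algorithm).
    def goal_chasing(demand):
        """demand: {model: units}. Returns a launch sequence for the line."""
        total = sum(demand.values())
        ratio = {m: d / total for m, d in demand.items()}
        produced = {m: 0 for m in demand}
        sequence = []
        for k in range(1, total + 1):
            # Pick the model whose launch deviates least from the ideal
            # cumulative mix k * ratio at position k.
            best = min(
                (m for m in demand if produced[m] < demand[m]),
                key=lambda m: sum(
                    (produced[x] + (1 if x == m else 0) - k * ratio[x]) ** 2
                    for x in demand
                ),
            )
            produced[best] += 1
            sequence.append(best)
        return sequence

    print(goal_chasing({"A": 4, "B": 2, "C": 1}))  # ['A', 'B', 'A', 'C', 'A', 'B', 'A']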

Relevance:

100.00%

Abstract:

Scientists are faced with a dilemma: either they can write abstract programs that express their understanding of a problem, but which do not execute efficiently; or they can write programs that computers can execute efficiently, but which are difficult to write and difficult to understand. We have developed a compiler that uses partial evaluation and scheduling techniques to provide a solution to this dilemma.
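
As a toy illustration of the partial-evaluation idea (not the authors' compiler), the Python sketch below specializes a generic power routine for a statically known exponent, so the residual code contains no loop:

    # Partial evaluation in miniature: given the statically known input n,
    # evaluate away the loop and emit a specialized residual program.
    def make_power(n):
        """Specialize pow(x, n) for a known n, returning x -> x**n as
        straight-line multiplications (the 'residual program')."""
        body = "1"
        for _ in range(n):
            body = f"({body} * x)"
        return eval(f"lambda x: {body}")  # n=3 yields lambda x: (((1 * x) * x) * x)

    cube = make_power(3)
    print(cube(2))  # 8 -- the loop over n has been evaluated away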

Relevance:

100.00%

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Dental Medicine.

Relevance:

100.00%

Abstract:

We propose and evaluate an admission control paradigm for RTDBS, in which a transaction is submitted to the system as a pair of processes: a primary task, and a recovery block. The execution requirements of the primary task are not known a priori, whereas those of the recovery block are known a priori. Upon the submission of a transaction, an Admission Control Mechanism is employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its recovery block is completed (safe termination). Committed transactions bring a profit to the system, whereas a terminated transaction brings no profit. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. We describe a number of admission control strategies and contrast (through simulations) their relative performance.
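
A minimal sketch of one way such an admission test could be realized, assuming EDF-style reservation of the recovery blocks whose costs are known a priori (the class and method names are hypothetical, not the paper's protocol):

    # Hypothetical admission test: reserve time for each recovery block
    # (whose cost IS known a priori) under EDF, and admit only if every
    # reservation still meets its deadline, guaranteeing at least safe
    # termination for all admitted transactions.
    class AdmissionController:
        def __init__(self):
            self.admitted = []                # (deadline, recovery_cost)

        def try_admit(self, recovery_cost, deadline, now):
            # Simulate EDF over recovery blocks only.
            candidate = sorted(self.admitted + [(deadline, recovery_cost)])
            t = now
            for d, cost in candidate:
                t += cost
                if t > d:
                    return False              # a guarantee would break: reject
            self.admitted = candidate
            return True

    ac = AdmissionController()
    print(ac.try_admit(recovery_cost=2, deadline=10, now=0))   # True: admitted
    print(ac.try_admit(recovery_cost=9, deadline=10, now=0))   # False: rejected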

Relevance:

100.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels:

(1) Resource Management Services. Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services. Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure. To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework. A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best trade-off for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.

Relevance:

100.00%

Abstract:

We propose and evaluate admission control mechanisms for ACCORD, an Admission Control and Capacity Overload management Real-time Database framework (an architecture and a transaction model) for hard-deadline RTDB systems. The system architecture consists of admission control and scheduling components which provide early notification of failure to submitted transactions that are deemed not valuable or incapable of completing on time. In particular, our Concurrency Admission Control Manager (CACM) ensures that admitted transactions do not overburden the system by requiring a level of concurrency that is not sustainable. The transaction model consists of two components: a primary task and a compensating task. The execution requirements of the primary task are not known a priori, whereas those of the compensating task are known a priori. Upon the submission of a transaction, the Admission Control Mechanisms are employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its compensating task is completed (safe termination). Committed transactions bring a profit to the system, whereas a terminated transaction brings no profit. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. In that respect, we describe a number of concurrency admission control strategies and contrast (through simulations) their relative performance.

Relevance:

100.00%

Abstract:

We consider the problem of inverting experimental data obtained in light scattering experiments described by linear theories. We discuss applications to particle sizing and we describe fast and easy-to-implement algorithms which permit the extraction, from noisy measurements, of reliable information about the particle size distribution. © 1987, SPIE.
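
As one standard instance of such an inversion (not necessarily the authors' algorithm), the Python sketch below recovers a size distribution f from noisy data g = K f + noise by Tikhonov regularization; the kernel K and all data are made up:

    # Tikhonov-regularized inversion of a linear scattering model: a common,
    # easy-to-implement way to stabilize an ill-posed particle-sizing problem.
    import numpy as np

    def invert_sizes(K, g, alpha=1e-3):
        """Solve min_f ||K f - g||^2 + alpha ||f||^2 in closed form."""
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

    # Toy usage with a made-up kernel: rows = scattering angles, cols = size bins.
    rng = np.random.default_rng(0)
    K = rng.random((50, 20))
    f_true = np.maximum(0, rng.normal(1.0, 0.3, 20))
    g = K @ f_true + 0.01 * rng.normal(size=50)
    f_est = invert_sizes(K, g, alpha=1e-2)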

Relevance:

100.00%

Abstract:

The paper considers single machine due date assignment and scheduling problems with n jobs in which the due dates are to be obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints the problems under consideration are shown to be NP-hard in the strong sense even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and the due date assignment problems are proved solvable in polynomial time, provided that F is either the sum of linear functions or the sum of exponential functions; the running time of the algorithms can be reduced further if the jobs are independent.

Scope and purpose: We consider single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set large enough, the due dates can be easily met, producing a good image of the firm. This, however, may result in substantial holding costs for the finished products before they are delivered to the customer. The objective is to explore the trade-off between the size of the slack and the resulting holding costs for the early orders.
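
A toy Python sketch of the underlying model for independent jobs on a fixed sequence (illustrative only, not the paper's algorithms): due dates are d_j = p_j + q, feasibility requires no tardy jobs, and total earliness F grows as the slack q grows.

    # Due dates d_j = p_j + q; the schedule is feasible iff q >= C_j - p_j
    # for every job j, where C_j is the completion time in sequence order.
    def total_earliness(p, q):
        """F = sum over jobs of earliness (p_j + q) - C_j, or None if infeasible."""
        completion, F = 0, 0
        for pj in p:
            completion += pj
            earliness = (pj + q) - completion
            if earliness < 0:
                return None                   # a tardy job: q too small
            F += earliness
        return F

    p = [3, 1, 4]                             # processing times in sequence order
    q_min = max(sum(p[:j + 1]) - p[j] for j in range(len(p)))  # smallest feasible slack
    for q in range(q_min, q_min + 3):
        print(q, total_earliness(p, q))       # phi(F, q) trades F against q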

Relevance:

100.00%

Abstract:

A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time-consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
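
The paper's MPI/OpenMP code is not reproduced here; as a loose Python multiprocessing analogue of the array-partitioning idea, the sketch below splits the autocorrelation lags of an impulse response into blocks computed by independent worker processes:

    # Static partitioning of the lag axis across worker processes, one
    # contiguous block of lags per task (illustrative analogue only).
    import numpy as np
    from multiprocessing import Pool

    def acf_block(args):
        # Each worker computes R(k) = sum_t x[t] * x[t + k] for its block of lags.
        x, lags = args
        return [float(np.dot(x[:len(x) - k], x[k:])) for k in lags]

    def autocorrelation(x, nproc=4, block=64):
        tasks = [(x, range(s, min(s + block, len(x)))) for s in range(0, len(x), block)]
        with Pool(nproc) as pool:
            return np.concatenate([np.asarray(r) for r in pool.map(acf_block, tasks)])

    if __name__ == "__main__":
        x = np.exp(-np.arange(256) / 40.0)    # toy impulse response
        print(autocorrelation(x)[:5])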

Relevance:

100.00%

Abstract:

An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors that are commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the random nature of the carrier impact ionization process that generates the gain is inherently noisy and results in fluctuations not only in the gain but also in the time response. We have recently developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. The research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time-consuming. In this research, we describe our experiences in parallelizing the code in MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.

Relevance:

100.00%

Abstract:

A generic architecture for implementing a QR array processor in silicon is presented. This improves on previous research by considerably simplifying the derivation of timing schedules for a QR system implemented as a folded linear array, where account has to be taken of processor-cell latency and timing at the detailed circuit level. The architecture and scheduling derived have been used to create a generator for the rapid design of System-on-a-Chip (SoC) cores for QR decomposition. This is demonstrated through the design of a single-chip architecture implementing an adaptive beamformer for radar applications. Published in IEEE Transactions on Circuits and Systems Part II: Analog and Digital Signal Processing, April 2003 (not Express Briefs; Parts I and II of the journal have since been reorganized into Regular Papers and Express Briefs).
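
As a purely numerical reference for the operation such an array implements (not the silicon architecture itself), here is QR decomposition via Givens rotations in Python, the rotation being the cell-level computation QR arrays typically realize:

    # QR decomposition by Givens rotations: one rotation annihilates one
    # subdiagonal entry, mirroring the boundary/internal cell operations
    # of a systolic QR array.
    import numpy as np

    def givens_qr(A):
        """Return Q, R with A = Q @ R."""
        A = A.astype(float)
        m, n = A.shape
        Q = np.eye(m)
        for j in range(n):
            for i in range(m - 1, j, -1):
                a, b = A[i - 1, j], A[i, j]
                r = np.hypot(a, b)
                if r == 0.0:
                    continue
                c, s = a / r, b / r                 # rotation zeroing A[i, j]
                G = np.array([[c, s], [-s, c]])
                A[i - 1:i + 1, :] = G @ A[i - 1:i + 1, :]
                Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
        return Q, A

    A = np.random.rand(4, 3)
    Q, R = givens_qr(A)
    assert np.allclose(Q @ R, A)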

Relevance:

100.00%

Abstract:

The potential that laser-based particle accelerators offer to solve the sizing and cost issues arising with conventional proton therapy has generated great interest in understanding and developing laser ion acceleration, and in investigating the radiobiological effects induced by laser-accelerated ions. Laser-driven ions are produced in bursts of ultra-short duration, resulting in ultra-high dose rates, and an investigation was carried out at Queen's University Belfast into this virtually unexplored regime of cell radiobiology. It employed the TARANIS terawatt laser producing protons in the MeV range for irradiation, with dose rates exceeding 10^9 Gy/s in a single exposure. A clonogenic assay was implemented to analyse the biological effect of proton irradiation on V79 cells, which, when compared to data obtained with the same cell line irradiated with conventionally accelerated protons, showed no significant difference. A relative biological effectiveness of 1.4 ± 0.2 at 10% survival fraction was estimated from a comparison with a 225 kVp X-ray source. © 2013 SPIE.
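
Purely to illustrate how an RBE figure of this kind is derived (the linear-quadratic coefficients below are invented, not the measured V79 data), one fits survival curves S(D) = exp(-aD - bD^2) and takes the ratio of doses at equal survival:

    # RBE at a fixed survival fraction from fitted linear-quadratic curves.
    import math

    def dose_at_survival(a, b, s):
        # Solve exp(-a*D - b*D**2) = s for D > 0, i.e. b*D^2 + a*D + ln(s) = 0.
        return (-a + math.sqrt(a * a - 4 * b * math.log(s))) / (2 * b)

    d_xray   = dose_at_survival(a=0.15, b=0.03, s=0.10)  # reference X-rays (made-up fit)
    d_proton = dose_at_survival(a=0.25, b=0.04, s=0.10)  # test protons (made-up fit)
    print(round(d_xray / d_proton, 2))  # RBE = reference dose / test dose at equal survival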

Relevance:

100.00%

Abstract:

Multi-core and many-core platforms are becoming increasingly heterogeneous and asymmetric. This significantly increases the porting and tuning effort required for parallel codes, which in turn often leads to a growing gap between peak machine power and actual application performance. In this work a first step toward the automated optimization of high-level skeleton-based parallel code is discussed. The paper presents an abstract annotation model for skeleton programs aimed at formally describing suitable mappings of parallel activities onto a high-level platform representation. The derived mapping and scheduling strategies are used to generate optimized run-time code. © 2013 Springer-Verlag Berlin Heidelberg.

Relevance:

100.00%

Abstract:

This article reports a pilot evaluation of Comfort Care Rounds (CCRs), a strategy for addressing the palliative and end-of-life care educational and support needs of long-term care home staff. Using a qualitative descriptive design, semistructured individual and focus-group interviews were conducted to understand staff members' perspectives on, and feedback about, the implementation and application of CCRs. Study participants identified that effective advertising, interest, and assigning staff to attend CCRs facilitated their participation. The key barriers to attendance were the difficulty of balancing heavy workloads and scheduling logistics. Interprofessional team member representation was sought but was not consistent. Study participants recognized the benefits of attending; however, they offered feedback on how the scheduling, content, and focus could be improved. Overall, study participants found CCRs to be beneficial to their palliative and end-of-life care knowledge, practice, and confidence. However, they identified barriers and made recommendations, which warrant ongoing evaluation.