950 results for cycle time


Relevance: 60.00%

Publisher:

Abstract:

This PhD dissertation is framed in the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years, becoming a fully recognized subdiscipline of Operations Management. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. Its main contribution to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings, and it has been contrasted both with the existing literature and with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment, and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, solutions to these issues are developed. First, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Second, a methodology for calculating the required fleet size is elaborated (problem (1)); this methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, directions are pointed out for the further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed integer linear programming (MILP) models, which have been formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and Excel spreadsheets for data input and output presentation. The results are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making regarding the selection of the appropriate recovery policy given the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thus strengthening the robustness of the results obtained.
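As a rough illustration of problem (1), the fleet size of a reuse loop is often first approximated from cycle time and return rate before any optimization model is built. The Python sketch below is a minimal back-of-the-envelope calculation under our own assumptions; the parameter names and the safety factor are illustrative and are not the dissertation's methodology.

    import math

    def required_fleet_size(daily_issues, cycle_time_days, return_rate, safety_factor=1.1):
        """Approximate number of reusable articles (e.g. RTIs) a loop must hold.

        daily_issues:    articles shipped per day
        cycle_time_days: average days for one loop (issue -> use -> return -> ready)
        return_rate:     fraction of issued articles that eventually come back
        safety_factor:   buffer for demand variability (our assumption, not from the thesis)
        """
        if not 0 < return_rate <= 1:
            raise ValueError("return_rate must be in (0, 1]")
        # Stock tied up in circulation, inflated to cover shrinkage (lost articles).
        in_circulation = daily_issues * cycle_time_days / return_rate
        return math.ceil(in_circulation * safety_factor)

    # Example: 200 crates/day, a 12-day cycle and a 95% return rate -> 2779 crates
    print(required_fleet_size(200, 12, 0.95))

Shortening the cycle time or raising the return rate (problems (2) and (3) above) directly shrinks the required fleet, which is why the dissertation treats these levers together.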

Relevance: 60.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient hardware utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have produced accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the results. In this way, the number of noise sources active at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed confidence levels, and hence considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
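To make the Monte-Carlo word-length evaluation concrete, the sketch below estimates the round-off noise power of a candidate fractional word-length on a toy datapath. This is the inner evaluation that a greedy word-length search repeats many times, and it illustrates why relaxed confidence levels (fewer samples) pay off in early iterations. It is a generic illustration under our own assumptions (round-to-nearest quantization after every operator), not HOPLITE's API.

    import numpy as np

    def quantize(x, frac_bits):
        """Round-to-nearest fixed-point quantization to frac_bits fractional bits."""
        scale = 2.0 ** frac_bits
        return np.round(x * scale) / scale

    def mc_noise_power(frac_bits, n_samples=100000, seed=0):
        """Monte-Carlo estimate of the output round-off noise power of a toy datapath."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, n_samples)
        ref = x * x + 0.5 * x                       # double-precision reference
        xq = quantize(x, frac_bits)
        prod = quantize(xq * xq, frac_bits)         # each operator output is quantized
        out = quantize(prod + quantize(0.5 * xq, frac_bits), frac_bits)
        return float(np.mean((out - ref) ** 2))

    # Coarse runs (fewer samples, looser confidence) are enough to rank candidate
    # word-lengths early in a greedy search; tighten only near the final solution.
    for bits in (6, 8, 10, 12):
        print(bits, mc_noise_power(bits, n_samples=10000))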

Relevance: 60.00%

Publisher:

Abstract:

We localized the multicopy plasmid RK2 in Escherichia coli and found that the number of fluorescent foci observed in each cell was substantially less than the copy number of the plasmid, suggesting that many copies of RK2 are grouped into a few multiplasmid clusters. In minimal glucose media, the majority of cells had one or two foci, with a single focus localized near midcell, and two foci near the 1/4 and 3/4 cell positions. The number of foci per cell increased with cell length and with growth rate, and decreased upon entering stationary phase, suggesting a coordination of RK2 replication or segregation with the bacterial cell cycle. Time-lapse microscopy demonstrated that partitioning of RK2 foci is achieved by the splitting of a single focus into two or three smaller foci, which are capable of separating with rapid kinetics. A derivative of the high-copy-number plasmid pUC19 containing the lacO array was also localized by tagging with GFP-LacI. Whereas many of the cells contained numerous, randomly diffusing foci, most cells exhibited one or two plasmid clusters located at midcell or the cell quarter positions. Our results suggest a model in which multicopy plasmids are not always randomly diffusing throughout the cell as previously thought, but can be replicated and partitioned in clusters targeted to specific locations.

Relevance: 60.00%

Publisher:

Abstract:

The present study investigated the application of two types of AnSBBR (anaerobic sequencing batch and fed-batch biofilm reactors: one with liquid-phase recirculation and one with mechanical agitation) for biohydrogen production treating synthetic wastewater (based on cheese whey and on lactose, respectively). The AnSBBR with liquid-phase recirculation, the main study of this work, presented problems in hydrogen production when using cheese whey as substrate. Several alternatives were tested without success, such as adapting the biomass with pure, more easily degradable substrates, controlling the pH at very low values, and different forms of inoculation. The problem was solved by refrigerating the feed medium at 4°C to prevent fermentation in the storage vessel, removing the urea and nutrient supplementation, and periodically washing the support material to remove part of the biomass. In this way, signs of H2S production by the possible action of sulfate-reducing bacteria (SRB) were eliminated and stable hydrogen production was reached, without, however, completely eliminating methane, which was produced at low concentrations. Once stability was reached, the influence of influent substrate concentration, filling time and temperature on biohydrogen production in the AnSBBR with liquid-phase recirculation treating cheese whey was investigated. The study of influent concentration showed an optimum at 5400 mgCOD.L-1, reaching 0.80 mol H2.mol lactose-1 and 660 mL H2.L-1.d-1. The study of filling time showed similar results for the conditions analyzed. Regarding temperature, the best results were obtained at the lowest temperature tested, 15°C (1.12 mol H2.mol lactose-1 and 1080 mL H2.L-1.d-1), while no hydrogen was produced at the highest temperature tested (45°C). For the AnSBBR with mechanical agitation, a complementary study carried out because lactose is the main component of cheese whey, reactor performance was assessed according to the joint influence of cycle time (tC: 2, 3 and 4 h), influent concentration (CSTA: 3600-5400 mgCOD.L-1) and applied volumetric organic load (AVOL: 9.3, 12.3, 13.9, 18.5 and 27.8 mgCOD.L-1.d-1). Excellent results were obtained: carbohydrate (lactose) consumption always averaged above 90%, and biohydrogen production was stable under all conditions studied, with methane at low concentrations only in the condition with the highest AVOL. Decreasing tC showed a clear improvement trend in RMCRC,n (molar yield of hydrogen produced per carbohydrate removed) only for the conditions with lower CSTA, and there was a direct relation between CSTA and RMCRC,n for all values of tC except the 3 h cycle time, exactly where methane production occurred. The best RMCRC,n value obtained in the operation with lactose (1.65 mol H2.mol carbohydrate-1) was higher than those obtained in other studies using the same reactor configuration with sucrose as substrate. Phylogenetic analyses showed that most of the clones analyzed were similar to Clostridium. In addition, clones phylogenetically similar to the family Lactobacillaceae, specifically Lactobacillus rhamnosus, were observed at lower percentages in the reactor, as were clones with sequences similar to Acetobacter indonesiensis.
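For reference, the molar yield metric used above (mol H2 per mol lactose removed) can be computed from the measured gas volume and substrate consumption. A minimal sketch, assuming gas volumes at STP (22.414 L/mol) and the molar mass of lactose (342.3 g/mol); the example figures are illustrative, not data from the study.

    H2_MOLAR_VOLUME_L = 22.414    # L/mol at STP (assumes gas volumes reported at STP)
    LACTOSE_MOLAR_MASS = 342.3    # g/mol

    def molar_yield(v_h2_ml, lactose_removed_mg):
        """mol H2 produced per mol lactose removed."""
        mol_h2 = (v_h2_ml / 1000.0) / H2_MOLAR_VOLUME_L
        mol_lactose = (lactose_removed_mg / 1000.0) / LACTOSE_MOLAR_MASS
        return mol_h2 / mol_lactose

    # Illustrative figures only: 620 mL H2 while removing 11800 mg lactose -> ~0.80
    print(round(molar_yield(620, 11800), 2))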

Relevance: 60.00%

Publisher:

Abstract:

Reflecting the natural biology of mass-spawning fish, aquaculture production of fish larvae is often hampered by high and unpredictable mortality rates. The present study aimed to enhance larval performance and immunity via the oral administration of an immunomodulator, beta-glucan (MacroGard®), in turbot (Scophthalmus maximus). Rotifers (Brachionus plicatilis) were incubated with or without yeast beta-1,3/1,6-glucan in the form of MacroGard® at a concentration of 0.5 g/L. Rotifers were fed to first-feeding turbot larvae once a day. From 13 dph onwards, all tanks were additionally fed untreated Artemia sp. nauplii (1 nauplius/mL). Daily mortality was monitored, and larvae were sampled at 11 and 24 dph for the expression of 30 genes, trypsin activity and size measurements. Along with the feeding of beta-glucan, daily mortality was significantly reduced by ca. 15% and an alteration of the larval microbiota was observed. At 11 dph, gene expression of trypsin and chymotrypsin was elevated in the MacroGard®-fed fish, which resulted in heightened tryptic enzyme activity. No effect on genes encoding antioxidative proteins was observed, whilst the immune response was clearly modulated by beta-glucan. At 11 dph, complement component c3 was elevated, whilst cytokines, antimicrobial peptides, toll-like receptor 3 and heat shock protein 70 were not affected. At the later time point (24 dph), an anti-inflammatory effect in the form of a down-regulation of hsp70, tnf-alpha and il-1beta was observed. We conclude that the administration of beta-glucan induced an immunomodulatory response and could be used as an effective measure to increase survival in the rearing of turbot.

Relevance: 60.00%

Publisher:

Abstract:

Physical distribution plays an important role in contemporary logistics management. Both customer satisfaction and company competitiveness can be enhanced if the distribution problem is solved optimally. The multi-depot vehicle routing problem (MDVRP) is a practical logistics distribution problem consisting of three critical issues: customer assignment, customer routing, and vehicle sequencing. According to the literature, existing solution approaches for the MDVRP are not satisfactory because unrealistic assumptions were made in the first sub-problem of the MDVRP, the customer assignment problem. To refine the approaches, the focus of this paper is confined to this problem only. This paper formulates the customer assignment problem as a minimax-type integer linear programming model with the objective of minimizing the cycle time of the depots, in which setup times are explicitly considered. Since the model is proven to be NP-complete, a genetic algorithm is developed for solving the problem. The efficiency and effectiveness of the genetic algorithm are illustrated by a numerical example.
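A minimal sketch of the genetic-algorithm idea for this minimax assignment sub-problem: a chromosome maps each customer to a depot, and fitness is the largest depot workload including setup times. The encoding, operators and random instance below are our own assumptions, not necessarily the paper's.

    import random

    def depot_cycle_time(assignment, service, setup, n_depots):
        """Fitness: the maximum total workload (setup + service) over all depots."""
        load = [0.0] * n_depots
        for customer, depot in enumerate(assignment):
            load[depot] += setup[depot][customer] + service[customer]
        return max(load)

    def ga_assign(n_cust=30, n_depots=3, pop_size=40, generations=200, seed=1):
        rng = random.Random(seed)
        service = [rng.uniform(1, 5) for _ in range(n_cust)]
        setup = [[rng.uniform(0, 2) for _ in range(n_cust)] for _ in range(n_depots)]
        pop = [[rng.randrange(n_depots) for _ in range(n_cust)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda a: depot_cycle_time(a, service, setup, n_depots))
            elite = pop[:pop_size // 2]             # keep the better half
            children = []
            while len(elite) + len(children) < pop_size:
                p1, p2 = rng.sample(elite, 2)
                cut = rng.randrange(1, n_cust)      # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.2:              # mutation: move one customer
                    child[rng.randrange(n_cust)] = rng.randrange(n_depots)
                children.append(child)
            pop = elite + children
        best = min(pop, key=lambda a: depot_cycle_time(a, service, setup, n_depots))
        return best, depot_cycle_time(best, service, setup, n_depots)

    best, c = ga_assign()
    print("minimax depot cycle time:", round(c, 2))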

Relevance: 60.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to conceptualize e-business adoption and to generate understanding of the range of factors affecting the adoption process. The paper also aims at exploring the perceived impact of e-business adoption on logistics-related processes. Design/methodology/approach: Case study research, conducting in-depth interviews in eight companies. Findings: E-business adoption is not exclusively a matter of resources. Increased e-business adoption and impact are caused by increased operational compatibility, as well as increased levels of collaboration. In terms of e-business impact, this mainly refers to cycle time reductions and quality improvements, rather than the direct cost reductions reported by other authors. Research limitations/implications: The intrinsic weakness of the research method and the way concepts are operationalized limit the ability to generalize findings. Practical implications: Managers should emphasize developing their relationships with their suppliers/customers, in an effort to make joint e-business investments, and should aim to increase their partners' commitment to the use of these applications. Originality/value: This paper provides empirical evidence from a sector where limited research efforts have taken place. The explanations can be helpful to other researchers involved in understanding the adoption of e-business and its impact. © Emerald Group Publishing Limited.

Relevance: 60.00%

Publisher:

Abstract:

The Systems Engineering Group (SEG) at De Montfort University is developing the Boardman Soft Systems Methodology (BSSM), which allows complex human systems to be modelled; this work builds upon Checkland's Soft Systems Methodology (1981). The BSSM has been applied to the modelling of the systems engineering process as used in design and manufacturing companies. The BSSM is used to solicit information from a company, and this data is then transformed into systemic diagrams (systemigrams). These systemigrams are posited to be accurate and concise representations of the system which has been modelled. This paper describes the collaboration between SEG and a manufacturing company (MC) in Leicester, England. The purpose of this collaboration was twofold. First, it was to create an objective view of the MC's processes, in the form of systemigrams. It was important to have this modelled by a source outside the company, as it is difficult for people within a system being modelled to be unbiased. Second, it allowed a series of systemigrams to be produced which could then be subjected to simulation, for the purpose of aiding risk management decisions and reducing the project cycle time.

Relevance: 60.00%

Publisher:

Abstract:

If product cycle time reduction is the mission, and the multifunctional team is the means of achieving the mission, what then is the modus operandi by which this means is to accomplish its mission? This paper asserts that a preferred modus operandi for the multifunctional team is to adopt a process-oriented view of the manufacturing enterprise, and for this it needs the medium of a process map [16]. The substance of this paper is a methodology which enables the creation of such maps. Specific examples of process models drawn from the product development life cycle are presented and described in order to support the methodology's integrity and value. The specific deliverables we have so far obtained are: a methodology for process capture and analysis; a collection of process models spanning the product development cycle; and an engineering handbook which hosts these models and presents a computer-based means of navigating through these processes, in order to allow users a better understanding of the nature of the business, their role in it, and why the job that they do benefits the work of the company. We assert that this kind of thinking is the essence of concurrent engineering implementation, and further that the systemigram process models uniquely stimulate and organise such thinking.

Relevance: 60.00%

Publisher:

Abstract:

Competition in the 3PL market continues to intensify as providers compete to win and retain clients. 3PL providers are required to reduce costs while offering tailored, innovative logistical solutions in order to remain competitive. They can reduce costs through the consolidation of assets and the introduction of cross-docking activities, while innovative logistical services can be tailored to each client via the introduction of real-time data updates. This paper highlights that RFID-enabled returnable transport equipment (RTE) can assist improvement in both of these areas through increased network visibility. A framework is presented in which the 3PL provider focuses on asset reduction, asset utilisation, real-time data employment and RTE cycle time reduction in order to enhance competitiveness. © 2011 IEEE.

Relevance: 60.00%

Publisher:

Abstract:

Fierce competition within the third-party logistics (3PL) market has developed as providers compete to win customers and enhance their competitive advantage through cost-reduction plans and service differentiation. 3PL providers are expected to develop advanced technological and logistical service applications that can support cost reduction while increasing service innovation. To enhance competitiveness, this paper proposes the implementation of radio-frequency identification (RFID) enabled returnable transport equipment (RTE) in combination with the consolidation of network assets and cross-docking. RFID-enabled RTE can significantly improve network visibility of all assets with continuous real-time data updates. A four-level cyclic model aiding 3PL providers to achieve competitive advantage has been developed; its focus is to reduce assets, increase asset utilisation, reduce RTE cycle time and introduce real-time data in the 3PL network. Furthermore, this paper highlights the need for further research from the 3PL perspective. Copyright © 2013 Inderscience Enterprises Ltd.

Relevance: 60.00%

Publisher:

Abstract:

Two assembly line balancing problems are addressed. The first problem (called SALBP-1) is to minimize the number of linearly ordered stations for processing n partially ordered operations V = {1, 2, ..., n} within a fixed cycle time c. The second problem (called SALBP-2) is to minimize the cycle time for processing the partially ordered operations V on a fixed set of m linearly ordered stations. The processing time ti of each operation i ∈ V is known before solving SALBP-1 and SALBP-2. However, during the life cycle of the assembly line the values ti are definitely fixed only for the subset V \ Ṽ of automated operations. The other subset Ṽ ⊆ V consists of manual operations, for which it is impossible to fix exact processing times for the whole life cycle of the assembly line. If j ∈ Ṽ, then the operation time tj can differ from one cycle of the production process to the next. For an optimal line balance b of the assembly line with operation times t1, t2, ..., tn, we investigate the stability of its optimality with respect to possible variations of the processing times tj of the manual operations j ∈ Ṽ.
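As a small illustration of the stability question, the toy check below perturbs the manual operation times and asks how often a given line balance remains at least as good as a competing one. This is our own simplified Monte-Carlo check, not the paper's stability analysis, and it assumes both fixed balances are precedence-feasible.

    import random

    def cycle_time(balance, t):
        """Cycle time of a line balance: the maximum station load."""
        return max(sum(t[i] for i in station) for station in balance)

    def stability_test(b1, b2, t, manual, eps=0.2, trials=1000, seed=0):
        """Fraction of random perturbations of the manual operation times
        (each varied by +/- eps relative) under which balance b1 stays at
        least as good as balance b2."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            tt = list(t)
            for j in manual:
                tt[j] = t[j] * (1 + rng.uniform(-eps, eps))
            wins += cycle_time(b1, tt) <= cycle_time(b2, tt)
        return wins / trials

    # Toy instance: six operations, operations 2 and 5 manual; both balances
    # have cycle time 9 at the nominal times, but b1 dominates under variation.
    t = [4, 3, 5, 2, 6, 3]
    b1 = [[0, 1], [2, 3], [4, 5]]
    b2 = [[0, 2], [1, 3], [4, 5]]
    print(stability_test(b1, b2, t, manual=[2, 5]))    # -> 1.0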

Relevance: 60.00%

Publisher:

Abstract:

This paper presents for the first time the concept of measurement assisted assembly (MAA) and outlines the research priorities for realising this concept in industry. MAA denotes a paradigm shift in assembly for high-value and complex products and encompasses the development and use of novel metrology processes for the holistic integration and capability enhancement of key assembly and ancillary processes. A complete framework for MAA is detailed, showing how it can facilitate a step change in assembly process capability and efficiency for large and complex products, such as airframes, where traditional assembly processes exhibit the requirement for rectification and rework, use inflexible tooling and are largely manual, resulting in cost and cycle time pressures. The concept of MAA encompasses a range of innovative measurement-assisted processes which enable rapid part-to-part assembly, increased use of flexible automation, traceable quality assurance and control, reduced structure weight and improved levels of precision across the dimensional scales. A full-scale industrial trial of MAA technologies has been carried out on an experimental aircraft wing, demonstrating the viability of the approach, while studies within 140 smaller companies have highlighted the need for better adoption of existing process capability and quality control standards. The identified research priorities for MAA include the development of both frameless and tooling-embedded automated metrology networks. Other research priorities relate to the development of integrated dimensional variation management and thermal compensation algorithms, as well as measurement planning and inspection algorithms linking design to measurement and process planning. © Springer-Verlag London 2013.
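On the thermal-compensation priority: dimensional measurements are conventionally reported at the 20 °C reference temperature, so a basic compensation step rescales each measured length by the part's thermal expansion. A minimal sketch under the usual linear-expansion assumption; the expansion coefficient used is illustrative.

    def compensate_to_20c(measured_length_mm, part_temp_c, alpha_per_c=23e-6):
        """Correct a length measurement to the 20 degC reference temperature
        using the linear-expansion model. alpha_per_c is the coefficient of
        thermal expansion; 23e-6/degC (roughly aluminium) is illustrative."""
        return measured_length_mm / (1.0 + alpha_per_c * (part_temp_c - 20.0))

    # A 5 m aluminium component measured at 24 degC reads about 0.46 mm long:
    print(round(compensate_to_20c(5000.46, 24.0), 2))   # -> 5000.0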

Relevance: 60.00%

Publisher:

Abstract:

Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI2 detector; (2) VHDL modeling; (3) FPGA implementation; (4) in-circuit verification. The packages used included an EG&G HgI2 crystal manufactured in zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library, its FPGA implementation tool, and a high-density device (XC4003A). The results show: (1) reduced cycle time between design and hardware implementation; (2) unlimited re-design and implementation using static RAM technology; (3) customer-based design, verification, and system construction; (4) suitability for intelligent systems. These advantages surpass conventional chip-design technologies and methods in ease of use, cycle time, and cost for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.