934 results for Integrated production
Abstract:
Currently, the main source for the production of liquid transportation fuels is petroleum, the continued use of which faces many challenges, including depleting oil reserves, significant oil price rises, and environmental concerns over global warming, which is widely attributed to CO2 and other greenhouse gas emissions derived from fossil fuels. In this respect, lignocellulosic or plant biomass is a particularly interesting resource, as it is the only renewable source of organic carbon that can be converted into liquid transportation fuels. The gasification of biomass produces syngas, which can then be converted into synthetic liquid hydrocarbon fuels by means of the Fischer-Tropsch (FT) synthesis. This process has been widely considered an attractive option for producing clean liquid hydrocarbon fuels from biomass, which have been identified as promising alternatives to conventional fossil fuels such as diesel and kerosene. The product composition in FT synthesis is influenced by the type of catalyst and the reaction conditions used in the process. One of the issues facing this conversion process is the development of a technology that can be scaled down to match the scattered nature of biomass resources, including operation at lower pressures, without compromising liquid composition. The primary aims of this work were to experimentally explore FT synthesis at low pressures for the purpose of process down-scaling and cost reduction, and to investigate the potential for obtaining an intermediate FT synthetic crude liquid product that can be integrated into existing refineries under the range of process conditions employed. Two different fixed-bed micro-reactors were used for FT synthesis: a 2 cm3 reactor at the Federal University of Rio de Janeiro (UFRJ) and a 20 cm3 reactor at Aston University. The experimental work firstly involved the selection of a suitable catalyst from the three that were available. Secondly, a parameter study was carried out on the 20 cm3 reactor using the selected catalyst to investigate the influence of reactor temperature, reactor pressure, space velocity, the H2/CO molar ratio in the feed syngas and catalyst loading on the reaction performance, measured as CO conversion, catalyst stability, product distribution, product yields and liquid hydrocarbon product composition. From this parameter study a set of preferred operating conditions was identified for low-pressure FT synthesis. The three catalysts were characterised using BET, XRD, TPR and SEM. The catalyst selected was an unpromoted Co/Al2O3 catalyst. FT synthesis runs on the 20 cm3 reactor at Aston were conducted for 48 hours. Permanent gases and light hydrocarbons (C1-C5) were analysed with an online GC-TCD/FID at hourly intervals. The liquid hydrocarbons collected were analysed offline using GC-MS to determine fuel composition. The parameter study showed that CO conversion and liquid hydrocarbon yields increase with increasing reactor pressure up to around 8 bar, above which the effect of pressure is small. The parameters with the most significant influence on CO conversion, product selectivity and liquid hydrocarbon yields were reactor temperature and catalyst loading. The preferred reaction conditions identified in this research were: T = 230 °C, P = 10 bar, H2/CO = 2.0, WHSV = 2.2 h-1, and catalyst loading = 2.0 g.
Operation in the low range of pressures studied resulted in low CO conversions and liquid hydrocarbon yields, indicating that low-pressure BTL-FT operation may not be industrially viable: the trade-off of lower CO conversions and once-through liquid hydrocarbon product yields has to be weighed carefully against the potential cost savings of operating the process at lower pressures.
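As an illustration of the reaction-performance bookkeeping referred to above (CO conversion and carbon-basis product selectivity), the minimal Python sketch below shows how such quantities are typically computed from inlet and outlet molar flows; the flow values and product grouping are hypothetical examples, not data from this work.

```python
# Illustrative FT mass-balance bookkeeping (not code from the thesis).
# Flow names and numbers below are hypothetical example values.

def co_conversion(f_co_in: float, f_co_out: float) -> float:
    """CO conversion from inlet/outlet CO molar flows (mol/h)."""
    return (f_co_in - f_co_out) / f_co_in

def carbon_selectivity(product_flows: dict[str, tuple[int, float]],
                       co_converted: float) -> dict[str, float]:
    """Carbon-basis selectivity: S_i = n_i * F_i / (moles of CO converted)."""
    return {name: n_c * flow / co_converted
            for name, (n_c, flow) in product_flows.items()}

f_co_in, f_co_out = 0.100, 0.065           # mol/h, hypothetical
x_co = co_conversion(f_co_in, f_co_out)    # 0.35 -> 35 % CO conversion
# (average carbon number, molar flow) per lumped product group, hypothetical
products = {"CH4": (1, 0.005), "C2-C4": (3, 0.004), "C5+": (9, 0.002)}
sel = carbon_selectivity(products, f_co_in - f_co_out)
print(f"X_CO = {x_co:.2%}", {k: f"{v:.2%}" for k, v in sel.items()})
```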
Abstract:
Quality, production and technological innovation management rank among the most important matters of concern to modern manufacturing organisations. They can provide companies with the decisive means of gaining a competitive advantage, especially within industries where there is an increasing similarity in product design and manufacturing processes. The papers in this special issue of the International Journal of Technology Management have all been selected as examples of how aspects of quality, production and technological innovation can help to improve competitive performance. Most are based on presentations made at the UK Operations Management Association's Sixth International Conference, held at Aston University, at which the theme was 'Getting Ahead Through Technology and People'. At the conference itself over 80 papers were presented by authors from 15 countries around the world. Among the many topics addressed within the conference theme, technological innovation, quality and production management emerged as attracting the greatest concern and interest of delegates, particularly those from industry. For any new initiative to be implemented successfully, it should be led from the top of the organisation. Achieving the desired level of commitment from top management can, however, be a difficulty. In the first paper of this issue, Mackness investigates this question by explaining how systems thinking can help. In the systems approach, properties such as 'emergence', 'hierarchy', 'communication' and 'control' are used to assist top managers in preparing for change. Mackness's paper is then complemented by Iijima and Hasegawa's contribution, in which they investigate the development of Quality Information Management (QIM) in Japan. They present the idea of a Design Review and demonstrate how it can be used to trace and reduce quality-related losses. The next paper on the subject of quality is by Whittle and colleagues. It relates to total quality and the process of culture change within organisations. Using the findings of investigations carried out in a number of case study companies, they describe four generic models which have been identified as characterising methods of implementing total quality within existing organisation cultures. Boaden and Dale's paper also relates to the management of quality, but looks specifically at the construction industry, where it has been found there is still some confusion over the roles of Quality Assurance (QA) and Total Quality Management (TQM). They describe the results of a questionnaire survey of forty companies in the industry and compare them to similar work carried out in other industries. Szakonyi's contribution then completes this group of papers which all relate specifically to the question of quality. His concern is with the two ways in which R&D or engineering managers can work on improving quality. The first is by improving it in the laboratory, while the second is by working with other functions to improve quality in the company. The next group of papers in this issue all address aspects of production management. Umeda's paper proposes a new manufacturing-oriented simulation package for production management which provides important information for both the design and operation of manufacturing systems. A simulation for production strategy in a Computer Integrated Manufacturing (CIM) environment is also discussed.
This paper is then followed by a contribution by Tanaka and colleagues in which they consider loading schedules for manufacturing orders in a Material Requirements Planning (MRP) environment. They compare mathematical programming with a knowledge-based approach, and comment on their relative effectiveness for different practical situations. Engstrom and Medbo's paper then looks at a particular aspect of production system design, namely the question of devising group working arrangements for assembly with new product structures. Using the case of a Swedish vehicle assembly plant where long cycle assembly work has been adopted, they advocate the use of a generally applicable product structure which can be adapted to suit individual local conditions. In the last paper of this particular group, Tay considers how automation has affected production efficiency in Singapore. Using data from ten major industries, he identifies several factors which are positively correlated with efficiency, with capital intensity being of greatest interest to policy makers. The two following papers examine the case of electronic data interchange (EDI) as a means of improving the efficiency and quality of trading relationships. Banerjee and Banerjee consider a particular approach to material provisioning for production systems using orderless inventory replenishment. Using the example of a single supplier and multiple buyers, they develop an analytical model which is applicable to the exchange of information between trading partners using EDI. They conclude that EDI-based inventory control can be attractive from economic as well as other standpoints and that the approach is consistent with, and can be instrumental in, moving towards just-in-time (JIT) inventory management. Slacker's complementary viewpoint on EDI is from the perspective of the quality relationship between the customer and supplier. Based on the experience of Lucas, a supplier within the automotive industry, he concludes that both banks and trading companies must take responsibility for the development of payment mechanisms which satisfy the requirements of quality trading. The three final papers of this issue relate to technological innovation and are all country based. Berman and Khalil report on a survey of US technological effectiveness in the global economy. The importance of education is supported in their conclusions, although it remains unclear to what extent the US government can play a wider role in promoting technological innovation and new industries. The role of technology in national development is taken up by Martinsons and Valdemars, who examine the case of the former Soviet Union. The failure to successfully infuse technology into Soviet enterprises is seen as a factor in that country's demise, and it is anticipated that the newly liberalised economies will be able to encourage greater technological creativity. This point is then taken up in Perminov's concluding paper, which looks in detail at Russia. Here a similar analysis is made of the Soviet Union's technological decline, but a development strategy is also presented within the context of the change from a centralised to a free market economy. The papers included in this special issue of the International Journal of Technology Management each represent a unique and particular contribution to their own specific area of concern.
Together, however, they also demonstrate the general improvements in competitive performance that can be achieved through the application of modern principles and practice to the management of quality, production and technological innovation.
Abstract:
This paper presents an assessment of the technical and economic performance of thermal processes to generate electricity from a wood chip feedstock by combustion, gasification and fast pyrolysis. The scope of the work begins with the delivery of a wood chip feedstock at a conversion plant and ends with the supply of electricity to the grid, incorporating wood chip preparation, thermal conversion, and electricity generation in dual fuel diesel engines. Net generating capacities of 1–20 MWe are evaluated. The techno-economic assessment is achieved through the development of a suite of models that are combined to give cost and performance data for the integrated system. The models include feed pretreatment, combustion, atmospheric and pressure gasification, fast pyrolysis with pyrolysis liquid storage and transport (an optional step in de-coupled systems) and diesel engine or turbine power generation. The models calculate system efficiencies, capital costs and production costs. An identical methodology is applied in the development of all the models so that all of the results are directly comparable. The electricity production costs have been calculated for 10th plant systems, indicating the costs that are achievable in the medium term after the high initial costs associated with novel technologies have fallen. The costs converge at the larger scale with the mean electricity price paid in the EU by a large consumer, and there is therefore potential for fast pyrolysis and diesel engine systems to sell electricity directly to large consumers or for on-site generation. However, competition will be fierce at all capacities since electricity production costs vary only slightly between the four biomass to electricity systems that are evaluated. Systems de-coupling is one way that the fast pyrolysis and diesel engine system can distinguish itself from the other conversion technologies. Evaluations in this work show that situations requiring several remote generators are much better served by a large fast pyrolysis plant that supplies fuel to de-coupled diesel engines than by constructing an entire close-coupled system at each generating site. Another advantage of de-coupling is that the fast pyrolysis conversion step and the diesel engine generation step can operate independently, with intermediate storage of the fast pyrolysis liquid fuel, increasing overall reliability. Peak load or seasonal power requirements would also benefit from de-coupling, since a small fast pyrolysis plant could operate continuously to produce fuel that is stored for use in the engine on demand. Current electricity production costs for a fast pyrolysis and diesel engine system are 0.091/kWh at 1 MWe when learning effects are included. These systems are handicapped by the typical characteristics of a novel technology: high capital cost, high labour requirements, and low reliability. As such, the more established combustion and steam cycle produces lower-cost electricity under current conditions. The fast pyrolysis and diesel engine system is a low capital cost option, but it also suffers from relatively low system efficiency, particularly at high capacities. This low efficiency is the result of a low conversion efficiency of feed energy into the pyrolysis liquid, because of the energy retained in the char by-product. A sensitivity analysis has highlighted the strong impact of the fast pyrolysis liquids yield on electricity production costs.
The liquids yield should be set realistically during design, and it should be maintained in practice by careful attention to plant operation and feed quality. Another problem is the high power consumption during feedstock grinding. Efficiencies may be enhanced in ablative fast pyrolysis which can tolerate a chipped feedstock. This has yet to be demonstrated at commercial scale. In summary, the fast pyrolysis and diesel engine system has great potential to generate electricity at a profit in the long term, and at a lower cost than any other biomass to electricity system at small scale. This future viability can only be achieved through the construction of early plant that could, in the short term, be more expensive than the combustion alternative. Profitability in the short term can best be achieved by exploiting niches in the market place and specific features of fast pyrolysis. These include:
• countries or regions with fiscal incentives for renewable energy such as premium electricity prices or capital grants;
• locations with high electricity prices so that electricity can be sold direct to large consumers or generated on-site by companies who wish to reduce their consumption from the grid;
• waste disposal opportunities where feedstocks can attract a gate fee rather than incur a cost;
• the ability to store fast pyrolysis liquids as a buffer against shutdowns or as a fuel for peak-load generating plant;
• de-coupling opportunities where a large, single pyrolysis plant supplies fuel to several small and remote generators;
• small-scale combined heat and power opportunities;
• sales of the excess char, although a market has yet to be established for this by-product; and
• potential co-production of speciality chemicals and fuel for power generation in fast pyrolysis systems.
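To make the cost comparison concrete, the following is a minimal sketch of a levelised electricity-production-cost calculation of the kind such techno-economic models perform (annualised capital plus operating and feed costs, divided by annual output); the cost breakdown and plant numbers are illustrative assumptions, not taken from the paper's models.

```python
# Minimal levelised electricity-cost sketch (illustrative only; the numbers
# and cost breakdown are hypothetical, not taken from the paper's models).

def annuity_factor(rate: float, years: int) -> float:
    """Capital recovery factor used to annualise the capital cost."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def electricity_cost(capital: float, o_and_m: float, feed_cost_per_gj: float,
                     net_efficiency: float, capacity_mwe: float,
                     availability: float = 0.9, rate: float = 0.1,
                     life_years: int = 20) -> float:
    """Production cost in currency units per kWh delivered."""
    kwh_per_year = capacity_mwe * 1000 * 8760 * availability
    feed_gj_per_year = (kwh_per_year * 3.6e-3) / net_efficiency  # GJ of feed energy
    annual_cost = (capital * annuity_factor(rate, life_years) + o_and_m
                   + feed_gj_per_year * feed_cost_per_gj)
    return annual_cost / kwh_per_year

# Hypothetical 1 MWe fast pyrolysis + diesel engine plant
print(electricity_cost(capital=3.0e6, o_and_m=2.0e5,
                       feed_cost_per_gj=2.0, net_efficiency=0.25,
                       capacity_mwe=1.0))
```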
Abstract:
This study investigates the use of a Pyroformer intermediate pyrolysis system to produce alternative diesel engine fuels (pyrolysis oils) from various biomass and waste feedstocks, and the application of these pyrolysis oils in a diesel engine generating system for Combined Heat and Power (CHP) production. The pyrolysis oils were produced in a pilot-scale (20 kg/h) intermediate pyrolysis system. Comprehensive characterisations, with a view to use as engine fuels, were carried out on the sewage sludge and de-inking sludge derived pyrolysis oils. Both were found to be able to provide sufficient heat for fuelling a diesel engine. The pyrolysis oils also exhibited poor combustibility and high carbon deposition, but these problems could be mitigated by blending the pyrolysis oils with biodiesel (derived from waste cooking oil). Blends of SSPO (sewage sludge pyrolysis oil) and biodiesel (30/70 and 50/50 by volume) were tested in a 15 kWe Lister-type stationary generating system for up to 10 hours, with no apparent deterioration observed in engine operation. With 30% SSPO blended into biodiesel, the engine showed better overall performance (electrical efficiency), fuel consumption and exhaust emissions than with the 50% SSPO blend. An overall system analysis was carried out on a proposed integrated Pyroformer-CHP system. Combined with the experimental results, this was used to evaluate the costs of producing heat, power and char from wood pellets and sewage sludge. It is concluded that the overall system efficiencies for both types of plant can exceed 40%; however, the integrated CHP system is not economically viable because of the extraordinarily high project capital investment required.
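As a simple illustration of the overall system efficiency figure quoted above, the sketch below computes electrical, thermal and overall CHP efficiencies from energy flows expressed on a common basis; the numbers are hypothetical, not the study's measurements.

```python
# Simple CHP efficiency bookkeeping (illustrative values, not the study's data).

def chp_efficiencies(elec_out_kw: float, heat_out_kw: float,
                     fuel_in_kw: float) -> tuple[float, float, float]:
    """Return (electrical, thermal, overall) efficiencies for a CHP plant,
    with all streams as average power on the same (e.g. LHV) basis."""
    eta_e = elec_out_kw / fuel_in_kw
    eta_th = heat_out_kw / fuel_in_kw
    return eta_e, eta_th, eta_e + eta_th

# Hypothetical 15 kWe engine fed by a pyrolysis oil/biodiesel blend
eta_e, eta_th, eta_total = chp_efficiencies(elec_out_kw=15.0,
                                            heat_out_kw=28.0,
                                            fuel_in_kw=100.0)
print(f"electrical {eta_e:.0%}, thermal {eta_th:.0%}, overall {eta_total:.0%}")
```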
Abstract:
This paper describes how dimensional variation management could be integrated throughout design, manufacture and verification to improve quality while reducing cycle times and manufacturing cost in the Digital Factory environment. Initially, variation analysis is used to optimize tolerances during product and tooling design; this also results in the creation of a simplified representation of product key characteristics. This simplified representation can then be used to carry out measurability analysis and process simulation. The link established between the variation analysis model and measurement processes can subsequently be used throughout the production process to automatically update the variation analysis model in real time with measurement data. This ‘live’ simulation of variation during manufacture will allow early detection of quality issues and facilitate autonomous measurement-assisted processes such as predictive shimming. A study is described showing how these principles can be demonstrated using commercially available software combined with a number of prototype applications operating as discrete modules. The commercially available modules include Catia/Delmia for product and process design, 3DCS for variation analysis and Spatial Analyzer for measurement simulation. Prototype modules are used to carry out measurability analysis and instrument selection. Realizing the full potential of metrology in the Digital Factory will require that these modules are integrated, and a software architecture to facilitate this is described. Crucially, this integration must facilitate the use of real-time metrology data describing the emerging assembly to update the digital model.
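To illustrate the idea of a ‘live’ variation model updated with measurement data, here is a minimal one-dimensional stack-up sketch in which measured feature deviations progressively replace statistical tolerance contributions; it is purely illustrative and does not represent the 3DCS or Spatial Analyzer integration described in the paper.

```python
# Illustrative 1-D "live" variation stack-up: the nominal RSS prediction is
# updated as measured deviations for individual features arrive.
# Not the paper's software; feature names and values are hypothetical.
import math

class LiveStack:
    def __init__(self, tolerances: dict[str, float]):
        self.tolerances = tolerances          # feature -> sigma-equivalent tolerance
        self.measured: dict[str, float] = {}  # feature -> measured deviation

    def record_measurement(self, feature: str, deviation: float) -> None:
        self.measured[feature] = deviation

    def predicted_gap(self) -> tuple[float, float]:
        """Return (mean shift, remaining RSS uncertainty) for the assembly gap."""
        mean = sum(self.measured.values())
        unmeasured = [t for f, t in self.tolerances.items() if f not in self.measured]
        return mean, math.sqrt(sum(t * t for t in unmeasured))

stack = LiveStack({"skin": 0.20, "rib": 0.15, "spar": 0.10})
stack.record_measurement("skin", +0.12)        # metrology data arrives
print(stack.predicted_gap())                   # shift 0.12, remaining RSS ~0.18
```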
Abstract:
Tool life is an important factor to be considered during the optimisation of a machining process, since cutting parameters can be adjusted to optimise tool changing, reducing the cost and time of production. The performance of a tool is also directly linked to the generated surface roughness, which is important in cases where there are strict surface quality requirements. The prediction of tool life and the resulting surface roughness in milling operations has attracted considerable research effort. The research reported herein is focused on defining the influence of milling cutting parameters, such as cutting speed, feed rate and axial depth of cut, on three major tool performance parameters, namely tool life, material removal and surface roughness. The research seeks to define methods that allow the selection of optimal parameters for best tool performance when face milling 416 stainless steel bars. For this study the Taguchi method was applied, using an orthogonal array design that allows the entire parameter space to be studied with only a small number of experiments, representing savings in experimental cost and time. The findings were that cutting speed has the most influence on tool life and surface roughness, and very limited influence on material removal. Finally, tool performance can be judged either from tool life or from the volume of material removed.
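As an illustration of the Taguchi approach mentioned above, the sketch below computes larger-is-better signal-to-noise ratios for tool life over a small three-factor orthogonal array and ranks factor influence by the range of mean S/N per level; the array and response values are hypothetical, not the study's data.

```python
# Taguchi-style analysis sketch: larger-is-better S/N ratios for tool life
# measured over a small orthogonal array. All data below are hypothetical.
import math
from collections import defaultdict

# Each row: (cutting speed level, feed rate level, depth of cut level), tool life (min)
runs = [((1, 1, 1), 42.0), ((1, 2, 2), 38.5), ((1, 3, 3), 35.0),
        ((2, 1, 2), 30.0), ((2, 2, 3), 27.5), ((2, 3, 1), 29.0),
        ((3, 1, 3), 18.0), ((3, 2, 1), 20.5), ((3, 3, 2), 17.0)]

def sn_larger_is_better(y: float) -> float:
    """S/N = -10 log10(mean of 1/y^2); single replicate per run here."""
    return -10.0 * math.log10(1.0 / (y * y))

factor_names = ["cutting speed", "feed rate", "depth of cut"]
level_sn = [defaultdict(list) for _ in factor_names]
for levels, life in runs:
    sn = sn_larger_is_better(life)
    for i, lvl in enumerate(levels):
        level_sn[i][lvl].append(sn)

# Larger delta between level means -> stronger influence of that factor
for name, table in zip(factor_names, level_sn):
    means = {lvl: sum(v) / len(v) for lvl, v in table.items()}
    print(name, {lvl: round(m, 2) for lvl, m in sorted(means.items())},
          "delta =", round(max(means.values()) - min(means.values()), 2))
```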
Abstract:
Discrepancies of materials, tools and factory environments, as well as human intervention, make variation an integral part of the manufacturing process of any component. In particular, the assembly of large-volume aerospace parts is an area where significant levels of form and dimensional variation are encountered. Corrective actions can usually be taken to reduce the defects when the sources and levels of variation are known. For the unknown dimensional and form variations, a tolerancing strategy is typically put in place in order to minimize the effects of production inconsistencies related to geometric dimensions. This generates a challenging problem for the automation of the corresponding manufacturing and assembly processes. Metrology is becoming a major contributor to the ability to predict, in real time, automated assembly problems related to the dimensional variation of parts and assemblies. This is done by continuously measuring dimensions and coordinate points, focusing on the product's key characteristics. In this paper, a number of metrology-focused activities for large-volume aerospace products, including their implementation and application in the automation of manufacturing and assembly processes, are reviewed. This is done using a case study approach within the assembly of large-volume aircraft wing structures.
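One common building block in the metrology activities reviewed is rigid best-fit alignment of measured coordinate points to nominal key-characteristic positions; the sketch below shows the standard SVD (Kabsch) solution to this problem with hypothetical point sets, purely as an illustration rather than any specific implementation from the review.

```python
# Rigid best-fit of measured points to nominal key-characteristic points
# (Kabsch/SVD method). Illustrative of large-volume metrology alignment;
# the point sets below are hypothetical.
import numpy as np

def best_fit_transform(nominal: np.ndarray, measured: np.ndarray):
    """Return rotation R and translation t minimising ||R @ measured + t - nominal||."""
    cn, cm = nominal.mean(axis=0), measured.mean(axis=0)
    H = (measured - cm).T @ (nominal - cn)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cn - R @ cm
    return R, t

nominal = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
measured = nominal + np.array([0.5, -0.2, 0.1]) + np.random.normal(0, 0.01, nominal.shape)
R, t = best_fit_transform(nominal, measured)
residuals = (measured @ R.T + t) - nominal
print("RMS residual:", np.sqrt((residuals ** 2).mean()))
```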
Abstract:
The focus of this study is the process of servitization, in other words the emergence of integrated product-service systems that offer comprehensive solutions to customer demand. We review the factors behind servitization, whose origins reach back to the nineteenth century, and the development opportunities open to today's companies. We also address the capabilities required for such systems and the processes for building successful product-service systems. The objective of this literature review is to provide business development experts and top managers with ideas for successful development and for avoiding the possible risks along the way.
Abstract:
An integrated production–recycling system is investigated. A constant demand can be satisfied by production and recycling. Used items may be bought back and then recycled; products that are not recycled are disposed of. Two types of models are analysed. The first model examines and minimizes the EOQ-related cost. The second model generalizes the first one by additionally introducing linear waste disposal, recycling, production and buyback costs. This basic model was examined by the authors in a previous paper, where the main result was that a pure strategy (either production only or recycling only) is optimal. This paper extends the model to take quality into consideration: the quality of the bought-back products is now examined. In the earlier model it was assumed that all returned items are serviceable. This raises the question: who should control the quality of the returned items? If the suppliers inspect the quality of the reusable products, then the buyback rate is strictly smaller than one, α < 1. If the user does so, then not all returned items are recyclable, i.e. the use rate is smaller than one, δ < 1. Which of these control systems is the more cost-effective in this case?
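As a rough numerical illustration of comparing the two pure strategies, the sketch below evaluates an EOQ-style total cost for "production only" and "recycling only" with linear unit costs; the cost structure and parameter values are illustrative assumptions, not the authors' exact formulation.

```python
# Simplified cost comparison of the two pure strategies in an EOQ-style
# production/recycling setting. The cost structure below is an illustrative
# assumption, not the authors' exact model.
import math

def eoq_cost(demand: float, setup: float, holding: float) -> float:
    """Classic EOQ optimum: setup + holding cost per unit time = sqrt(2*d*K*h)."""
    return math.sqrt(2.0 * demand * setup * holding)

def pure_production(d, K_p, h, c_p):
    """All demand produced: EOQ cost plus linear production cost."""
    return eoq_cost(d, K_p, h) + c_p * d

def pure_recycling(d, K_r, h, c_r, buyback, disposal, delta):
    """All demand recycled: to obtain d recyclable units, d/delta items must be
    bought back (use rate delta < 1); non-recyclable returns are disposed of."""
    returned = d / delta
    return (eoq_cost(d, K_r, h) + buyback * returned + c_r * d
            + disposal * (returned - d))

d, h = 1000.0, 2.0                      # demand per period, holding cost per unit
prod = pure_production(d, K_p=100.0, h=h, c_p=5.0)
recy = pure_recycling(d, K_r=80.0, h=h, c_r=2.0, buyback=1.5, disposal=0.5, delta=0.8)
print(f"production: {prod:.0f}, recycling: {recy:.0f} -> prefer "
      + ("recycling" if recy < prod else "production"))
```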
Abstract:
Recent studies suggest that coastal ecosystems can bury significantly more C than tropical forests, indicating that continued coastal development and exposure to sea level rise and storms will have global biogeochemical consequences. The Florida Coastal Everglades Long Term Ecological Research (FCE LTER) site provides an excellent subtropical system for examining carbon (C) balance because of its exposure to historical changes in freshwater distribution and sea level rise and its history of significant long-term carbon-cycling studies. FCE LTER scientists used net ecosystem C balance and net ecosystem exchange data to estimate C budgets for riverine mangrove, freshwater marsh, and seagrass meadows, providing insights into the magnitude of C accumulation and lateral aquatic C transport. Rates of net C production in the riverine mangrove forest exceeded those reported for many tropical systems, including terrestrial forests, but there are considerable uncertainties around those estimates due to the high potential for gain and loss of C through aquatic fluxes. C production was approximately balanced between gain and loss in Everglades marshes; however, the contribution of periphyton increases uncertainty in these estimates. Moreover, while the approaches used for these initial estimates were informative, a resolved approach for addressing areas of uncertainty is critically needed for coastal wetland ecosystems. Once resolved, these C balance estimates, in conjunction with an understanding of drivers and key ecosystem feedbacks, can inform cross-system studies of ecosystem response to long-term changes in climate, hydrologic management, and other land use along coastlines.
Abstract:
Anthropogenic carbon dioxide (CO2) emissions are reducing the pH of the world's oceans. The plankton community is a key component driving biogeochemical fluxes, and the effect of increased CO2 on plankton is critical for understanding the ramifications of ocean acidification for global carbon fluxes. We determined the plankton community composition and measured primary production, respiration rates and carbon export (defined here as carbon sinking out of a shallow, coastal area) during an ocean acidification experiment. Mesocosms (~ 55 m3) were set up in the Baltic Sea with a gradient of CO2 levels initially ranging from ambient (~ 240 µatm), used as the control, to high CO2 (up to ~ 1330 µatm). The phytoplankton community was dominated by dinoflagellates, diatoms, cyanobacteria and chlorophytes, and the zooplankton community by protozoans, heterotrophic dinoflagellates and cladocerans. The plankton community composition was relatively homogeneous between treatments. Community respiration rates were lower at high CO2 levels. The carbon-normalized respiration was approximately 40 % lower in the high CO2 environment compared with the controls during the latter phase of the experiment. We did not, however, detect any effect of increased CO2 on primary production. This could be due to measurement uncertainty, as the measured total particulate carbon (TPC) and the combined results presented in this special issue suggest that the reduced respiration rate translated into higher net carbon fixation. The fraction of the measured total particulate carbon (TPC) accounted for by microscopy counts (of both phyto- and zooplankton) decreased from ~ 26 % at t0 to ~ 8 % at t31, probably driven by a shift towards smaller plankton (< 4 µm) not enumerated by microscopy. Our results suggest that reduced respiration led to increased net carbon fixation at high CO2. However, the increased primary production did not translate into increased carbon export and consequently did not act as a negative feedback mechanism against increasing atmospheric CO2 concentrations.
Abstract:
The human-induced rise in atmospheric carbon dioxide since the industrial revolution has led to increasing oceanic carbon uptake and changes in seawater carbonate chemistry, resulting in a lowering of surface water pH. In this study we investigated the effect of increasing CO2 partial pressure (pCO2) on concentrations of volatile biogenic dimethylsulfide (DMS) and its precursor dimethylsulfoniopropionate (DMSP), through monoculture studies and community pCO2 perturbation. DMS is a climatically important gas produced by many marine algae: it transfers sulfur into the atmosphere and is a major influence on biogeochemical climate regulation through its breakdown to sulfate and the subsequent formation of cloud condensation nuclei (CCN). Overall, production of DMS and DMSP by the coccolithophore Emiliania huxleyi strain RCC1229 was unaffected by growth at 900 µatm pCO2, but DMSP production normalised to cell volume was 12 % lower at the higher pCO2 treatment. These cultures were compared with community DMS and DMSP production during an elevated pCO2 mesocosm experiment, with the aim of studying E. huxleyi in the natural environment. Results contrasted with the culture experiments and showed reductions in community DMS and DMSP concentrations of up to 60 and 32 % respectively at pCO2 up to 3000 µatm, with changes attributed to poorer growth of DMSP-producing nanophytoplankton species, including E. huxleyi, and potentially increased microbial consumption of DMS and dissolved DMSP at higher pCO2. The differences in DMS and DMSP production between culture and community likely arise from pH affecting the inter-species responses between microbial producers and consumers.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
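To illustrate the idea of stitching interposer interconnects into continuous test paths, the sketch below enumerates paths over a small graph in which e-fuse links join consecutive interconnects; the graph model and pad names are hypothetical, not the dissertation's algorithm or netlist.

```python
# Illustrative enumeration of interposer test paths: interconnects are modelled
# as edges between die-footprint pads, and an e-fuse can stitch consecutive
# interconnects into one continuous test path. Graph and names are hypothetical.
from collections import defaultdict

interconnects = [("pad_A1", "pad_B1"), ("pad_B2", "pad_C1"), ("pad_C2", "pad_A2")]
efuse_links = [("pad_B1", "pad_B2"), ("pad_C1", "pad_C2")]  # programmable stitches

graph = defaultdict(list)
for u, v in interconnects + efuse_links:
    graph[u].append(v)

def test_paths(start: str, path=None):
    """Depth-first enumeration of maximal paths starting at a probe-able pad."""
    path = (path or []) + [start]
    if not graph[start]:
        yield path
        return
    for nxt in graph[start]:
        yield from test_paths(nxt, path)

for p in test_paths("pad_A1"):
    print(" -> ".join(p))   # pad_A1 -> pad_B1 -> pad_B2 -> pad_C1 -> pad_C2 -> pad_A2
```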
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
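The kind of composite cost function described above can be sketched as a weighted sum of test time and DfT overhead, as in the following illustrative example; the weights and candidate schedules are hypothetical.

```python
# Sketch of a composite test cost: weighted combination of test time and
# design-for-test overhead (extra TSVs and micro-bumps). Weights and candidate
# schedules below are hypothetical, not the dissertation's values.

def composite_cost(test_time_us: float, extra_tsvs: int, extra_microbumps: int,
                   w_time: float = 1.0, w_tsv: float = 5.0, w_bump: float = 2.0) -> float:
    return w_time * test_time_us + w_tsv * extra_tsvs + w_bump * extra_microbumps

candidates = {
    "path order A": dict(test_time_us=120.0, extra_tsvs=4, extra_microbumps=10),
    "path order B": dict(test_time_us=150.0, extra_tsvs=2, extra_microbumps=6),
}
best = min(candidates, key=lambda k: composite_cost(**candidates[k]))
print(best, {k: composite_cost(**v) for k, v in candidates.items()})
```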
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
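For readers unfamiliar with the BIST components mentioned, the following is a small software model of an LFSR pattern generator feeding a MISR signature compactor; the register width, feedback polynomial and stand-in circuit response are illustrative, not the dissertation's design.

```python
# Software model of a BIST pattern generator (LFSR) and response compactor (MISR).
# Register width and feedback taps are illustrative, not the dissertation's design.

def lfsr_step(state: int, taps: int, width: int) -> int:
    """Galois LFSR: shift right, XOR in the tap mask when the bit shifted out is 1."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state & ((1 << width) - 1)

def misr_step(state: int, response: int, taps: int, width: int) -> int:
    """MISR: like an LFSR, but the circuit response is XORed into the state each cycle."""
    return lfsr_step(state ^ response, taps, width)

WIDTH, TAPS = 8, 0b10111000            # x^8 + x^6 + x^5 + x^4 + 1 (illustrative)
lfsr, misr = 0xFF, 0x00
for _ in range(16):                    # apply 16 pseudo-random patterns
    pattern = lfsr
    response = pattern ^ 0b00000001    # stand-in for the interconnect under test
    misr = misr_step(misr, response, TAPS, WIDTH)
    lfsr = lfsr_step(lfsr, TAPS, WIDTH)
print(f"final signature: {misr:02x}")  # compared against the fault-free signature
```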
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer, while satisfying the practical constraint that the number of required test pins cannot exceed the number of pins available at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
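A pin-constrained grouping of tiles can be sketched as a simple bin-packing heuristic, as below; the tile pin counts, pin budget and first-fit-decreasing rule are illustrative assumptions rather than the proposed scheduling strategies themselves.

```python
# Illustrative pin-constrained grouping of SoC tiles for ExTest scheduling:
# first-fit-decreasing bin packing so each group fits the available test pins.
# Tile pin counts and the pin budget are hypothetical.

def group_tiles(tile_pins: dict[str, int], available_pins: int) -> list[list[str]]:
    groups: list[tuple[int, list[str]]] = []        # (pins used, tiles in group)
    for tile, pins in sorted(tile_pins.items(), key=lambda kv: -kv[1]):
        if pins > available_pins:
            raise ValueError(f"{tile} alone exceeds the pin budget")
        for i, (used, members) in enumerate(groups):
            if used + pins <= available_pins:       # first group that still fits
                groups[i] = (used + pins, members + [tile])
                break
        else:
            groups.append((pins, [tile]))           # open a new group (test session)
    return [members for _, members in groups]

tiles = {"cpu": 40, "gpu": 35, "dsp": 20, "modem": 25, "io": 10}
print(group_tiles(tiles, available_pins=64))
# e.g. [['cpu', 'dsp'], ['gpu', 'modem'], ['io']] -> three test sessions
```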
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
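Since neighboring blocks must not share a stagger value, the assignment can be viewed as a graph-coloring problem; the sketch below shows a greedy heuristic over a hypothetical block adjacency graph, illustrating the constraint rather than reproducing the dissertation's mathematical model or heuristic.

```python
# Greedy shift-clock stagger assignment: neighboring blocks (sharing power rails)
# must not get the same stagger value. The adjacency below is hypothetical.

def assign_staggers(adjacency: dict[str, set[str]], n_staggers: int) -> dict[str, int]:
    """Greedy coloring: process blocks by descending degree and pick the smallest
    stagger value not already used by an assigned neighbor."""
    order = sorted(adjacency, key=lambda b: -len(adjacency[b]))
    stagger: dict[str, int] = {}
    for block in order:
        used = {stagger[n] for n in adjacency[block] if n in stagger}
        free = [s for s in range(n_staggers) if s not in used]
        if not free:
            raise ValueError(f"need more than {n_staggers} stagger values at {block}")
        stagger[block] = free[0]
    return stagger

adjacency = {"cpu": {"gpu", "dsp"}, "gpu": {"cpu", "modem"},
             "dsp": {"cpu", "modem"}, "modem": {"gpu", "dsp"}, "io": set()}
print(assign_staggers(adjacency, n_staggers=4))
# e.g. {'cpu': 0, 'gpu': 1, 'dsp': 1, 'modem': 0, 'io': 0}
```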
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.