928 results for cost-aware process design
Abstract:
Purpose – To investigate the impact of performance measurement on the strategic planning process. Design/methodology/approach – A large-scale online survey was conducted with Warwick Business School alumni. The questionnaire was based on the Strategic Development Process model by Dyson. It was designed to map the current practice of strategic planning and to determine the factors most influencing the effectiveness of the process. All questions were closed-ended and used a seven-point Likert scale. The independent variables were grouped into four meaningful factors by factor analysis (Varimax, coefficient of rotation 0.4). The resulting factors were used to build stepwise regression models for the five assessments of the strategic planning process. Regression models were developed for the totality of the responses, comparing SMEs and large organizations, and comparing organizations operating in slowly and rapidly changing environments. Findings – The results indicate that performance measurement stands as one of the four main factors characterising the current practice of strategic planning. This research determined that complexity arising from organizational size and the rate of change in the sector creates variation in the impact of performance measurement on strategic planning. Large organizations and organizations operating in rapidly changing environments make greater use of performance measurement. Research limitations/implications – This research is based on subjective data; the conclusions therefore concern not the impact of the elements of the strategic planning process on organizational performance achievements, but the success/effectiveness of the strategic planning process itself. Practical implications – This research raises a series of questions about the use and potential impact of performance measurement, especially in the categories of organizations that are not significantly influenced by its utilisation. It contributes to the field of performance measurement impact. Originality/value – This research fills the gap in the literature concerning the lack of large-scale surveys on strategic development processes and performance measurement. It also contributes to the literature in this field by providing empirical evidence on the impact of performance measurement upon the strategic planning process.
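The factor-then-stepwise-regression procedure described in this abstract can be sketched in a few lines. The data, number of predictors and inclusion threshold below are invented for illustration; the survey's actual variables, rotation settings and regression results are not reproduced:

```python
import numpy as np

def r2(X, y):
    # Ordinary least squares with intercept; returns the R^2 of the fit
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(X, y, min_gain=0.01):
    # Greedily add the factor that most improves R^2; stop when the
    # improvement falls below min_gain
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, j = max((r2(X[:, selected + [j]], y), j) for j in remaining)
        if score - best < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best = score
    return selected, best

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                  # four hypothetical factor scores
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)
selected, score = forward_stepwise(X, y)     # recovers factors 0 and 2
```

The stopping rule on the R^2 gain stands in for the significance test a statistics package would apply at each step.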
Abstract:
Various micro-radial compressor configurations were investigated using one-dimensional meanline and computational fluid dynamics (CFD) techniques for use in a micro gas turbine (MGT) domestic combined heat and power (DCHP) application. Blade backsweep, shaft speed, and blade height were varied at a constant pressure ratio. Shaft speeds were limited to 220 000 r/min to enable the use of a turbocharger bearing platform. Off-design compressor performance was established and used to determine the MGT performance envelope; this in turn was used to assess potential cost and environmental savings in a heat-led DCHP operating scenario within the target market of a detached family home. A low target stage pressure ratio provided an opportunity to reduce diffusion within the impeller. Critically for DCHP, this produced very regular flow, which improved impeller performance over a wider operating envelope. The best performing impeller was a low-speed (170 000 r/min), low-backsweep (15°) configuration producing 71.76 per cent stage efficiency at a pressure ratio of 2.20. This produced an MGT design point system efficiency of 14.85 per cent at 993 W, matching prime movers in the latest commercial DCHP units. Cost and CO2 savings were 10.7 per cent and 6.3 per cent, respectively, for annual power demands of 17.4 MWht and 6.1 MWhe compared to a standard condensing boiler (with grid) installation. The maximum cost saving (on design point) was 14.2 per cent for annual power demands of 22.62 MWht and 6.1 MWhe, corresponding to an 8.1 per cent CO2 saving. When sizing the system, maximum savings were found with larger heat demands. Once sized, maximum savings could be made by encouraging more electricity export, either by reducing household electricity consumption or by increasing machine efficiency.
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. Maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. Gas yield for AHR gasification was 78 wt%, with 8 wt% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification in 5 vol% O2-95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, still below the value typical of steam gasification owing to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
Activation energy (EA) and pre-exponential factor (k0, in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. Activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. Combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
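The Arrhenius parameters reported above translate directly into rate constants. The sketch below evaluates k = k0 exp(-EA/RT) at an assumed illustrative temperature; the temperature is not a condition stated in the thesis:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius_k(ln_k0, ea_kJ_mol, T):
    """Rate constant k = k0 * exp(-EA / (R*T)), with EA given in kJ/mol."""
    return math.exp(ln_k0) * math.exp(-ea_kJ_mol * 1000.0 / (R * T))

# (ln k0, EA in kJ/mol) as reported from the random pore model fits
params = {
    "pyrolysis":    (14, 80),
    "gasification": (13, 69),
    "combustion":   (8, 42),
}
T = 773.15  # K; an assumed evaluation temperature (500 degC), for illustration
rates = {name: arrhenius_k(ln_k0, ea, T) for name, (ln_k0, ea) in params.items()}
```

At this temperature the gasification rate constant comes out highest, reflecting its combination of moderate activation energy and large pre-exponential factor.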
Abstract:
Introduction: Production of functionalised particles using dry powder coating is a one-step, environmentally friendly process that paves the way for the development of particles with targeted properties and diverse functionalities. Areas covered: Applying first principles of physical science for powders, fine guest particles can be homogeneously dispersed over the surface of larger host particles to develop functionalised particles. Multiple functionalities can be modified, including flowability, dispersibility, fluidisation, homogeneity, content uniformity and dissolution profile. The current publication seeks to understand the fundamental underpinning principles and science governing the dry coating process, to evaluate key technologies developed to produce functionalised particles, outlining their advantages, limitations and applications, and to discuss in detail the resultant functionalities and their applications. Expert opinion: Dry particle coating is a promising solvent-free manufacturing technology to produce particles with targeted functionalities. Progress within this area requires the development of continuous processing devices that can overcome challenges encountered with current technologies, such as heat generation and particle attrition. Growth within this field requires extensive research to further understand the impact of process design and material properties on the resultant functionalities.
Abstract:
The conventional, geometrically lumped description of the physical processes inside a high shear granulator is not reliable for process design and scale-up. In this study, a compartmental Population Balance Model (PBM) with spatial dependence is developed and validated in two lab-scale high shear granulation processes using a 1.9L MiPro granulator and 4L DIOSNA granulator. The compartmental structure is built using a heuristic approach based on computational fluid dynamics (CFD) analysis, which includes the overall flow pattern, velocity and solids concentration. The constant volume Monte Carlo approach is implemented to solve the multi-compartment population balance equations. Different spatial dependent mechanisms are included in the compartmental PBM to describe granule growth. It is concluded that for both cases (low and high liquid content), the adjustment of parameters (e.g. layering, coalescence and breakage rate) can provide a quantitative prediction of the granulation process.
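As a rough illustration of the Monte Carlo machinery behind such population balance solutions, the sketch below simulates pure coalescence with a size-independent kernel in a single compartment. This is a minimal assumption-laden toy: the actual model tracks multiple compartments with spatially dependent layering, coalescence and breakage rates, none of which is reproduced here:

```python
import random

def coalesce(volumes, target_fraction=0.5, seed=42):
    """Monte Carlo coalescence in a fixed physical volume: repeatedly merge
    two randomly chosen particles (size-independent kernel) until the
    population shrinks to target_fraction of its initial size.
    Total particle volume is conserved exactly."""
    rng = random.Random(seed)
    v = list(volumes)
    n_stop = int(len(v) * target_fraction)
    while len(v) > n_stop:
        i, j = rng.sample(range(len(v)), 2)  # two distinct particles
        v[i] += v[j]                         # coalescence event
        v.pop(j)                             # the pair becomes one granule
    return v

start = [1.0] * 1000          # monodisperse initial population (arbitrary units)
end = coalesce(start)         # 500 granules, same total volume, broader sizes
```

In the full compartmental scheme, each compartment runs events like these with its own rate constants, and particles are exchanged between compartments according to the CFD-derived flow pattern.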
Abstract:
In Marxist frameworks “distributive justice” depends on extracting value through a centralized state. Many new social movements—peer to peer economy, maker activism, community agriculture, queer ecology, etc.—take the opposite approach, keeping value in its unalienated form and allowing it to freely circulate from the bottom up. Unlike Marxism, there is no general theory for bottom-up, unalienated value circulation. This paper examines the concept of “generative justice” through an historical contrast between Marx’s writings and the indigenous cultures that he drew upon. Marx erroneously concluded that while indigenous cultures had unalienated forms of production, only centralized value extraction could allow the productivity needed for a high quality of life. To the contrary, indigenous cultures now provide a robust model for the “gift economy” that underpins open source technological production, agroecology, and restorative approaches to civil rights. Expanding Marx’s concept of unalienated labor value to include unalienated ecological (nonhuman) value, as well as the domain of freedom in speech, sexual orientation, spirituality and other forms of “expressive” value, we arrive at an historically informed perspective for generative justice.
Abstract:
This work addresses the problem of detecting human behavioural anomalies in crowded surveillance environments. We focus in particular on the problem of detecting subtle anomalies in a behaviourally heterogeneous surveillance scene. To reach this goal we implement a novel unsupervised context-aware process. We propose and evaluate a method of utilising social context and scene context to improve behaviour analysis. We find that in a crowded scene the application of Mutual Information based social context makes it possible to prevent self-justifying groups and to propagate anomalies through a social network, granting a greater anomaly detection capability. Scene context uniformly improves the detection of anomalies in both datasets. The strength of our contextual features is demonstrated by the detection of subtly abnormal behaviours, which otherwise remain indistinguishable from normal behaviour.
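Mutual information between two discretised behaviour signals, the quantity underlying the social-context feature above, can be computed from a joint occurrence table. The two toy tables below (independent motion versus lockstep motion of two individuals) are illustrative assumptions, not data from the paper:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in nats) from a 2-D joint count table."""
    p = joint / joint.sum()                      # joint probabilities
    px = p.sum(axis=1, keepdims=True)            # marginal of variable X
    py = p.sum(axis=0, keepdims=True)            # marginal of variable Y
    nz = p > 0                                   # skip zero cells in the log
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Two individuals whose discretised states co-occur independently: MI ~ 0
independent = np.outer([0.5, 0.5], [0.5, 0.5]) * 100
# Two individuals moving in lockstep (a candidate social group): MI = ln 2
lockstep = np.array([[50.0, 0.0], [0.0, 50.0]])
```

A high MI between two tracks suggests correlated behaviour, which is the cue for grouping them into a social context rather than judging each in isolation.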
Abstract:
The mixing performance of three passive milli-scale reactors with different geometries was investigated at different Reynolds numbers. The effects of design and operating characteristics such as mixing channel shape and volume flow rate were investigated. The main objective of this work was to demonstrate a process design method that uses Computational Fluid Dynamics (CFD) for modeling and Additive Manufacturing (AM) technology for manufacture. The reactors were designed and simulated using SolidWorks and Fluent 15.0 software, respectively. Manufacturing of the devices was performed with an EOS M-series AM system. Step response experiments with distilled Millipore water and sodium hydroxide solution provided time-dependent concentration profiles. Villermaux-Dushman reaction experiments were also conducted for additional verification of the CFD results and for mixing efficiency evaluation of the different geometries. Time-dependent concentration data and reaction evaluation showed that the performance of the AM-manufactured reactors matched the CFD results reasonably well. The proposed design method allows the implementation of new and innovative solutions, especially in the process design phase, for industrial-scale reactor technologies. In addition, rapid implementation is another advantage, owing to the virtual flow design and the fast manufacturing, which use the same geometric file formats.
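The Reynolds numbers used to characterise such milli-channel flows follow from channel geometry and flow rate alone. The fluid properties and channel dimensions below are assumed for illustration and are not the paper's values:

```python
def mean_velocity(q_m3_s, area_m2):
    """Mean velocity from volume flow rate and channel cross-section."""
    return q_m3_s / area_m2

def reynolds_number(rho, u, d_h, mu):
    """Channel Reynolds number Re = rho * u * d_h / mu,
    with d_h the hydraulic diameter."""
    return rho * u * d_h / mu

# Assumed example: water at 20 degC in a 1 mm square channel at 60 mL/min
rho, mu = 998.0, 1.002e-3     # kg/m^3, Pa*s
side = 1e-3                   # m; for a square channel, d_h = side
q = 60e-6 / 60.0              # m^3/s (60 mL/min)
u = mean_velocity(q, side * side)
re = reynolds_number(rho, u, side, mu)   # ~1000: laminar channel flow
```

With Re in the laminar range, passive geometries (bends, splits, obstacles) must supply the transverse motion that turbulence would otherwise provide, which is why channel shape dominates mixing performance here.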
Abstract:
Designing for users rather than with users is still a common practice in technology design and innovation, as opposed to taking them on board in the process. Design for inclusion aims to define and understand end-users, their needs and context of use, and, by doing so, ensure that end-users are catered for and included, while the results are geared towards universality of use. We describe the central role of end-user and designer participation, immersion and perspective in building user-driven solutions. These approaches provided a critical understanding of the counterpart's role. Designers could understand what the users' needs were, experience physical impairments, and see the interaction with the environment from the other's perspective. Users could understand the challenges of designing for physical impairments, build a sense of ownership of the technology and explore it from a creative perspective. The understanding of the peer's role (user and designer), needs and perspective enhanced user participation and inclusion.
Abstract:
Mass Customization (MC) is not a mature business strategy, and hence it is not clear that any single operational model, or small group of models, is dominant. Companies tend to approach MC from either a mass production or a customization origin, and this in itself gives reason to believe that several operational models will be observable. This paper reviews actual and theoretical fulfilment systems that enterprises could apply when offering a pre-engineered catalogue of customizable products and options. The issues considered are: how product flows are structured in relation to processes, inventories and decoupling point(s); the characteristics of the order fulfilment (OF) process that inhibit or facilitate fulfilment; the logic of how products are allocated to customers; and the customer factors that influence OF process design and operation. Diversity in order fulfilment structures is expected, and is found in the literature. The review has identified four structural forms that have been used in a Catalogue MC context: fulfilment from stock; fulfilment from a single fixed decoupling point; fulfilment from one of several fixed decoupling points; and fulfilment from several locations, with floating decoupling points. From the review it is apparent that producers are being imaginative in coping with the demands of high variety, high volume, customization and short lead times. These demands have encouraged the relationship between product, process and customer to be re-examined. Not only has this strengthened interest in commonality and postponement, but, as reported in the paper, it has led to the re-engineering of the order fulfilment process to create models with multiple fixed decoupling points and the floating decoupling point system.
Abstract:
The knowledge of the liquid-liquid equilibria (LLE) between ionic liquids (ILs) and water is of utmost importance for environmental monitoring, process design and optimization. Therefore, in this work, the mutual solubilities with water, for the ILs combining the 1-methylimidazolium, [C(1)im](+); 1-ethylimidazolium, [C(2)im](+); 1-ethyl-3-propylimidazolium, [C(2)C(3)im](+); and 1-butyl-2,3-dimethylimidazolium, [C(4)C(1)C(1)im](+) cations with the bis(trifluoromethylsulfonyl)imide anion, were determined and compared with the isomers of the symmetric 1,3-dialkylimidazolium bis(trifluoromethylsulfonyl)imide ([C(n)C(n)im][NTf2], with n = 1-3) and of the asymmetric 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([C(n)C(1)im][NTf2], with n = 2-5) series of ILs. The results obtained provide a broad picture of the impact of the structural isomerism of the IL cation, including the number of alkyl side chains at the cation, on the water-IL mutual solubilities. Despite the hydrophobic behaviour associated with the [NTf2](-) anion, the results show a significant solubility of water in the IL-rich phase, while the solubility of ILs in the water-rich phase is much lower. The thermodynamic properties of solution indicate that the solubility of ILs in water is entropically driven and highly influenced by the cation size. Using the results obtained here in addition to literature data, a correlation between the solubility of [NTf2]-based ILs in water and their molar volume is proposed for a large range of cations. The COnductor-like Screening MOdel for Real Solvents (COSMO-RS) was also used to estimate the LLE of the investigated systems and proved to be a useful predictive tool for the a priori screening of ILs, aimed at finding suitable candidates before extensive experimental measurements.
Abstract:
The project goal was to determine plant operations and maintenance workers' level of exposure to mercury during routine and non-routine (i.e. turnarounds and inspections) maintenance events in eight gas processing plants. The project team prepared sampling and analysis plans tailored to each plant's process design and scheduled maintenance events. Occupational exposure sampling and monitoring efforts were focused on the measurement of mercury vapor concentration in worker breathing zone air during specific maintenance events, including pipe scrapping, process filter replacement, and process vessel inspection. Similar exposure groups were identified, and worker breathing zone and ambient air samples were collected and analyzed for total mercury. Occupational exposure measurement techniques included portable field monitoring instruments, standard passive and active monitoring methods, and an emerging passive absorption technology. Process sampling campaigns were focused on inlet gas streams, mercury removal unit outlets, treated gas, acid gas and sales gas. The results were used to identify process areas with increased potential for mercury exposure during maintenance events. Sampling methods used for the determination of total mercury in gas phase streams were based on USEPA Methods 30B, 1631 and 1669. The results of four six-week-long sampling campaigns have been evaluated and some conclusions and recommendations have been made. The author's role in this project included the direction of all field phases of the project and the development and implementation of the sampling strategy. Additionally, the author participated in the development and implementation of the Quality Assurance Project Plan, Data Quality Objectives, and Similar Exposure Groups identification. All field-generated data were reviewed by the author along with laboratory reports in order to generate conclusions and recommendations.
Abstract:
Companies operating in the wood processing industry need to increase their productivity by implementing automation technologies in their production systems. Increasing global competition and rising raw material prices challenge their competitiveness. Yet, too extensive automation brings risks such as a deterioration in situation awareness and operator deskilling. The concept of Levels of Automation is generally seen as a means to achieve a balanced task allocation between the operators' skills and competences and the need for automation technology relieving humans of repetitive or hazardous work activities. The aim of this thesis was to examine to what extent existing methods for assessing Levels of Automation in production processes are applicable in the wood processing industry when focusing on an improved competitiveness of production systems. This was done by answering the following research questions (RQ): RQ1: What method is most appropriate for measuring Levels of Automation in the wood processing industry? RQ2: How can the measurement of Levels of Automation contribute to an improved competitiveness of the wood processing industry's production processes? Literature reviews were used to identify the main characteristics of the wood processing industry affecting its automation potential, and appropriate assessment methods for Levels of Automation, in order to answer RQ1. When selecting the most suitable method, factors such as relevance to the target industry, application complexity and the operational level the method addresses were important. The DYNAMO++ method, which covers both a rather quantitative technical-physical dimension and a more qualitative social-cognitive dimension, was seen as most appropriate when taking these factors into account.
To answer RQ2, a case study was undertaken at a major Swedish manufacturer of interior wood products to show how the measurement of Levels of Automation can contribute to an improved competitiveness of the wood processing industry. The focus was on the task level on the shop floor, and concrete improvement suggestions were elaborated after applying the measurement method for Levels of Automation. The main aspects considered for generalization were enhancements regarding ergonomics in process design and cognitive support tools for shop-floor personnel through task standardization. Furthermore, difficulties regarding the automation of grading and sorting processes, due to the heterogeneous material properties of wood, argue for a suitable arrangement of human intervention options in terms of work task allocation. The application of a modified version of DYNAMO++ during the case study revealed its pros and cons: it affords high operator involvement in the improvement process, but is distinctly predisposed to application in assembly systems.
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of welded structures have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
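A welding time model of the kind described can be fitted by ordinary linear regression. The model form, synthetic data and coefficients below are purely illustrative and do not reproduce the Finnish models or their predictor set:

```python
import numpy as np

def fit_welding_time(X, t):
    """Least-squares fit of t ~ b0 + b1*length + b2*n_welds (an assumed,
    simplified model form; the study's actual predictors are not known here)."""
    A = np.column_stack([np.ones(len(t)), X])
    beta, *_ = np.linalg.lstsq(A, t, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 120
length_m = rng.uniform(0.1, 5.0, n)        # welded length per assembly (m)
n_welds = rng.integers(1, 40, n)           # number of separate welds
# Synthetic "observed" times: setup + per-weld + per-metre terms + noise
t = 5.0 + 1.5 * n_welds + 12.0 * length_m + rng.normal(0.0, 1.0, n)
beta = fit_welding_time(np.column_stack([length_m, n_welds]), t)
```

Because the fitted coefficients attach time (and hence cost) to weld length and weld count separately, such a model exposes cost drivers that a mass-only estimator hides, which is the study's central point.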
Abstract:
The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in the engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of the products. However, cost accounting has progressed as ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also drawn criticism: in some cases ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover other modern cost accounting methods, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With the created framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design. It also illuminates the contents of various ABC-related costing methods. However, the framework needs to be further tested.
On the basis of the three cases it can be said that ABC should be used cautiously when formulating cost information for engineering design. It is suitable when manufacturing can be considered simple, when the design engineers are not cost conscious, or at the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for embodiment or detailed design, if manufacturing can be considered complex, or if the design engineers are cost conscious, ABC always has to be evaluated critically.
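The basic ABC calculation the thesis builds on can be stated compactly: overhead is collected into activity cost pools and traced to products through cost drivers. The pools, driver volumes and driver consumptions below are invented for illustration:

```python
# Activity cost pools: activity -> (pool cost, total driver volume)
# (all figures hypothetical)
pools = {
    "machining":  (120_000.0, 4_000.0),   # driver: machine hours
    "setups":     (60_000.0,  300.0),     # driver: number of setups
    "inspection": (30_000.0,  1_000.0),   # driver: inspection hours
}

# Driver rate = pool cost / total driver volume
rates = {a: cost / volume for a, (cost, volume) in pools.items()}

def abc_cost(driver_use):
    """Overhead assigned to one product from its driver consumption."""
    return sum(rates[a] * q for a, q in driver_use.items())

# A design variant consuming 10 machine hours, 2 setups, 1 inspection hour
variant = abc_cost({"machining": 10, "setups": 2, "inspection": 1})
```

The example also shows why ABC can mislead cost-conscious designers when manufacturing is complex: the quality of the product cost rests entirely on how well the chosen drivers capture the design's real resource consumption.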