379 results for agency of technology


Relevance:

90.00%

Publisher:

Abstract:

The large and growing number of digital images is making manual image search laborious. Only a fraction of the images contain metadata that can be used to search for a particular type of image. Thus, the main research question of this thesis is whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that have no clear connection to the high-level semantics that could be used in image search. Various methods have been introduced in the literature to extract low-level image features, as well as approaches to connect these low-level features with high-level semantics. One of these approaches, studied in this thesis, is called Bag-of-Features. In the Bag-of-Features approach, the images are described using a visual codebook. The codebook is built by clustering the descriptions of image patches. An image is then described by matching the descriptions of its patches against the visual codebook and counting the number of matches for each code. This thesis studies unsupervised visual object categorisation using the Bag-of-Features approach. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that the performance of visual object categorisation can be improved by using the spatial information of local features to verify the matches. However, this process is computationally heavy, and thus the number of images subjected to spatial matching must be limited, for example by pre-selection with the Bag-of-Features method, as in this study. Different approaches for saliency detection are studied, and a new method based on the Hessian-Affine local feature detector is proposed. The new method achieves results comparable with the current state of the art. The visual object categorisation performance was improved by using foreground segmentation based on saliency information, especially when the background could be considered clutter.
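
As a concrete illustration of the pipeline described above, the following Python sketch builds a visual codebook with k-means and encodes an image as a normalised histogram of codeword matches. It is a minimal sketch, not the thesis' implementation: descriptor extraction (the thesis uses local feature detectors such as Hessian-Affine) is replaced with random stand-in data, and the codebook size is arbitrary.

```python
# Minimal Bag-of-Features sketch using scikit-learn; descriptor extraction
# (e.g. descriptors computed on Hessian-Affine regions, as in the thesis)
# is assumed to be done elsewhere and is mocked here with random data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for local feature descriptors pooled from all training images
# (rows = patches, columns = descriptor dimensions, e.g. 128 for SIFT).
all_descriptors = rng.normal(size=(5000, 128))

# 1. Build the visual codebook by clustering patch descriptors.
codebook_size = 200
kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0)
kmeans.fit(all_descriptors)

def bof_histogram(image_descriptors: np.ndarray) -> np.ndarray:
    """Describe one image: match each patch to its nearest code and
    count the matches per code (the Bag-of-Features vector)."""
    codes = kmeans.predict(image_descriptors)
    hist = np.bincount(codes, minlength=codebook_size).astype(float)
    return hist / hist.sum()          # normalise so image size cancels out

# One image's patches -> one fixed-length vector that unsupervised
# categorisation can then cluster into groups of similar images.
image_vec = bof_histogram(rng.normal(size=(350, 128)))
print(image_vec.shape)                # (200,)
```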

Relevance:

90.00%

Publisher:

Abstract:

The direct-driven permanent magnet synchronous generator is one of the most promising topologies for megawatt-range wind power applications. The rotational speed of the direct-driven generator is very low compared with traditional electrical machines, and the low rotational speed requires high torque to produce megawatt-range power. The special features of direct-driven generators caused by the low speed and high torque are discussed in this doctoral thesis. Low speed and high torque set high demands on torque quality: the cogging torque and the load torque ripple must be as low as possible to prevent mechanical failures. In this doctoral thesis, various methods to improve the torque quality are compared with each other. Rotor surface shaping, magnet skew, magnet shaping, and the asymmetrical placement of magnets and stator slots are studied not only in terms of torque quality; their effects on the electromagnetic performance and manufacturability of the machine are also discussed. The heat transfer of the direct-driven generator must be designed to handle the copper losses of the stator winding, which carries a high current density, and to keep the temperature of the magnets low enough. The cooling system of the direct-driven generator, applying doubly radial air cooling with numerous radial cooling ducts, was modeled with a lumped-parameter-based thermal network. The performance of the cooling system was discussed in both steady and transient states, and the effect of the number and width of the radial cooling ducts was explored. A large number of radial cooling ducts drastically increases the impact of stack end area effects, because the stator stack then consists of numerous substacks. The effects of the radial cooling ducts on the effective axial length of the machine were studied by analyzing the cross-section of the machine in the axial direction, and a method to compensate for the magnet end area leakage was considered. The effects of the cooling ducts and the stack end areas on the no-load voltages and inductances of the machine were explored by using numerical analysis tools based on the three-dimensional finite element method. The electrical efficiency of the permanent magnet machine with different control methods was estimated analytically over the whole speed and torque range, and the electrical efficiencies achieved with the most common control methods were compared with each other. The stator voltage increase caused by the armature reaction was analyzed, and the effect of inductance saturation as a function of load current was incorporated into the analytical efficiency calculation.
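
A lumped-parameter thermal network of the kind mentioned above reduces, in steady state, to a linear system of heat balances. The following toy sketch shows the basic mechanics; the node names, conductances, and loss values are entirely illustrative assumptions, not values from the thesis.

```python
# Toy lumped-parameter thermal network in the spirit of the thesis'
# cooling-system model: nodes are machine parts, conductances connect them,
# and losses are injected as heat sources. All numbers are illustrative.
import numpy as np

# Nodes: 0 = stator winding, 1 = stator core, 2 = magnets/rotor,
# plus the cooling air, held at a fixed temperature.
G_amb = np.array([5.0, 20.0, 8.0])        # W/K, node-to-coolant conductances
G_link = {(0, 1): 30.0, (1, 2): 2.0}      # W/K, node-to-node conductances
P = np.array([800.0, 300.0, 50.0])        # W, losses injected per node
T_amb = 40.0                              # deg C, coolant temperature

G = np.diag(G_amb.astype(float))          # start with couplings to coolant
for (i, j), g in G_link.items():          # add symmetric internal links
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g

# Steady state: G @ T_rise = P, where T_rise is temperature above coolant.
T = T_amb + np.linalg.solve(G, P)
print(dict(zip(["winding", "core", "magnets"], np.round(T, 1))))
```

Transient behaviour, as studied in the thesis, would add a heat capacity per node and integrate C dT/dt = P - G T in time instead of solving the algebraic system.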

Relevance:

90.00%

Publisher:

Abstract:

This thesis is devoted to the growth and investigation of Mn-doped InSb and II-IV-As2 semiconductors, including Cd1-xZnxGeAs2:Mn and ZnSiAs2:Mn bulk crystals and ZnSiAs2:Mn/Si heterostructures. Bulk crystals were grown by direct melting of the starting components followed by fast cooling. Mn-doped ZnSiAs2/Si heterostructures were grown by vacuum thermal deposition of ZnAs2 and Mn layers on Si substrates followed by annealing. The compositional and structural properties of the samples were investigated by different methods. The samples contain micro- and nanosized clusters of additional ferromagnetic Mn-X phases (X = Sb or As). The influence of these magnetic precipitates on the magnetic and electrical properties of the investigated materials was examined. At relatively high Mn concentrations, the main contribution to the magnetization of the samples comes from MnSb or MnAs clusters. These clusters are responsible for the high-temperature behavior of the magnetization and for the relatively high Curie temperatures: up to 350 K for Mn-doped II-IV-As2 and about 600 K for InMnSb. The low-field magnetic properties of the Mn-doped II-IV-As2 semiconductors and ZnSiAs2:Mn/Si heterostructures are connected to nanosized MnAs particles. The influence of nanosized MnSb clusters on the low-field magnetic properties of InMnSb was also observed. The contribution of the paramagnetic phase to the magnetization rises at low temperatures and in samples with low Mn concentration. The source of this contribution is not only isolated Mn ions but also small complexes, mainly dimers and trimers, formed by Mn ions substituting cation positions in the crystal lattice. The resistivity, magnetoresistance, and Hall resistivity of bulk Mn-doped II-IV-As2 and InSb crystals were analyzed. The interaction between delocalized holes and the 3d shells of the Mn ions, together with giant Zeeman splitting near the cluster interfaces, is responsible for the negative magnetoresistance. In addition to the high-temperature critical point, a low-temperature ferromagnetic transition was observed. The anomalous Hall effect was observed in Mn-doped samples and analyzed for InMnSb. It was found that the MnX clusters significantly influence the magnetic scattering of carriers.

Relevance:

90.00%

Publisher:

Abstract:

The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the human thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by processing computed tomography (CT) images, made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta models compatible with the numerical simulation codes. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity boundary conditions were applied at the inlet. The blood was assumed to be a homogeneous, incompressible, Newtonian fluid. The simulations with the idealized models were carried out with a finite element method based code, while the simulations with the realistic models were carried out with a finite volume method based code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure, and wall shear stress (WSS) during the fourth cardiac cycle were extensively analyzed. The aim of the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model, and the motive behind choosing three aorta models with distinct features was to understand the dependence of the flow dynamics on aorta anatomy. A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences as the geometry of the aorta and its branches varied. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic these conditions. The results of the simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the arch. Unlike in the idealized models, the distribution of flow was nonplanar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; it could not be properly observed in the model whose imaging covered only a shorter length of the aortic branches. Flow circulation was also observed at the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of the flow at the inlet, and the flow was weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS occurred at the junctions of the branches and the aortic arch, low WSS at the proximal part of each junction, and intermediate WSS at the distal part. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and at the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry regions and at the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aim of this part of the study was first to study the effect of stenosis on the flow and WSS distributions, then to understand the effect of the shape of the atherosclerotic plaque, and finally to investigate the effect of the severity of the lumen blockage. The results revealed that the distribution of WSS is significantly affected by a plaque causing a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within the thoracic aorta models have been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. From the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific image-based models can be used to identify the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
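
To make the pulsatile boundary-condition setup concrete, the sketch below shows one common way to impose such an inlet: a cardiac-cycle flow waveform assembled from a few Fourier modes, together with the fully developed Poiseuille estimate of wall shear stress, tau_w = 4*mu*Q/(pi*r^3), which can serve as a sanity check for time-averaged WSS. The coefficients, inlet radius, and viscosity below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of a pulsatile inlet flow waveform and a Poiseuille-based
# wall-shear-stress estimate. All numerical values are illustrative.
import numpy as np

T_CYCLE = 0.8            # s, one cardiac cycle
MU = 3.5e-3              # Pa*s, blood viscosity (Newtonian assumption)
R_INLET = 0.014          # m, assumed ascending-aorta radius

# Illustrative Fourier coefficients (mean flow + 3 harmonics), in m^3/s.
A0 = 1.0e-4
AMP = [6.0e-5, 2.5e-5, 1.0e-5]
PHASE = [0.0, 1.2, 2.1]

def inlet_flow(t: float) -> float:
    """Volumetric flow rate Q(t) over the cardiac cycle."""
    w = 2.0 * np.pi / T_CYCLE
    q = A0
    for k, (a, p) in enumerate(zip(AMP, PHASE), start=1):
        q += a * np.cos(k * w * t + p)
    return q

def poiseuille_wss(q: float, r: float = R_INLET, mu: float = MU) -> float:
    """Fully developed Newtonian estimate: tau_w = 4*mu*Q / (pi*r^3)."""
    return 4.0 * mu * q / (np.pi * r**3)

t = np.linspace(0.0, T_CYCLE, 200)
q = np.array([inlet_flow(ti) for ti in t])
print(f"mean WSS estimate: {poiseuille_wss(q.mean()):.2f} Pa")
```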

Relevance:

90.00%

Publisher:

Abstract:

The search for new renewable materials has intensified in recent years. Pulp and paper mill process streams contain a number of potential compounds that could be used in biofuel production and as raw materials in the chemical, food, and pharmaceutical industries. Prior to utilization, these compounds require separation from the other compounds present in the process stream. One feasible separation technique is membrane filtration, but fouling still limits its implementation in pulp and paper mill applications to some extent. To mitigate fouling and its effects, foulants and their fouling mechanisms need to be well understood. This thesis evaluates fouling in the filtration of pulp and paper mill process streams by means of filtrations of polysaccharide model substances and by developing a procedure to analyze and identify potential foulants, i.e., wood extractives and carbohydrates, on fouled membranes. The model solution filtration results demonstrate that each polysaccharide has its own fouling mechanism, which also depends on the membrane characteristics. Polysaccharides may foul the membranes by adsorption and/or by gel/cake layer formation on the membrane surface. Moreover, the polysaccharides interact, which makes the fouling evaluation of certain compound groups very challenging. Novel methods to identify wood extractive and polysaccharide foulants are developed in this thesis. The results show that it is possible to extract and identify wood extractives from membranes fouled in the filtration of pulp and paper mill streams. The most effective solvent was found to be acetone:water (9:1 v/v), because it extracted both lipophilic extractives and lignans in high amounts from the fouled membranes and was also non-destructive to the membrane materials. One hour of extraction was enough to recover wood extractives in high amounts from membrane samples with an area of 0.008 m2. If only qualitative knowledge of the wood extractives is needed, a simplified extraction procedure can be used. Adsorption was the main mechanism in extractives-induced fouling, and dissolved fatty and resin acids were mostly responsible for it; colloidal fouling was negligible. Both the process water and the membrane characteristics affected extractives-induced fouling. In general, the more hydrophilic regenerated cellulose (RC) membrane fouled less than the more hydrophobic polyethersulfone (PES) and polyamide (PA) membranes, independent of the process water used. Monosaccharide and uronic acid units could also be identified from the fouled synthetic polymeric membranes. It was impossible to analyze all monosaccharide units from the RC membrane because the analysis result contained degraded membrane material. One of the fouling mechanisms of carbohydrates was adsorption. Carbohydrates were not potential adsorptive foulants to the same extent as wood extractives, because their amount in the fouled membranes was found to be significantly lower than that of the wood extractives.
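
Although the thesis characterizes fouling experimentally, the resistance-in-series form of Darcy's law is a common way to express the mechanisms it discusses: adsorption and gel/cake layer formation add a fouling resistance on top of the clean-membrane resistance and depress the permeate flux. The sketch below is not from the thesis; all numbers are purely illustrative.

```python
# Resistance-in-series flux sketch: J = dP / (mu * (Rm + Rf)).
# Adsorbed extractives and polysaccharide gel/cake layers enter as
# additions to the fouling resistance Rf. Values are illustrative only.
MU = 1.0e-3                  # Pa*s, permeate (water) viscosity
DP = 2.0e5                   # Pa, transmembrane pressure
R_MEMBRANE = 5.0e12          # 1/m, clean-membrane resistance
R_FOULING = {"clean": 0.0,
             "adsorption": 2.0e12,       # assumed extractives layer
             "adsorption+cake": 9.0e12}  # assumed polysaccharide cake

def flux(r_fouling: float) -> float:
    """Permeate flux in m^3/(m^2*s) from Darcy's law."""
    return DP / (MU * (R_MEMBRANE + r_fouling))

for name, rf in R_FOULING.items():
    # 3.6e6 converts m/s to the customary L/(m^2*h).
    print(f"{name:16s}: {flux(rf) * 3.6e6:6.1f} L/(m^2*h)")
```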

Relevance:

90.00%

Publisher:

Abstract:

Delays in the justice system have been undermining the functioning and performance of court systems all over the world for decades. Despite the widespread concern about delays, the solutions have not kept up with the growth of the problem. The delay problem in court processes is a good example of the growing need and pressure in professional public organizations to start improving their business process performance. This study analyses the possibilities and challenges of process improvement in professional public organizations. The study is based on experiences gained in two longitudinal action research improvement projects conducted in two Finnish courts: the Helsinki Court of Appeal and the Insurance Court. The thesis has two objectives. The first is to study what kinds of factors in court system operations cause delays and unmanageable backlogs, and how delays can be reduced and prevented. The second, based on the lessons learned from the case projects, is to give new insights into the critical factors of process improvement conducted in professional public organizations. Four main areas of factors behind the delay problem are identified: 1) goal setting and performance measurement practices, 2) the process control system, 3) production and capacity planning procedures, and 4) process roles and responsibilities. The appropriate improvement solutions include tools to enhance project planning and scheduling and to monitor the agreed time frames for the different phases of the handling process and the pending inventory. The study introduces the critical factors identified in the different phases of process improvement work carried out in professional public organizations and the ways these critical factors can be incorporated into the different stages of the projects, and discusses the role of an external facilitator in assisting process improvement work and in enhancing ownership of the solutions and improvements. The study highlights the need to concentrate on the critical factors so as to get the employees to challenge their existing ways of conducting work, analyze their own processes, and create procedures for diffusing a process improvement culture, instead of merely concentrating on finding tools, techniques, and solutions appropriate for applications in the manufacturing sector.

Relevance:

90.00%

Publisher:

Abstract:

Percarboxylic acids are commonly used as disinfection and bleaching agents in the textile, paper, and fine chemical industries. All of these applications are based on the oxidative potential of these compounds. In spite of the high interest in these chemicals, they are unstable and explosive, which increases the risks of their synthesis and transportation. Therefore, safety criteria for the production process must be considered. Microreactors represent a technology that efficiently utilizes the safety advantages resulting from small scale. Therefore, microreactor technology was used in the synthesis of peracetic acid and performic acid. These percarboxylic acids were produced at different temperatures, residence times, and catalyst (sulfuric acid) concentrations. Both synthesis reactions were rather fast: with performic acid, equilibrium was reached in 4 min at 313 K, and with peracetic acid in 10 min at 343 K. In addition, the experimental results were used to study the kinetics of the formation of performic acid and peracetic acid. The advantages of the microreactors in this study were efficient temperature control, even in very exothermic reactions, and good mixing due to the short diffusion distances. Therefore, the reaction rates could be determined with high accuracy. Three different models were considered in order to estimate kinetic parameters such as reaction rate constants and activation energies. Of these three models, the laminar flow model with a radial velocity distribution gave the most precise parameters. However, sulfuric acid creates many drawbacks in this synthesis process. Therefore, a 'greener' route using a heterogeneous catalyst in the synthesis of performic acid in a microreactor was studied. The cation exchange resin Dowex 50 Wx8 showed very high activity and a long lifetime in this reaction. In the presence of this catalyst, equilibrium was reached in 120 seconds at 313 K, which indicates a rather fast reaction. In addition, the safety advantages of microreactors were investigated in this study using four different conventional methods. The production of peracetic acid was used as a test case, and the safety of a conventional batch process was compared with that of an on-site continuous microprocess. It was found that the conventional methods for the analysis of process safety may not be reliable and adequate for radically novel technology such as microreactors. This is understandable, because the conventional methods are partly based on experience, which is very limited in connection with totally novel technology. Therefore, a checklist-based method was developed to study the safety of intensified and novel processes at the early stage of process development. The checklist was formulated using the concept of layers of protection for a chemical process. The traditional process and three intensified processes for hydrogen peroxide synthesis were selected as test cases. With these real cases, it was shown that process intensification can have several positive and negative effects on safety, and the general claim that safety is always improved by process intensification was questioned.
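
The kind of kinetic parameter estimation described above can be illustrated with a much simpler setup than the thesis' laminar-flow reactor model: the sketch below fits a rate constant for the reversible performic acid reaction HCOOH + H2O2 <=> HCOOOH + H2O to synthetic batch data. The rate law, equilibrium constant, initial concentrations, and "measurements" are all assumptions for illustration, not the thesis' data or final model.

```python
# Hedged sketch of kinetic parameter estimation for a reversible reaction,
# using an ideal batch model instead of the thesis' laminar-flow model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

K_EQ = 0.9                             # assumed equilibrium constant
C0 = np.array([2.0, 2.0, 0.0, 0.0])    # mol/L: acid, H2O2, peracid, water

def batch_model(t_points, k1):
    """Peracid concentration vs time for forward rate constant k1."""
    def rhs(t, c):
        ca, cb, cp, cw = c
        r = k1 * (ca * cb - cp * cw / K_EQ)   # forward minus reverse
        return [-r, -r, r, r]
    sol = solve_ivp(rhs, (0.0, t_points[-1]), C0,
                    t_eval=t_points, rtol=1e-8)
    return sol.y[2]

# Synthetic "measurements": model output plus noise.
t_data = np.linspace(0.0, 240.0, 9)           # s
c_true = batch_model(t_data, 0.004)
c_meas = c_true + np.random.default_rng(1).normal(0.0, 0.01, t_data.size)

k_fit, cov = curve_fit(batch_model, t_data, c_meas, p0=[0.001])
print(f"fitted k1 = {k_fit[0]:.4f} L/(mol*s)")
```

Repeating such a fit at several temperatures and regressing ln(k) against 1/T would then give the activation energy, as in the Arrhenius analysis mentioned above.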

Relevance:

90.00%

Publisher:

Abstract:

The condensation rate has to be high in the safety pressure suppression pool systems of boiling water reactors (BWRs) in order for them to fulfill their safety function. The phenomena due to such a high direct contact condensation (DCC) rate turn out to be very challenging to analyse, either with experiments or with numerical simulations. In this thesis, suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was simulated first. The POOLEX experiment STB-31 was operated near the boundary between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very high grid resolution was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed, and the bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data on bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. A compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging: if the interface was initialized with a high enough water level inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake that invoked self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. A comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
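
The comparative FFT analysis mentioned above is straightforward to illustrate: the sketch below recovers a dominant chugging frequency from a pool-bottom pressure signal. The signal here is synthetic (an invented low-frequency "chugging" component, a higher structural mode, and noise); the real analysis used the POOLEX pressure measurements and the bubble-size time series.

```python
# FFT sketch: pick out the dominant oscillation frequency from a
# (synthetic) pool-bottom pressure signal. All parameters are invented.
import numpy as np

FS = 500.0                         # Hz, assumed sampling rate
t = np.arange(0.0, 20.0, 1.0 / FS)
rng = np.random.default_rng(0)
pressure = (2.0e3 * np.sin(2 * np.pi * 1.7 * t)      # "chugging" ~1.7 Hz
            + 4.0e2 * np.sin(2 * np.pi * 11.0 * t)   # "structural" mode
            + 1.0e3 * rng.normal(size=t.size))       # broadband noise

# Remove the mean so the zero-frequency bin does not dominate.
spec = np.abs(np.fft.rfft(pressure - pressure.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

peak = freqs[np.argmax(spec)]
print(f"dominant frequency: {peak:.2f} Hz")   # ~1.7 Hz expected
```

Running the same transform on the bubble-size series and comparing the peak locations is what lets the eigenmodes of chugging, bubbling, and pool structure oscillations be told apart.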

Relevance:

90.00%

Publisher:

Abstract:

Electrokinetic remediation coupled with Fenton oxidation, widely called the electrokinetic Fenton process, is a potential remediation technique for low-permeability soil. The applicability of the process has been proved for soil contaminated with a wide range of organic compounds, from phenol to the most recalcitrant ones such as PAHs and POPs. This thesis summarizes the major findings of an electrokinetic Fenton process study conducted for the remediation of low-permeability soil contaminated with hexachlorobenzene (HCB), a typical hydrophobic organic contaminant. A model low-permeability soil, kaolin, was artificially contaminated with HCB and subjected to electrokinetic Fenton treatments in a series of laboratory-scale batch experiments. The use of cyclodextrins as an enhancement agent to mobilize the sorbed contaminant through the system was investigated, and major process hindrances such as oxidant availability and treatment duration were also addressed. HCB degradation, along with other parameters such as soil pH, redox potential, and cumulative catholyte flow, was analyzed and monitored. The results of the experiments strengthen the existing knowledge of the electrokinetic Fenton process as a promising technology for the treatment of soil contaminated with hydrophobic organic compounds. It was demonstrated that HCB sorbed to kaolin can be degraded by using high concentrations of hydrogen peroxide in such processes. The overall system performance was observed to be influenced by the point and mode of oxidant delivery. Furthermore, the study contributes new knowledge on shortening the treatment duration by adopting electrode polarity reversal during the process.

Relevance:

90.00%

Publisher:

Abstract:

Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of the materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain the critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is practically impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables; it also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter, and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user-defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data, and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of its practical applications forms the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
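
As an illustration of what the LTDoE/LTRead pair does conceptually (this is not the actual software), the sketch below builds a two-level full factorial design for three hypothetical filtration variables and fits a linear regression model to a synthetic response.

```python
# Two-level full factorial design plus least-squares model fitting, in the
# spirit of LTDoE (design generation) and LTRead (response modelling).
# Factor names and the response are illustrative, not from the thesis.
import itertools
import numpy as np

# Coded factor levels (-1/+1) for, e.g., pressure, slurry concentration,
# and filtration time.
factors = ["pressure", "concentration", "time"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Synthetic response: cake moisture = 25 - 2*pressure - 1*time + noise.
rng = np.random.default_rng(2)
y = (25.0 - 2.0 * design[:, 0] - 1.0 * design[:, 2]
     + rng.normal(0.0, 0.2, len(design)))

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by least squares.
X = np.column_stack([np.ones(len(design)), design])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept"] + factors, coeffs):
    print(f"{name:13s}: {b:6.2f}")
```

The fitted coefficients show directly which variables move the response and by how much, which is exactly the kind of empirical model the abstract argues is needed alongside filtration theory.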

Relevance:

90.00%

Publisher:

Abstract:

One of the most crucial tasks for a company offering a software product is to decide what new features should be implemented in the product's forthcoming versions. Yet, existing studies show that this is also a task with which many companies struggle. The problem has been characterized as ambiguous and changing: there are better and worse solutions to it, but no optimal one, and the criteria determining the success of a solution keep changing due to continuously changing competition, technologies, and market needs. This thesis seeks to gain a deeper understanding of the challenges that companies have reportedly faced in determining the requirements for their forthcoming product versions. To this end, product management related activities are explored in seven companies. Following a grounded theory approach, the thesis conducts four iterations of data analysis, each going beyond the previous one. The thesis results in a theory proposal intended to 1) describe the essential characteristics of organizations' product management challenges, 2) explain the origins of the perceived challenges, and 3) suggest strategies to alleviate the perceived challenges. The thesis concludes that current product management approaches are becoming inadequate for dealing with challenges that involve multiple and conflicting interpretations, different value orientations, unclear goals, contradictions, and paradoxes. This inadequacy will continue to increase until current beliefs and assumptions about the product management challenges are questioned and a new paradigm for dealing with the challenges is adopted.

Relevance:

90.00%

Publisher:

Abstract:

This self-evaluation report describes the Bachelor's and Master's (diplomi-insinööri) degree programmes in mechanical engineering at Lappeenranta University of Technology, according to the 2011-2012 curriculum. The self-evaluation report has been produced following the reporting template of the German accreditation organization ASIIN.

Relevance:

90.00%

Publisher:

Abstract:

This thesis focuses on integration in project business, i.e., how project-based companies organize their product and process structures when they deliver industrial solutions to their customers. The customers that invest in these solutions run their businesses in different geographical, political, and economic environments, which should be acknowledged by the supplier when providing solutions comprising larger and more complex scopes than previously supplied to these customers. This means that the suppliers are increasing their supply range by taking over some of the activities in the value chain that have traditionally been handled by the customer. In order to provide functioning solutions involving more engineering hours, more technical equipment, and a wider project network, a change in mindset is needed to carry out these new approaches and to take the responsibility they require. For the supplier, it is important to be able to integrate technical products, systems, and services, but the supplier also needs the capabilities to integrate the cross-functional organizations and departments in the project network, the knowledge and information between and within these organizations and departments, and inputs from the customer into the product and process structures during the lifecycle of the project under development. Hence, the main objective of this thesis is to explore the integration challenges that industrial projects meet and, based on that, to suggest a concept for managing integration in project business by making use of integration mechanisms. Integration is considered the essential process for accomplishing an industrial project, whereas the accomplishment of the industrial project is considered the result of the integration. The thesis consists of an extended summary and four papers based on three studies in which integration mechanisms for value creation in industrial project networks and the management of integration in project business have been explored. The research is based on an inductive approach in which, in particular, the design, commissioning, and operations functions of industrial projects have been studied, addressing entire project life cycles. The studies have been conducted in the shipbuilding and power generation industries, where the scopes of supply consist of stand-alone equipment, equipment and engineering, and turnkey solutions. These industrial solutions involve demanding efforts in engineering and organization. Addressing the calls for more studies on the evolving value chains of integrated solutions, mechanisms for inter- and intra-organizational integration and the subsequent value creation in project networks have been explored. The research results in thirteen integration mechanisms, and a typology of integration is proposed. Managing integration consists of integrating the project network (the supplier and the sub-suppliers) and the customer (the customer's business purpose, operations environment, and the end-user) into the project by making use of the integration mechanisms. The findings bring new insight into research on industrial project business by proposing the integration of technology and engineering related elements with elements related to customer-oriented business performance in contemporary project environments. The thirteen mechanisms for combining products with the processes needed to deliver projects are described and categorized according to the impact they have on the management of knowledge and information. These mechanisms relate directly to the performance of the supplier, and consequently to the functioning of the solution that the project provides. This thesis offers ways to promote the integration of knowledge and information during the lifecycle of industrial projects, enhancing the development towards innovative solutions in project business.
