861 results for Process Analytical Technology (PAT)
Abstract:
In this work, the frame structure of a mobile work machine was designed, commissioned by Astex Oy of Lappeenranta. The work was carried out in the Laboratory of Steel Structures at Lappeenranta University of Technology. The structure to be designed belonged to an articulated forklift tractor weighing approximately 5000 kg. The starting point for the design was the set of boundary conditions and constraints on the geometry and performance of the structure specified by the client. A prototype structure of a similar type, developed by the client, was also utilized in the design of the new structure. The design process consisted of four stages. In-service strain gauge measurements were performed on the prototype structure, which served as the starting point for the design, in order to determine the load history of the structure. Based on the measurement results, the loads for the new structure were determined. Using these loads, a new structure meeting the client's requirements was conceived, designed, and modelled. Strength analyses of the new structure were carried out using FE analysis. In the design of the new structure, attention was paid to good manufacturability, and the structural solutions were designed to be as close to optimal as possible for the client. The design and modelling work was done with Solidworks 2014 software. The strength assessments of the structure included analytical dimensioning and calculation of structural details. FE calculations were used to determine the ultimate and fatigue strength of the structure. The calculations included global analyses of the whole structure as well as more detailed analyses of various critical structural details. The main emphasis of the FE calculations was on the fatigue analyses of the structure, which were carried out using the hot-spot and crack growth methods. Femap, NxNastran, and Abaqus software were used to perform the FE analyses.
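For context, the structural hot-spot stress used in such fatigue analyses is commonly obtained by linear surface extrapolation of the FE stresses to the weld toe, e.g. following the IIW recommendation (a standard formula quoted here for reference, not taken from the thesis itself):

\[
\sigma_{\mathrm{hs}} = 1.67\,\sigma_{0.4t} - 0.67\,\sigma_{1.0t},
\]

where \(\sigma_{0.4t}\) and \(\sigma_{1.0t}\) are the surface stresses evaluated at distances of 0.4t and 1.0t from the weld toe, and t is the plate thickness.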
Abstract:
Speed, uncertainty and complexity are increasing in the business world all the time. When knowledge and skills quickly become irrelevant, new challenges are set for information technology (IT) education. Meta-learning skills – learning how to learn rapidly – and innovation skills have become more essential than single technologies or other specific issues. The drastic changes in the information and communications technology (ICT) sector have created a need to reconsider how IT Bachelor education in Universities of Applied Sciences should be organized and delivered to cope with the change. The objective of the study was to evaluate how a new approach to IT Bachelor education, the ICT entrepreneurship study path (ICT-ESP), fits IT Bachelor education in a Finnish University of Applied Sciences. This kind of educational arrangement has not been employed elsewhere in the context of IT Bachelor education. The study presents the results of a four-year period during which IT Bachelor education was renewed in a Finnish University of Applied Sciences. The learning environment was organized into an ICT-ESP based on Nonaka's knowledge theory and Kolb's experiential learning. The IT students who studied in the ICT-ESP established a cooperative and learned ICT by running their cooperative at the University of Applied Sciences. The students (called team entrepreneurs) studied by reading theory in books and other sources of explicit information, doing projects for their customers, and reflecting in training sessions on what was learnt by doing and by studying the literature. Action research was used as the research strategy in this study. Empirical data was collected via theme-based interviews, direct observation, and participant observation. The grounded theory method was utilized in the data analysis, and theoretical sampling was used to guide the data collection. The context of the University of Applied Sciences provided a good basis for fostering team entrepreneurship. However, the results showed that the employment of the ICT-ESP did not fit the IT Bachelor education well enough. The ICT-ESP was cognitively too demanding for the team entrepreneurs because they had two different sets of rules to follow in their studies. The conventional courses consumed a great deal of energy that should have been spent on professional development in the ICT-ESP. The range of competencies needed in the ICT-ESP for professional development was greater than that needed in any other way of studying. The team entrepreneurs needed to develop skills in ICT, leadership and self-leadership, team development, and entrepreneurship. The entrepreneurship skills included marketing and sales, brand development, productization, and business administration. Considering the three years the team entrepreneurs spent in the ICT-ESP, the challenges were remarkable. Changes to the organization of IT Bachelor education are also suggested in the study. First, it should be acknowledged that the ICT-ESP produces IT Bachelors with a different set of competencies compared with the conventional way of educating IT Bachelors. Second, the number of courses on general topics in mathematics, physics, and languages for team entrepreneurs studying in the ICT-ESP should be reconsidered, and the conventional course-based teaching of these topics should be reorganized to support the team coaching process of the team entrepreneurs with their practice-oriented projects.
Third, upcoming team entrepreneurs should be equipped with relevant information about the ICT-ESP and what studying as a team entrepreneur would require in practice. Finally, upcoming team entrepreneurs should be carefully selected before they start in the ICT-ESP, to make it possible to screen out solo players and those with too romantic a view of being a team entrepreneur. The results gained in the study answered the original research questions, and the objectives of the study were met. Even though the IT degree programme was terminated during the research process, the amount of qualitative data gathered made it possible to justify the interpretations made.
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space are generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods for an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed from flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported. For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield infinitely many stiffness performances. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for a particular prescribed trajectory of its end-effector. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine due to the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed on the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated for a parallel manipulator in a software environment.
An experimental study of the feedforward control is also included in the dissertation. The difficulty of modelling the actual system due to unknown components in its dynamics is noted. As a solution, a back propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train the BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated, respectively, by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation of a parallel manipulator, which represent three different application aspects of the method.
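As an illustration of the stiffness-constrained trajectory planning described above, here is a minimal sketch of a DE/rand/1/bin search over the redundant joint positions at a single end-effector pose. The objective function is a toy surrogate, and the bounds, population size, and DE settings are assumptions for illustration only; in the dissertation the objective would be evaluated from the system-wide VJM-based stiffness model.

```python
import numpy as np

def stiffness_index(q_red):
    """Toy surrogate for the stiffness objective. In the dissertation this
    would evaluate the system-wide VJM-based stiffness model of the robot
    at the given redundant joint positions for one end-effector pose."""
    return -np.sum((q_red - 0.3) ** 2)  # placeholder: maximum at q_red = 0.3

def de_rand_1_bin(obj, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin loop that maximizes obj within box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutant gene.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep the stiffer configuration.
            f_trial = obj(trial)
            if f_trial > fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = np.argmax(fit)
    return pop[best], fit[best]

# Two redundant joints, each limited to [-1, 1] rad (assumed bounds).
q_best, f_best = de_rand_1_bin(stiffness_index, [(-1.0, 1.0), (-1.0, 1.0)])
print("best redundant joint positions:", q_best)
```

Repeating such a search for each sampled pose along the prescribed end-effector path yields the joint trajectories; the abstract notes these come out acceptably smooth because the stiffness varies smoothly near its optimum.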
Abstract:
Nowadays, global business trends force the adoption of innovative ICTs into supply chain management (SCM). In particular, RFID technology is in high demand among SCM professionals due to its business advantages, such as improving the accuracy and velocity of SCM processes, which lead to lower operational costs. Nevertheless, the question of the RFID technology's impact on the efficiency of warehouse processes in SCM remains open. The goal of the present study is to test experimentally whether the order picking velocity in a warehouse of a large logistics company can be improved with the use of RFID technology. To achieve this goal, the following objectives were set: 1) defining the scope of RFID applications in SCM; 2) justifying the impact of RFID on SCM processes; 3) defining the place of the warehouse order picking process in SCM; 4) identifying and systematizing existing methods of improving order picking velocity; 5) choosing the study object and gathering empirical data on the number of orders and the number of hours spent per order line daily during five months; 6) processing and analysing the empirical data; 7) drawing conclusions about the impact of RFID on the speed of the order picking process. As a result of the research, it was found that the speed of the order picking process did not change over time after the RFID adoption. It was concluded that, in order to achieve a positive effect on the speed of the order picking process with the use of RFID technology, it is necessary to simultaneously implement changes in logistics and organizational management in 3PL logistics companies. Practical recommendations have been forwarded to the management of the company for further investigation and action.
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE under the challenges of the VVER-440 reactor type. The code's capability to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major challenge here is not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different kinds are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. The thesis also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This approach essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
Effective control and limiting of carbon dioxide (CO₂) emissions in energy production are major challenges of science today. Current research activities include the development of new low-cost carbon capture technologies, and among the proposed concepts, chemical looping combustion (CLC) and chemical looping with oxygen uncoupling (CLOU) have attracted significant attention, as they allow intrinsic separation of pure CO₂ from a hydrocarbon fuel combustion process with a comparatively small energy penalty. Both CLC and CLOU utilize the well-established fluidized bed technology, but several technical challenges need to be overcome in order to commercialize the processes. Development of proper modelling and simulation tools is therefore essential for the design, optimization, and scale-up of chemical looping-based combustion systems. The main objective of this work was to analyze the technological feasibility of CLC and CLOU processes at different scales using a computational modelling approach. A one-dimensional fluidized bed model frame was constructed and applied to simulations of CLC and CLOU systems consisting of interconnected fluidized bed reactors. The model is based on the conservation of mass and energy, and semi-empirical correlations are used to describe the hydrodynamics, chemical reactions, and heat transfer in the reactors. Another objective was to evaluate the viability of chemical looping-based energy production, and a flow sheet model representing a CLC-integrated steam power plant was developed. The 1D model frame was successfully validated against the operation of a 150 kWth laboratory-sized CLC unit fed by methane. By following certain scale-up criteria, a conceptual design for a CLC reactor system at a pre-commercial scale of 100 MWth was created, after which the validated model was used to predict the performance of the system. As a result, further understanding of the parameters affecting the operation of a large-scale CLC process was acquired, which will be useful for practical design work in the future. The integration of the reactor system and the steam turbine cycle for power production was studied, resulting in a suggested plant layout including a CLC boiler system, a simple heat recovery setup, and an integrated steam cycle with a three-pressure-level steam turbine. Possible operational regions of a CLOU reactor system fed by bituminous coal were determined via mass, energy, and exergy balance analysis. Finally, the 1D fluidized bed model was modified to suit CLOU, and the performance of a hypothetical 500 MWth CLOU fuel reactor was evaluated by extensive case simulations.
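For illustration, with methane fuel (as in the 150 kWth validation unit) and a nickel-based oxygen carrier, which is a common choice although the abstract does not name the carrier used, the two interconnected reactors exchange oxygen as follows, so that condensing the water leaves an essentially pure CO₂ stream at the fuel reactor outlet:

\[
\text{fuel reactor:}\quad 4\,\mathrm{NiO} + \mathrm{CH_4} \rightarrow 4\,\mathrm{Ni} + \mathrm{CO_2} + 2\,\mathrm{H_2O}
\]
\[
\text{air reactor:}\quad 2\,\mathrm{Ni} + \mathrm{O_2} \rightarrow 2\,\mathrm{NiO}
\]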
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows prediction of the feasible range of operating parameters that lead to the desired product purities. It can be applied to the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
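For reference, the competitive Langmuir isotherm underlying the analytical design equations has, for a binary mixture, the standard textbook form

\[
q_i = \frac{q_{s,i}\, b_i c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2,
\]

where \(q_i\) and \(c_i\) are the adsorbed-phase and fluid-phase concentrations of component i, \(q_{s,i}\) is the saturation capacity, and \(b_i\) is the adsorption equilibrium constant.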
Abstract:
Modern cloud services offer large companies the opportunity to make computational data processing more efficient. However, the adoption of cloud services brings with it several issues, for example concerning information security, which is why the adoption must be carefully planned. This study presents a literature-based, step-by-step plan for the adoption of cloud services in an energy business environment. Internal interviews at the target company and a review of current cloud solutions in the energy industry form an overall picture of the challenges and opportunities of the adoption. The main objective of the study is to present solutions to typical problems encountered in cloud service adoption by means of an adoption model. The adoption model constructed in the study was tested with an example case and found to work. Because of the information security questions raised by external services, the first parts of the adoption, such as the definition of the end product and careful planning, form the core of the whole adoption process. In addition, the adoption of cloud services requires new technical and administrative skills from the current operating environment. The results of the study demonstrate the versatile benefits of cloud services, especially when the need for computing power varies. The cost comparison created alongside the adoption model supports the benefits highlighted in the literature review and gives the target company grounds for taking the study forward.
Abstract:
This thesis presents an analysis of the recently enacted Russian renewable energy policy based on a capacity mechanism. Considering its novelty and poor coverage in the academic literature, the aim of the thesis is to analyze the capacity mechanism's influence on investors' decision-making process. The research introduces a number of approaches to investment analysis. Firstly, a classical financial model was built with Microsoft Excel® and crisp efficiency indicators such as net present value were determined. Secondly, sensitivity analysis was performed to understand the influence of different factors on project profitability. Thirdly, the Datar-Mathews method was applied, which, by means of Monte Carlo simulation realized with Matlab Simulink®, disclosed all possible outcomes of the investment project and enabled real-option thinking. Fourthly, the previous analysis was replicated with the fuzzy pay-off method in Microsoft Excel®. Finally, the decision-making process under the capacity mechanism was illustrated with a decision tree. The capacity remuneration, paid over 15 years, is calculated individually for each RE project as a variable annuity that guarantees a particular return on investment, adjusted for changes in national interest rates. The analysis results indicate that the capacity mechanism creates a real option to invest in a renewable energy project by ensuring project profitability regardless of market conditions, provided that project-internal factors are managed properly. The latter includes keeping capital expenditures within the set limits, achieving production performance higher than 75% of the target indicators, and fulfilling the localization requirement, i.e. producing equipment and services within the country. The occurrence of the real option shapes the decision-making process in the following way. Initially, the investor should define an appropriate location for the planned power plant where high production performance can be achieved, and lock in this location in case of competition. Then the investor should wait until the capital cost limit and the localization requirement can be met, after which the decision to invest can be made without any risk to project profitability. With respect to technology type, investment in a solar PV power plant is more attractive than in wind or small hydro power, since it has a higher weighted net present value and a lower standard deviation. However, this does not change the decision-making strategy, which remains the same for each technology type. The fuzzy pay-off method proved its ability to disclose the same patterns of information as Monte Carlo simulation. Being effective in investment analysis under uncertainty and easy to use, it can be recommended as a sufficient analytical tool for investors and researchers. Apart from the described results, this thesis contributes to the academic literature with a detailed description of the capacity price calculation for renewable energy, which was not available in English before. With respect to methodological novelty, advanced approaches such as the Datar-Mathews method and the fuzzy pay-off method are applied on top of an investment profitability model that also incorporates the capacity remuneration calculation. A comparison of the effects of two different RE support schemes, the Russian capacity mechanism and the feed-in premium, contributes to comparative policy studies and provides useful inferences for researchers and policymakers.
The limitations of this research are the simplification of assumptions to the country-average level, which restricts the ability to analyze renewable energy investment region by region, and the restriction of the studied policy to the wholesale power market, which leaves retail markets and remote areas, and thus medium and small investments in renewable energy, outside the research focus. Removing these limitations would allow a full picture of the Russian renewable energy investment profile to be created.
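To make the Datar-Mathews step above concrete, the following is a minimal sketch of the idea in Python rather than Matlab Simulink®. All figures (outlay, cash-flow distribution, discount rate, horizon) are invented placeholders, not the thesis's data; the point is that the real option value is the mean of the simulated NPV distribution with losses floored at zero, since the investor simply declines unprofitable outcomes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs (placeholders, not the thesis's figures).
n_sims = 100_000   # number of Monte Carlo paths
capex = 100.0      # investment outlay, MEUR
years = 15         # capacity remuneration period, as in the policy
rate = 0.10        # discount rate

# Simulate uncertain annual cash flows around a base case.
cash_flows = rng.normal(loc=14.0, scale=4.0, size=(n_sims, years))

# Present value of each path minus the outlay gives an NPV distribution.
t = np.arange(1, years + 1)
npv = (cash_flows / (1.0 + rate) ** t).sum(axis=1) - capex

# Datar-Mathews real option value: mean of the payoff floored at zero.
rov = np.maximum(npv, 0.0).mean()

print(f"mean NPV:          {npv.mean():7.2f} MEUR")
print(f"real option value: {rov:7.2f} MEUR")
print(f"P(NPV > 0):        {(npv > 0).mean():7.2%}")
```

The fuzzy pay-off method arrives at a comparable figure from a triangular pay-off distribution instead of simulated paths, which is consistent with the thesis's finding that the two approaches disclose the same patterns of information.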
Abstract:
The cDNA microarray is an innovative technology that facilitates the analysis of the expression of thousands of genes simultaneously. The utilization of this rapidly evolving methodology requires a combination of expertise from the biological, mathematical and statistical sciences. In this review, we attempt to provide an overview of the principles of cDNA microarray technology, the practical concerns of the analytical processing of the data obtained, the correlation of this methodology with other data analysis methods such as immunohistochemistry in tissue microarrays, and the application of cDNA microarrays in distinct areas of the basic and clinical sciences.
Abstract:
Rapid prototyping techniques have developed quickly in recent years. This already offers almost unlimited possibilities to produce various products by 3D printing. The use of 3D printing has become common especially in the fields of industry and technology. This work investigated how 3D printing can be utilized in the product development of diagnostic rapid tests. The immunological lateral flow test is an antibody-based, fast and easy-to-use measurement method for detecting small amounts of analytes. In this work, a lateral flow test cassette was developed, and 3D printing technology was used for its design and structural modelling. The functionality of the cassette in a lateral flow test was verified by developing a rapid test suited to the cassette, whose performance was analyzed both visually and with an Actim 1ngeni reader. The work began by studying different rapid prototyping techniques, of which SLA was chosen for printing the cassette on the basis of its printing accuracy and the surface quality of the product. The design of the cassette began by defining the desired properties, which included protecting the lateral flow test and ensuring the flow of the sample through the test. Information obtained from a previously developed rapid test was partly utilized in the development of the lateral flow test. The manufacturing process of the lateral flow cassette test consisted of seven process steps: conjugation of the antibody/control reagent, treatment of the sample pad, treatment of the conjugate pad, dispensing onto the conjugate pad, dispensing onto the membrane, lamination and cutting of the strips, and assembly of the cassette test. The functionality of the developed lateral flow cassette test was verified by studying the reaction kinetics and analytical sensitivity of the test both visually and with the reader. Based on the results, 3D printing is a very useful method in the product development of rapid tests when designing test cassette structures, sample dispensing devices, and combinations thereof.
Abstract:
Atomic Layer Deposition (ALD) is the technology of choice where very thin and high-quality films are required. Its advantage is its ability to deposit dense and pinhole-free coatings in a controllable manner. It has already shown promising results in a range of applications, e.g. diffusion barrier coatings for OLED displays and surface passivation layers for solar panels. Spatial Atomic Layer Deposition (SALD) is a concept that allows a dramatic increase in ALD throughput. During the SALD process, the substrate moves between spatially separated zones filled with the respective precursor gases and reagents in such a manner that the exposure sequence replicates the conventional ALD cycle. The present work describes the development of a high-throughput ALD process. Preliminary process studies were made using an SALD reactor designed especially for this purpose. The basic properties of the ALD process were demonstrated using the well-studied Al₂O₃ process from trimethylaluminium (TMA) and H₂O. It was shown that the SALD reactor is able to deposit uniform films in true ALD mode. The ALD nature of the process was proven by demonstrating self-limiting behaviour and linear film growth. The process behaviour and the properties of the synthesized films were in good agreement with previous ALD studies. Issues related to anomalous deposition at low temperatures were addressed as well. The quality of the coatings was demonstrated by applying 20 nm of Al₂O₃ onto a polymer substrate and measuring its moisture barrier properties. The test results confirmed the superior properties of the coatings and their suitability for flexible electronics encapsulation. These successful results led to the development of a pilot-scale roll-to-roll coating system. It was demonstrated that the system is able to deposit superior-quality films with a water vapour transmission rate of 5×10⁻⁶ g/(m²·day) at a web speed of 0.25 m/min. That is equivalent to a production rate of 180 m²/day and can potentially be increased by using wider webs. State-of-the-art film quality, high production rates and repeatable results make SALD the technology of choice for manufacturing ultra-high barrier coatings for flexible electronics.
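As a rough consistency check of the quoted throughput, and assuming a web width of 0.5 m (a figure the abstract does not state):

\[
0.25\ \tfrac{\mathrm{m}}{\mathrm{min}} \times 1440\ \tfrac{\mathrm{min}}{\mathrm{day}} \times 0.5\ \mathrm{m} = 180\ \tfrac{\mathrm{m^2}}{\mathrm{day}}.
\]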
Abstract:
In this work, the separation of multicomponent mixtures in counter-current columns with supercritical carbon dioxide has been investigated using a process design methodology. First the separation task is defined, then phase equilibria experiments are carried out, and the data obtained are correlated with thermodynamic models or empirical functions. Mutual solubilities, Ki values, and separation factors αij are determined. Based on these data, possible operating conditions for further extraction experiments can be determined. Separation analyses using graphical methods are performed to optimize the process parameters. Hydrodynamic experiments are carried out to determine the flow capacity diagram. Laboratory-scale extraction experiments are planned and carried out in order to determine HETP values, to validate the simulation results, and to provide new materials for the additional phase equilibria experiments needed to determine the dependence of the separation factors on concentration. Numerical simulation of the separation process and auxiliary systems is carried out to optimize the number of stages, the solvent-to-feed ratio, product purity, yield, and energy consumption. Scale-up and cost analysis close the process design. The separation of palmitic acid and (oleic + linoleic) acids from PFAD (palm fatty acid distillate) was used as a case study.
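Here the Ki values and separation factors follow the standard definitions, with \(y_i\) and \(x_i\) the solute fractions in the CO₂-rich vapour phase and in the liquid phase, respectively:

\[
K_i = \frac{y_i}{x_i}, \qquad \alpha_{ij} = \frac{K_i}{K_j}.
\]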
Abstract:
Publications on sales and operations research have increased significantly in the last decades. The concept of sales and operations planning (S&OP) has gained increased recognition and has been put forward as a key planning area within Supply Chain Management (SCM). The development of S&OP stems from the need to determine future actions for both sales and operations, since off-shoring, outsourcing, complex supply chains and extended lead times pose challenges for responding to changes in the marketplace as they occur. The order intake of the case company has grown rapidly in recent years. Along with this growth, new challenges concerning data management and information flow have arisen due to the increasing number of customer orders. To manage these challenges, the case company has implemented an S&OP process; however, the process is still at an early stage and does not yet handle the increased volume of customer orders adequately. The objective of this thesis is to explore the S&OP process content of the case company extensively and to give further recommendations. The objectives are categorized into six groups to clarify the purpose of the thesis. The qualitative research methods used are active participant observation, qualitative interviews, an enquiry, education, and a workshop. Notably, demand planning was felt to be cumbersome; it is typically the biggest challenge in an S&OP process. The more proactive the sales forecasting, the longer the time horizon of operational planning can be. An S&OP process is 60 percent change management, 30 percent process development and 10 percent technology. Change management and continuous improvement can sometimes be arduous and are easily treated as secondary. It is important that different people are involved in improving the process and that the process is constantly evaluated. Process governance also plays a central role and has to be managed consciously. In general, the S&OP process was seen as important and all the stakeholders were committed to it. Particular sections were experienced as more important than others, depending on the stakeholders' points of view. The recommendations for the objective groups are evaluated by achievable benefit and resource requirements. The urgent and easily implemented improvement recommendations should be executed first. The next steps are to develop a more coherent process structure and to refine cost awareness. After that, demand planning, supply planning, and reporting should be developed more profoundly. Finally, an information technology system should be implemented to support the process phases.
Abstract:
This work was conducted to evaluate the effect of incorporating bovine globin extracted by the acidified acetone method (GT) and by the carboxymethylcellulose method (GCMC), as well as sodium caseinate (CA), on the chemical composition and the microbiological and sensory qualities of ham pâté. The stability of the product during 45 days of refrigerated storage was also evaluated by determining the pH and the degree of lipid oxidation. According to the results, the addition of these proteins increased the protein content of the analyzed samples. Furthermore, only GT caused a drop in pH and an increase in thiobarbituric acid reactive substances (TBARS). Regarding the microbiological quality of the products, no changes were observed after the incorporation of the proteins, and sensory analysis identified significant differences between the two pâté formulations evaluated (PCA and PGCMC).