879 results for Design of Experiments (DOE)
Abstract:
From the customer satisfaction point of view, the sound quality of a product has become one of the important factors these days. The primary objective of this research is to determine the factors that affect the acceptability of impulse noise. Though the analysis is based on a sample impulse sound file from a commercial printer, the results can be applied to other similar impulsive noises. It is assumed that impulsive noise can be tuned to meet the acceptability criteria; thus it is necessary to find the most significant factors that can be controlled physically. This analysis is based on a single impulse. A sample impulsive sound file is modified for different amplitudes, background noise levels, attack times, release times and spectral content. A two-level factorial design of experiments (DOE) is applied to study the significant effects and interactions. For each impulse file modified as per the DOE, the magnitude of perceived annoyance is calculated from an objective metric developed recently at Michigan Technological University. This metric is based on psychoacoustic criteria such as loudness, sharpness, roughness and loudness-based impulsiveness. The software 'Artemis V11.2', developed by HEAD Acoustics, is used to calculate these psychoacoustic terms. From the two-level factorial analysis, a new objective model of perceived annoyance is developed in terms of the physical parameters mentioned above: amplitude, background noise, impulse attack time, impulse release time and spectral content. The effects of the significant individual factors, as well as their two-way interactions, are also studied. The results show that all five factors significantly affect the annoyance level of an impulsive sound; the annoyance level can therefore be brought below the acceptability criteria by optimizing these factor levels. An additional analysis is also performed to study the effect of these five significant parameters on the individual psychoacoustic metrics.
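As a rough illustration of the two-level factorial approach described above, the following Python sketch builds a full 2^5 design over the five factors named in the abstract and estimates their main effects. The factor names come from the abstract; the annoyance responses are placeholder values only, since the real responses come from the psychoacoustic annoyance metric computed in the Artemis software.

```python
import itertools
import numpy as np

# Five physical factors from the abstract, each at two coded levels (-1, +1).
factors = ["amplitude", "background_noise", "attack_time",
           "release_time", "spectral_content"]

# Full 2^5 factorial design: 32 runs, one row per combination of levels.
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical annoyance responses, one per run (placeholders only; the real
# values would come from the psychoacoustic annoyance metric for each file).
rng = np.random.default_rng(0)
true_effects = np.array([1.2, 0.8, 0.5, 0.4, 0.9])
annoyance = 5.0 + design @ true_effects + rng.normal(0.0, 0.2, len(design))

# Main effect of each factor: mean response at the high level minus
# mean response at the low level.
for name, column in zip(factors, design.T):
    effect = annoyance[column == 1].mean() - annoyance[column == -1].mean()
    print(f"{name:>17s}: main effect = {effect:+.2f}")
```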
Abstract:
In recent years there has been a tremendous amount of research in the area of nanotechnology. History tells us that the commercialization of technologies will always be accompanied by both positive and negative effects for society and the environment. Products containing nanomaterials are already available in the market, and yet there is still not much information regarding the potential negative effects that these products may cause. The work presented in this dissertation describes a holistic approach to addressing different dimensions of nanotechnology sustainability. Life cycle analysis (LCA) was used to study the potential use of polyethylene filled with nanomaterials to manufacture automobile body panels. Results showed that the nanocomposite does not provide an environmental benefit over traditional steel panels. A new methodology based on design of experiments (DOE) techniques, coupled with LCA, was implemented to investigate the impact of inventory uncertainties. Results showed that data variability does not have a significant effect on the prediction of the environmental impacts, whereas the material profiles for input materials did have a highly significant effect on the overall impact. Energy consumption and material characterization were identified as two main areas where additional research is needed in order to predict the overall impact of nanomaterials more effectively. A study was undertaken to gain insight into the behavior of small particles in contact with a surface exposed to air flow, to determine particle lift-off from the surface. A mapping strategy was implemented that allows the identification of conditions for particle lift-off based on particle size and separation distance from the wall. The main results showed that particles smaller than 0.1 mm will not become airborne under shear flow unless the separation distance is greater than 15 nm. These results may be used to minimize exposure to airborne materials. Societal implications that may arise in the workplace were also researched. This research task explored different topics, including health, ethics, and worker perception, with the aim of identifying the base knowledge available in the literature. Recommendations are given for different scenarios to describe how workers and employers could minimize the unwanted effects of nanotechnology production.
Abstract:
Methodology and results of full-scale maneuvering trials for the Riverine Support Patrol Vessel "RSPV", built by COTECMAR for the Colombian Navy, are presented. This ship is equipped with a "Pump-Jet" propulsion system, and the hull is a wide hull with a high Beam-Draft ratio (B/T = 9.5). Tests were based on the results of turning-diameter simulations obtained from the TRIBON M3© design software, applying Design of Experiments "DOE" techniques to rationalize the number of runs under different conditions of water depth, ship speed, and rudder angle. Results validate the excellent performance of this class of ship and show that the turning diameter and other maneuvering characteristics improve with decreasing water depth.
Abstract:
Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
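A minimal sketch of the kind of DoE-derived predictive model described above, assuming a hypothetical 2^3 factorial screen with centre points over temperature, pH and dissolved oxygen. The yield numbers are placeholders, not data from the study, and the model form (linear terms plus two-way interactions) is one common choice rather than necessarily the one used by the authors.

```python
import numpy as np

# Hypothetical mini-bioreactor screen: a 2^3 factorial in temperature (°C),
# pH and dissolved oxygen (%), plus three centre points. Yields are
# placeholder numbers for illustration only.
T  = np.array([22, 22, 30, 30, 22, 22, 30, 30, 26, 26, 26])
pH = np.array([5.0, 7.0, 5.0, 7.0, 5.0, 7.0, 5.0, 7.0, 6.0, 6.0, 6.0])
DO = np.array([10, 10, 10, 10, 40, 40, 40, 40, 25, 25, 25])
y  = np.array([0.42, 0.55, 0.61, 0.70, 0.48, 0.66, 0.68, 0.81, 0.72, 0.74, 0.73])

# First-order model with two-way interactions, fitted by least squares.
X = np.column_stack([np.ones_like(T, dtype=float), T, pH, DO,
                     T * pH, T * DO, pH * DO])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(temp, ph, do):
    """Model-predicted yield at a new (temperature, pH, DO) setting."""
    x = np.array([1.0, temp, ph, do, temp * ph, temp * do, ph * do])
    return float(x @ coeffs)

print("Predicted yield at 28 °C, pH 6.5, 30 % DO:", round(predict(28, 6.5, 30), 3))
```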
Abstract:
Graphene, first isolated in 2004 and the subject of the 2010 Nobel Prize in physics, has generated a tremendous amount of research interest in recent years due to its incredible mechanical and electrical properties. However, difficulties in large-scale production and low as-prepared surface area have hindered commercial applications. In this dissertation, a new material is described incorporating the superior electrical properties of graphene edge planes into the high surface area framework of carbon nanotube forests using a scalable and reproducible technology.
The objectives of this research were to investigate the growth parameters and mechanisms of a graphene-carbon nanotube hybrid nanomaterial termed “graphenated carbon nanotubes” (g-CNTs), examine the applicability of g-CNT materials for applications in electrochemical capacitors (supercapacitors) and cold-cathode field emission sources, and determine materials characteristics responsible for the superior performance of g-CNTs in these applications. The growth kinetics of multi-walled carbon nanotubes (MWNTs), grown by plasma-enhanced chemical vapor deposition (PECVD), was studied in order to understand the fundamental mechanisms governing the PECVD reaction process. Activation energies and diffusivities were determined for key reaction steps and a growth model was developed in response to these findings. Differences in the reaction kinetics between CNTs grown on single-crystal silicon and polysilicon were studied to aid in the incorporation of CNTs into microelectromechanical systems (MEMS) devices. To understand processing-property relationships for g-CNT materials, a Design of Experiments (DOE) analysis was performed for the purpose of determining the importance of various input parameters on the growth of g-CNTs, finding that varying temperature alone allows the resultant material to transition from CNTs to g-CNTs and finally carbon nanosheets (CNSs): vertically oriented sheets of few-layered graphene. In addition, a phenomenological model was developed for g-CNTs. By studying variations of graphene-CNT hybrid nanomaterials by Raman spectroscopy, a linear trend was discovered between their mean crystallite size and electrochemical capacitance. Finally, a new method for the calculation of nanomaterial surface area, more accurate than the standard BET technique, was created based on atomic layer deposition (ALD) of titanium oxide (TiO2).
Abstract:
In an industrial environment, knowing the process one is working with is crucial to ensure that it functions well. In the present work, developed at the Prio Biocombustíveis S.A. facilities, the methanol recovery process was characterized using process data collected during this work together with historical process data, starting with the characterization of the key process streams. Based on the information retrieved from the stream characterization, the Aspen Plus® process simulation software was used to replicate the process and perform a sensitivity analysis with the objective of assessing the relative importance of certain key process variables (reflux/feed ratio, reflux temperature, reboiler outlet temperature, and methanol, glycerol and water feed compositions). The work proceeded with the application of a set of statistical tools, starting with Principal Component Analysis (PCA), from which the interactions between process variables and their contributions to the process variability were studied. Next, Design of Experiments (DoE) was to be used to acquire experimental data and, with it, create a model for the amount of water in the distillate; however, the necessary conditions to apply this method were not met, so it was abandoned. The Multiple Linear Regression (MLR) method was then used with the available data, creating several empirical models for the water in the distillate, the best fit having an R2 of 92.93% and an AARD of 19.44%. Although the AARD is still relatively high, the model is adequate for fast estimates of the distillate's quality. As for fouling, its presence was noticed many times during this work. Since fouling could not be measured directly, the reboiler inlet steam pressure was used as an indicator of fouling growth and of how that growth varies with the amount of used cooking oil incorporated in the overall process. Comparing the steam cost of the reboiler's operation when fouling is low (steam pressure of 1.5 bar) and when fouling is high (steam pressure of 3 bar), the cost increases by about 58% as fouling builds up.
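For reference, the fit statistics quoted above (R2 and AARD) are typically computed as in the following sketch. The regressor columns and response values are illustrative placeholders, not plant data, and the model is a plain ordinary-least-squares MLR.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit with an intercept column prepended."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def r_squared(y, y_hat):
    """Coefficient of determination of the fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def aard(y, y_hat):
    """Average absolute relative deviation, in percent."""
    return 100.0 * np.mean(np.abs((y - y_hat) / y))

# Placeholder data: the columns could stand for reflux/feed ratio, reflux
# temperature (°C) and reboiler outlet temperature (°C); y for the water
# fraction in the distillate. Illustrative numbers only.
X = np.array([[1.2, 60, 95], [1.5, 62, 97], [1.1, 58, 96],
              [1.8, 65, 99], [1.4, 61, 94], [1.6, 63, 98]])
y = np.array([0.80, 0.60, 0.90, 0.40, 0.70, 0.50])

beta = fit_mlr(X, y)
y_hat = np.column_stack([np.ones(len(X)), X]) @ beta
print("R2   =", round(r_squared(y, y_hat), 4))
print("AARD =", round(aard(y, y_hat), 2), "%")
```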
Abstract:
In this thesis, we deal with the design of experiments in the drug development process, focusing on the design of clinical trials for treatment comparisons (Part I) and the design of preclinical laboratory experiments for protein development and manufacturing (Part II). In Part I we propose a multi-purpose design methodology for sequential clinical trials. We derive optimal allocations of patients to treatments for testing the efficacy of several experimental groups, while also taking ethical considerations into account. We first consider exponential responses for survival trials and then present a unified framework for heteroscedastic experimental groups that encompasses the general ANOVA set-up. The very good performance of the suggested optimal allocations, in terms of both inferential and ethical characteristics, is illustrated analytically and through several numerical examples, including comparisons with other designs proposed in the literature. Part II concerns the planning of experiments for processes composed of multiple steps in the context of preclinical drug development and manufacturing. Following the Quality by Design paradigm, the objective of the multi-step design strategy is the definition of the manufacturing design space of the whole process; because we consider the interactions among the subsequent steps, our proposal ensures the quality and safety of the final product while enabling more flexibility and process robustness in manufacturing.
Abstract:
The general flowshop scheduling problem is a production problem where a set of n jobs has to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only of schemata, evolves under recombination into a population of well-adapted structures (schema instantiation). The CGA implemented is based on the classic NEH heuristic and on a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
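The abstract builds its fitness functions on the classic NEH heuristic. A minimal sketch of that heuristic for permutation flowshop makespan is shown below, with an arbitrary made-up processing-time matrix; the CGA itself, the local search and the Taillard instances are not reproduced here.

```python
import numpy as np

def makespan(sequence, p):
    """Completion time of the last job on the last machine for a permutation
    flowshop; p is an (n_jobs x n_machines) processing-time matrix."""
    completion = np.zeros(p.shape[1])
    for job in sequence:
        completion[0] += p[job, 0]
        for m in range(1, p.shape[1]):
            completion[m] = max(completion[m], completion[m - 1]) + p[job, m]
    return completion[-1]

def neh(p):
    """NEH constructive heuristic: sort jobs by decreasing total processing
    time, then insert each job at the position minimising partial makespan."""
    order = list(np.argsort(-p.sum(axis=1)))
    sequence = [order[0]]
    for job in order[1:]:
        candidates = [sequence[:i] + [job] + sequence[i:]
                      for i in range(len(sequence) + 1)]
        sequence = min(candidates, key=lambda s: makespan(s, p))
    return sequence

# Small example: 5 jobs, 3 machines, arbitrary processing times.
p = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]])
best = neh(p)
print("NEH sequence:", best, "makespan:", makespan(best, p))
```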
Abstract:
Adsorption-based separation operations have gained importance in recent years, especially with the development of techniques that simulate moving beds in columns, such as Simulated Moving Bed (SMB) chromatography. This technology was developed in the early 1960s as an alternative to the True Moving Bed (TMB) process, in order to solve several of the problems associated with the movement of the solid phase that are common in these countercurrent chromatographic separation methods. SMB technology has been widely used at industrial scale, mainly in the petrochemical and sugar-processing industries and, more recently, in the pharmaceutical and fine-chemicals industries. In recent decades, the growing interest in SMB technology, owing to its high yield and efficient solvent consumption, has led to the formulation of different, so-called non-conventional, operating modes that produce more flexible units, capable of increasing separation performance and further broadening the range of application of the technology. One of the most studied and implemented examples is the Varicol process, in which the ports are moved asynchronously. In this context, the present work focuses on the simulation, analysis and evaluation of SMB technology for two distinct separation cases: the separation of a fructose-glucose mixture and the separation of a racemic mixture of pindolol. For both cases, two operating modes of the SMB unit were considered and compared: the conventional mode and the Varicol mode. Both separation cases were implemented and simulated in the Aspen Chromatography process simulator, using two distinct SMB units (conventional SMB and Varicol SMB). For the separation of the fructose-glucose mixture, two approaches were used to model the conventional SMB unit: a true moving bed (TMB model) and a real simulated moving bed (SMB model). For the separation of the racemic pindolol mixture, only the SMB model was considered. In the case of the fructose-glucose separation, both the conventional SMB and Varicol units were further optimized with the aim of increasing their productivities. The optimization was carried out by applying a design of experiments procedure, in which the experiments were planned, conducted and subsequently analysed through analysis of variance (ANOVA). The statistical analysis allowed the selection of the control-factor levels that give the best results for both SMB units.
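The ANOVA step mentioned above can be sketched as follows, assuming hypothetical productivity measurements at three levels of a single SMB control factor; the numbers and the choice of factor are illustrative, not results from the thesis.

```python
from scipy import stats

# Hypothetical productivity values (e.g. kg of product per hour) measured at
# three levels of one SMB control factor; illustrative numbers only.
level_low  = [1.02, 0.98, 1.05, 1.00]
level_mid  = [1.15, 1.18, 1.12, 1.16]
level_high = [1.08, 1.11, 1.07, 1.10]

# One-way ANOVA: does the control-factor level affect mean productivity?
f_stat, p_value = stats.f_oneway(level_low, level_mid, level_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The factor level has a statistically significant effect on productivity.")
```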
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
Sugar esters are substances which possess surfactant, antifungal and bactericidal actions and can be obtained from two renewable sources of raw materials: sugars and vegetable oils. Their excellent biodegradability, allied to the fact that they are non-toxic, tasteless, odourless, biocompatible, non-ionic, digestible and able to resist adverse conditions of temperature, pH and salinity, explains the growing use of these substances in several sectors of industry. The objective of this thesis was to synthesize and characterize surfactants and polymers containing sugar branches in their structures, through enzymatic transesterification of vinyl esters and sugars, using alkaline protease from Bacillus subtilis as catalyst in organic medium (DMF). Three types of sugars were used: L-arabinose, D-glucose and sucrose, and two types of vinyl esters: vinyl laurate and vinyl adipate. Aiming to reach high conversions of substrates to products for a possible future large-scale industrial production, a series of variables was optimized through Design of Experiments (DOE), using Response Surface Methodology (RSM). The investigated variables were: (1) enzyme concentration; (2) molar ratio of substrates; (3) water/solvent ratio; (4) temperature; and (5) time. Six distinct sugar esters were obtained: 5-O-lauroyl L-arabinose, 6-O-lauroyl D-glucose, 1'-O-lauroyl sucrose, 5-O-vinyladipoyl L-arabinose, 6-O-vinyladipoyl D-glucose and 1'-O-vinyladipoyl sucrose, the last three being polymerizable. The progress of the reaction was monitored by HPLC analysis, through the decrease of sugar concentration in comparison to the blank. Qualitative analysis by TLC confirmed the formation of the products. In the purification step, two methodologies were adopted: (1) chromatographic column and (2) extraction with hot acetone. The acylation position and the chemical structure were determined by 13C-NMR. The polymerization of the three vinyl sugar esters was carried out through chemical catalysis, using H2O2 and K2S2O8 as initiators, at 60 °C, for 24 hours. IR spectra of the monomers and their respective polymers were compared, revealing the disappearance of the vinyl group in the polymer spectra. The molar weights of the polymers were determined by GPC, with the following results: poly(5-O-vinyladipoyl L-arabinose): Mw = 7.2 × 10^4, PD = 2.48; poly(6-O-vinyladipoyl D-glucose): Mw = 2.7 × 10^3, PD = 1.75; and poly(1'-O-vinyladipoyl sucrose): Mw = 4.2 × 10^4, PD = 6.57. The six sugar esters were submitted to surface tension tests for determination of the critical micelle concentrations (CMC), which varied from 122 to 167 ppm. Finally, a study of the applicability of these sugar esters as lubricants for completion fluids of petroleum wells was accomplished through comparative analysis of their efficiency in relation to three commercial lubricants. The products synthesized in this thesis presented equivalent or superior action to the tested commercial products.
Abstract:
Clay swelling is today one of the major problems during well drilling. Nearly 50% of the clays that constitute shale expand easily in the presence of water molecules. During the drilling of a geological formation containing swelling clays, when the use of water-based fluids is feasible, it is necessary to apply clay inhibitors. This avoids the incorporation of the cuttings into the drilling fluid, which causes swelling and crumbling of the well wall. The aim of this work was to evaluate the synergistic behavior that occurs when swelling-clay inhibitors are associated with NaCl and KCl salts. Three swelling-clay inhibitor samples, INIB A, INIB B and INIB C, were analyzed. Each inhibitor was characterized by its chloride and active matter contents. To evaluate the water-clay interaction in the presence of various fluids, the Capillary Suction Timer (CST, Fann) and the Linear Swell Meter (LSM 2000, Fann) were used. For better interpretation of the results, a Design of Experiments (DOE, Umetrics MODDE 7.0™) through Response Surface Methodology (RSM) was employed, taking into account the inhibitor type, the swelling-inhibitor concentration and the contact time with the clay. The results showed different efficiencies among the inhibitors employed, and the salt-inhibitor mixtures were more efficient than either product alone. However, for field operation, other parameters should be taken into account, such as operational cost, environmental requirements and time of application for each product.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The main theme of this dissertation is the use of mathematical models for process optimization. The current scenario of strong competition for the consumer market necessitates improvements that enhance the performance of the process as a whole, whether by reducing costs or by increasing efficiency or effectiveness. Thus, the use of methodologies to assist in this process is becoming increasingly viable, and methodologies developed in the past are being studied and improved. An example is the desirability function, the object of the present study, which was developed in the 1980s and has been improved over time. To understand and study this methodology, the desirability function was applied to three cases based on Design of Experiments (DOE) taken from scientific papers, using the Solver tool (Excel®) and the desirability tool (Minitab®). Thus, in addition to studying the methodology, it was possible to compare the performance of the optimization tools in different situations. From the results of this study, it was possible to verify the superiority of one of the models studied in a fair comparison.
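A minimal sketch of a Derringer-type desirability calculation of the kind studied above, assuming two hypothetical responses (a yield to maximise and a cost to minimise) and made-up acceptability ranges; it is not the authors' implementation in Solver or Minitab.

```python
import numpy as np

def desirability_maximize(y, low, high, weight=1.0):
    """'Larger is better' desirability: 0 below `low`, 1 above `high`,
    a power curve in between."""
    d = (np.asarray(y, dtype=float) - low) / (high - low)
    return np.clip(d, 0.0, 1.0) ** weight

def desirability_minimize(y, low, high, weight=1.0):
    """'Smaller is better' desirability: 1 below `low`, 0 above `high`."""
    d = (high - np.asarray(y, dtype=float)) / (high - low)
    return np.clip(d, 0.0, 1.0) ** weight

def overall_desirability(*individual):
    """Geometric mean of the individual desirabilities."""
    stacked = np.vstack(individual)
    return stacked.prod(axis=0) ** (1.0 / len(individual))

# Two hypothetical responses for three candidate factor settings.
yield_pct = [78, 85, 91]        # to be maximised (acceptable range 70-95 %)
cost      = [12.0, 10.5, 14.0]  # to be minimised (acceptable range 9-15)

d1 = desirability_maximize(yield_pct, low=70, high=95)
d2 = desirability_minimize(cost, low=9, high=15)
D  = overall_desirability(d1, d2)
print("Overall desirability per setting:", np.round(D, 3))
# The setting with the largest D is the preferred compromise.
```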
Abstract:
Hardness plays an important role in quality control, in research studies, and in the metallurgical and mechanical specification, selection and comparison of materials. This property is of extreme importance in the oil industry because it is a determining factor in ascertaining the safety of the materials used in pressure vessels and pipelines. Since the equipment cannot be stopped while the hardness is checked, portable hardness testers based on the UCI method are widely used; their great advantages are that the test is fast and simple to perform, can be considered non-destructive, and offers a good cost-benefit ratio. The objective is to determine whether there is a significant difference in hardness measurements between surfaces prepared with 80-grit and 1200-grit sandpaper, using a portable UCI hardness tester, on the material used in gas storage spheres, ASTM 516 Gr 70. After determining the number of measurements required for homogeneity, a hardness profile was performed to isolate the major factors influencing the hardness of the part: cold rolling and segregation of impurities. The factors cooling and sanding were analyzed using the design of experiments (DOE) method, which demonstrated that neither the variables nor their interaction has a significant influence on the hardness measurements made with the portable MIC 10. This finding leads to a reduction in the time and cost of surface preparation.