899 results for response surface methodology (RSM)
Abstract:
High aspect ratio polymeric micro-patterns are ubiquitous in fields ranging from sensors, actuators, and optics to fluidics and medicine. Second-generation PDMS molds are replicated against first-generation silicon molds created by deep reactive ion etching. To ensure successful demolding, the silicon molds are coated with a thin layer of C₄F₈ plasma polymer to reduce the adhesion force. Peel force and demolding status are used to determine whether delamination is successful. Response surface methodology is employed to provide insight into how changes in coil power, passivation time, and gas flow conditions affect the plasma polymerization of C₄F₈.
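The response-surface step described above amounts to fitting a full second-order model in the three factors. A minimal sketch in Python, assuming hypothetical coded factor levels and synthetic peel-force readings; the study's actual design and data are not reproduced here:

```python
# Minimal second-order response surface fit for the three factors named
# above; coded levels and peel-force values are hypothetical, not the
# study's data.
import numpy as np

rng = np.random.default_rng(0)
# Full 3^3 grid of coded levels for coil power, passivation time, gas flow.
X = np.array([[p, t, g] for p in (-1, 0, 1)
                        for t in (-1, 0, 1)
                        for g in (-1, 0, 1)], float)
y = 5.0 - 1.2 * X[:, 0] + 0.8 * X[:, 1] ** 2 + rng.normal(0, 0.1, len(X))

def second_order(X):
    """Intercept, linear, two-way interaction, and pure quadratic terms."""
    p, t, g = X.T
    return np.column_stack([np.ones(len(X)), p, t, g,
                            p * t, p * g, t * g, p**2, t**2, g**2])

beta = np.linalg.lstsq(second_order(X), y, rcond=None)[0]
print("fitted coefficients:", np.round(beta, 3))
```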
Abstract:
Oviposition behaviour is important when modelling the population dynamics of many invertebrates. The numbers of eggs laid are frequently used to describe fecundity, but this measure may differ significantly from realised fecundity. Oviposition has been shown to be important when describing the dynamics of slug populations, which are important agricultural pests. The numbers of eggs laid by Deroceras reticulatum and their viability were measured across 16 experimental combinations of temperature (4, 10, 15 and 23 degrees C) and soil moisture (33%, 42%, 53% and 58% by dry soil weight). A fitted quadratic response surface model was used to estimate how D. reticulatum adjusted its egg laying to the surrounding temperature and moisture conditions, with most eggs being laid at a combination of 53% soil moisture and 18 degrees C. The number and proportion of viable eggs also covaried with temperature and moisture, suggesting that D. reticulatum may alter its investment in reproduction to maximise its fitness. We have shown that the number of viable eggs differs from the total number of eggs laid by D. reticulatum. Changes in egg viability with temperature and moisture may also be seen in other species and should be considered when modelling populations of egg-laying invertebrates.
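The optimum of a fitted quadratic surface (the 18 degrees C / 53% moisture combination reported above) is the stationary point obtained by setting the gradient to zero. A short illustration with fabricated egg counts centred on that optimum; only the algebra, not the data, reflects the study:

```python
# Quadratic surface fit and stationary point for eggs ~ f(T, M); the egg
# counts are fabricated around the optimum reported above (18 C, 53%).
import numpy as np

temps = np.array([4.0, 10.0, 15.0, 23.0])
moist = np.array([33.0, 42.0, 53.0, 58.0])
Tg, Mg = np.meshgrid(temps, moist)
T, M = Tg.ravel(), Mg.ravel()
eggs = 100 - 0.5 * (T - 18) ** 2 - 0.2 * (M - 53) ** 2   # synthetic counts

A = np.column_stack([np.ones_like(T), T, M, T * M, T**2, M**2])
b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(A, eggs, rcond=None)[0]

# Stationary point: solve grad f = 0, a 2x2 linear system in (T*, M*).
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
t_star, m_star = np.linalg.solve(H, [-b1, -b2])
print(f"estimated optimum: {t_star:.1f} degrees C, {m_star:.1f}% moisture")
```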
Abstract:
Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of synergism, in which the joint action is greater than the actions of the individual drugs, or of antagonism, in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is illustrated using data from a pre-clinical study. Copyright © 2007 John Wiley & Sons, Ltd.
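As a rough illustration of the two no-interaction references named here, the sketch below evaluates a four-parameter logistic curve with common limits and the Bliss independence surface; all parameter values are invented, not the paper's fits, and the iterative Loewe fit (implemented there with SAS NLIN) is only noted in a comment:

```python
# Illustrative 4PL curves with common limits and the Bliss-independence
# reference; parameter values are invented, not the paper's fits.

def fourpl(dose, lower, upper, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return lower + (upper - lower) / (1.0 + (ec50 / max(dose, 1e-12)) ** hill)

def bliss_reference(fa, fb):
    """Bliss independence: expected combined fractional effect."""
    return fa + fb - fa * fb

f1 = fourpl(1.0, 0.0, 1.0, ec50=1.5, hill=1.2)   # drug A at dose 1.0
f2 = fourpl(2.0, 0.0, 1.0, ec50=3.0, hill=0.8)   # drug B at dose 2.0
print("Bliss expected combined effect:", round(bliss_reference(f1, f2), 3))

# Loewe additivity instead solves d1/D1(E) + d2/D2(E) = 1 for the common
# effect E, which must be done iteratively when the curves are not parallel.
```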
Abstract:
The evidence provided by modelled assessments of future climate impact on flooding is fundamental to water resources and flood risk decision making. Impact models usually rely on climate projections from global and regional climate models (GCM/RCMs). However, challenges in representing precipitation events at catchment-scale resolution mean that decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs. Here the impacts on projected high flows of differing ensemble approaches and application of Model Output Statistics to RCM precipitation are evaluated while assessing climate change impact on flood hazard in the Upper Severn catchment in the UK. Various ensemble projections are used together with the HBV hydrological model with direct forcing and also compared to a response surface technique. We consider an ensemble of single-model RCM projections from the current UK Climate Projections (UKCP09); multi-model ensemble RCM projections from the European Union's FP6 ‘ENSEMBLES’ project; and a joint probability distribution of precipitation and temperature from a GCM-based perturbed physics ensemble. The ensemble distribution of results shows that flood hazard in the Upper Severn is likely to increase compared to present conditions, but the study highlights the differences between the results from different ensemble methods and the strong assumptions made in using Model Output Statistics to produce the estimates of future river discharge. The results underline the challenges in using the current generation of RCMs for local climate impact studies on flooding. Copyright © 2012 Royal Meteorological Society
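Model Output Statistics covers several post-processing techniques; one common choice for RCM precipitation is empirical quantile mapping, sketched below on synthetic data. This illustrates the general idea only and may differ from the study's actual procedure:

```python
# Empirical quantile mapping, one common Model Output Statistics technique
# for RCM precipitation; data are synthetic and the study's actual MOS
# procedure may differ.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)    # synthetic observed daily precipitation
rcm = rng.gamma(2.0, 2.0, 5000)    # synthetic (dry-biased) RCM precipitation

def quantile_map(x, model_ref, obs_ref):
    """Map model values onto the observed distribution via empirical CDFs."""
    ranks = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(ranks, 0.0, 1.0))

corrected = quantile_map(rcm, rcm, obs)
print("mean before/after:", round(float(rcm.mean()), 2),
      round(float(corrected.mean()), 2))
```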
Abstract:
Future climate change projections are often derived from ensembles of simulations from multiple general circulation models using heuristic weighting schemes. This study provides a more rigorous justification for this by introducing a nested family of three simple analysis of variance frameworks. Statistical frameworks are essential in order to quantify the uncertainty associated with the estimate of the mean climate change response. The most general framework yields the “one model, one vote” weighting scheme often used in climate projection. However, a simpler additive framework is found to be preferable when the climate change response is not strongly model dependent. In such situations, the weighted multimodel mean may be interpreted as an estimate of the actual climate response, even in the presence of shared model biases. Statistical significance tests are derived to choose the most appropriate framework for specific multimodel ensemble data. The framework assumptions are explicit and can be checked using simple tests and graphical techniques. The frameworks can be used to test for evidence of nonzero climate response and to construct confidence intervals for the size of the response. The methodology is illustrated by application to North Atlantic storm track data from the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel ensemble. Despite large variations in the historical storm tracks, the cyclone frequency climate change response is not found to be model dependent over most of the region. This gives high confidence in the response estimates. Statistically significant decreases in cyclone frequency are found on the flanks of the North Atlantic storm track and in the Mediterranean basin.
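The testing logic can be mimicked with a one-way analysis of variance on per-model responses: equal model weights give the “one model, one vote” mean, and an F-test checks whether the response is model dependent. A toy sketch with synthetic numbers standing in for the CMIP5 storm-track diagnostics:

```python
# Toy version of the framework choice: equal model weights ("one model,
# one vote") for the mean response, and a one-way ANOVA F-test for model
# dependence. Synthetic numbers stand in for CMIP5 storm-track responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# responses[m]: change (future minus historical) for each run of model m;
# a common mean of -0.5 mimics a response that is not model dependent.
responses = [rng.normal(-0.5, 0.3, n) for n in (3, 4, 2, 5)]

model_means = np.array([r.mean() for r in responses])
print("weighted multimodel mean response:", round(float(model_means.mean()), 3))

F, p = stats.f_oneway(*responses)
print(f"F = {F:.2f}, p = {p:.3f}  (large p: no evidence of model dependence)")
```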
Abstract:
A method for the simultaneous determination of the stilbene resveratrol, four phenolic acids (syringic, coumaric, caffeic, and gallic acids), and five flavonoids (catechin, rutin, kaempferol, myricetin, and quercetin) in wine by CE was developed and validated. The CE electrolyte composition and instrumental conditions were optimized using a 2^(7-3) fractional factorial design and response surface analysis, showing sodium tetraborate, MeOH, and their interaction to be the most influential variables. The optimal electrophoretic conditions, minimizing the chromatographic resolution statistic values, consisted of 17 mmol/L sodium tetraborate with 20% methanol as electrolyte, a constant voltage of 25 kV, hydrodynamic injection at 50 mbar for 3 s, and a temperature of 25 degrees C. The R² values for linearity varied from 0.994 to 0.999; LODs and LOQs were 0.1 to 0.3 mg/L and 0.4 to 0.8 mg/L, respectively. The RSDs for migration time and peak area obtained from ten consecutive injections were less than 2%, and recoveries varied from 97 to 102%. The method was applied to 23 samples of inexpensive Brazilian wines, showing wide compositional variation.
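A 2^(7-3) fractional factorial design screens seven two-level factors in 16 runs instead of the 128 a full factorial would require. A sketch of its construction, assuming the standard resolution-IV generators E=ABC, F=BCD, G=ACD (the paper does not state its aliasing structure):

```python
# Constructing a 16-run 2^(7-3) fractional factorial with the standard
# resolution-IV generators E=ABC, F=BCD, G=ACD (illustrative; the paper's
# exact aliasing structure is not stated).
import itertools
import numpy as np

base = np.array(list(itertools.product((-1, 1), repeat=4)))  # factors A-D
A, B, C, D = base.T
design = np.column_stack([A, B, C, D, A * B * C, B * C * D, A * C * D])
print(design.shape)  # (16, 7): 16 runs instead of the 128 of a full 2^7
```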
Abstract:
This work emphasizes the inclusion of uncertainties in the evaluation of structural behaviour, aiming at a better representation of the characteristics of the system and a quantification of the significance of these uncertainties in design. Comparisons are made between existing classical reliability analysis techniques, such as FORM, direct Monte Carlo simulation (MC) and Monte Carlo simulation with adaptive importance sampling (MCIS), and the approximate methods of the response surface (RS) and artificial neural networks (ANN). Where possible, the comparisons highlight the advantages and drawbacks of each technique in problems of increasing complexity, from formulations with explicit limit state functions to implicit formulations with spatial variability of loading and material properties, including stochastic fields. Particular attention is given to the reliability analysis of reinforced concrete structures, including the effect of the spatial variability of their properties. To this end, a finite element model is proposed for the representation of reinforced concrete that incorporates the main characteristics observed in this material. A model was also developed for the generation of non-Gaussian multidimensional stochastic fields for the material properties, independent of the finite element mesh, and techniques were implemented to accelerate the structural evaluations required by all of the techniques employed. For the reliability treatment via the response surface technique, the algorithm developed by Rajashekhar et al. (1993) was implemented. For the treatment via artificial neural networks, codes were developed for simulating multilayer perceptron networks and radial basis function networks and then embedded in the reliability evaluation algorithm developed by Shao et al. (1997). In general, the simulation techniques performed rather poorly on the more complex problems, while the first-order technique FORM and the approximate response surface and artificial neural network techniques stood out, albeit with accuracy compromised by the approximations involved.
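Of the techniques compared above, direct Monte Carlo simulation is the simplest to sketch. A toy example with an explicit limit state function g = R - S and invented load/resistance distributions:

```python
# Toy direct Monte Carlo reliability estimate with an explicit limit state
# g(R, S) = R - S; the distributions below are invented for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1_000_000
R = rng.normal(20.0, 2.0, n)      # resistance
S = rng.normal(12.0, 3.0, n)      # load effect
pf = np.mean(R - S < 0.0)         # P[g < 0], estimated by sampling
print(f"pf ~ {pf:.2e}, reliability index beta ~ {-norm.ppf(pf):.2f}")
```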
Abstract:
This graduation project addresses the relationship between the specifications of alkyd resins and of paints prepared with these resins, focusing on solids content and viscosity, using the techniques of design of experiments, response surface analysis and regression analysis. The main objective is to study the ideal specification limits for the alkyd resin, so that paint batches exhibit properties within specification at the end of processing, reducing the incidence of production batches requiring adjustment. The consequences are reduced rework and manufacturing lead time, better product costing, higher intrinsic quality, and greater confidence in the development, production and quality control system. Initially, a literature review was carried out covering paint and alkyd resin technology, quality control concepts, design of experiments, response surface analysis and regression analysis. Next, a case study was conducted at Killing S.A. Tintas e Adesivos, at the plant located in the city of Novo Hamburgo. The experimental results yielded valid polynomial regression models for the evaluated properties. Solids content and Ford Cup #2 viscosity of the A+B mixture were taken as the parameters for analysing the specification limits of the alkyd resin, and the variability currently allowed was shown to be excessive. Application of the regression models indicated new, narrower specification limits for the alkyd resin, enabling paints with the specified properties to be obtained.
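The regression step can be sketched as an ordinary least-squares fit linking resin properties to a paint property, which is then used to screen specification limits. Variable names, units and data below are hypothetical; the case-study measurements are not reproduced:

```python
# Hypothetical regression sketch: paint viscosity as a polynomial in resin
# solids content and resin viscosity. Names, units and data are invented;
# the case-study measurements are not reproduced.
import numpy as np

rng = np.random.default_rng(4)
solids = rng.uniform(48, 55, 30)        # resin solids content, %
visc_r = rng.uniform(1500, 2500, 30)    # resin viscosity, mPa.s
paint_v = (40 + 0.8 * (solids - 50) + 0.01 * (visc_r - 2000)
           + rng.normal(0, 0.5, 30))    # synthetic Ford Cup #2 time, s

X = np.column_stack([np.ones(30), solids, visc_r, solids * visc_r])
coef = np.linalg.lstsq(X, paint_v, rcond=None)[0]
pred = X @ coef
r2 = 1 - ((paint_v - pred) ** 2).sum() / ((paint_v - paint_v.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 4), "R^2:", round(float(r2), 3))
```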
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Environmental sustainability has become one of the topics of greatest interest in industry, largely because of effluent generation. Phenols occur in the effluents of many industries, including refineries, coal processing, pharmaceuticals, plastics, paints, and pulp and paper. Because phenolic compounds are toxic to humans and aquatic organisms, Federal Resolution CONAMA No. 430 of 13.05.2011 limits the maximum phenol content for release into freshwater bodies to 0.5 mg/L. In effluent treatment, liquid-liquid extraction is the most economical process for phenol recovery because it consumes little energy, but in most cases it employs an organic solvent, whose high toxicity can cause environmental problems. Hence there is a need for new methodologies that replace these solvents with biodegradable ones. Literature studies demonstrate the feasibility of removing phenolic compounds from aqueous effluents with biodegradable solvents. In this kind of extraction, called "cloud point extraction", a nonionic surfactant is used as the extracting agent for the phenolic compounds. To optimize the phenol extraction process, this work studies the mathematical modelling and optimization of the extraction parameters and investigates the effect of the independent variables on the process. A 3² full factorial design was carried out with operating temperature and surfactant concentration as independent variables and, as dependent variables, the extraction parameters: volumetric fraction of the coacervate phase, residual surfactant and phenol concentrations in the dilute phase after phase separation, and phenol extraction efficiency. To achieve these objectives, the work was carried out in five steps: (i) selection of literature data; (ii) use of the Box-Behnken model to find mathematical models that describe the phenol extraction process; (iii) data analysis with STATISTICA 7.0, with analysis of variance used to assess model significance and prediction; (iv) model optimization using the response surface method; and (v) validation of the mathematical models using additional measurements from samples different from those used to build the models. The results showed that the fitted mathematical models can calculate the effect of surfactant concentration and operating temperature on each extraction parameter studied, within the ranges investigated. The model optimization yielded consistent and applicable results in a simple and quick way, leading to high efficiency in process operation.
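Steps (ii) to (iv) reduce to fitting a quadratic model of each extraction parameter in temperature and surfactant concentration and optimizing it over the experimental region. A sketch with synthetic efficiencies in place of the literature data used in the dissertation:

```python
# Sketch of steps (ii)-(iv): fit a quadratic model of extraction efficiency
# in operating temperature and surfactant concentration, then optimize it
# over the studied region. All values are synthetic placeholders.
import numpy as np

T = np.repeat([50.0, 60.0, 70.0], 3)     # temperature levels, degrees C
C = np.tile([0.5, 1.0, 1.5], 3)          # surfactant concentration, % w/v
eff = 90 - 0.05 * (T - 65) ** 2 - 8 * (C - 1.2) ** 2   # synthetic efficiency, %

A = np.column_stack([np.ones_like(T), T, C, T * C, T**2, C**2])
beta = np.linalg.lstsq(A, eff, rcond=None)[0]

# Evaluate the fitted surface on a grid and pick the predicted optimum.
Tg, Cg = np.meshgrid(np.linspace(50, 70, 101), np.linspace(0.5, 1.5, 101))
G = np.column_stack([np.ones(Tg.size), Tg.ravel(), Cg.ravel(),
                     (Tg * Cg).ravel(), (Tg ** 2).ravel(), (Cg ** 2).ravel()])
best = np.argmax(G @ beta)
print(f"predicted optimum: T = {Tg.ravel()[best]:.0f} C, "
      f"surfactant = {Cg.ravel()[best]:.2f}%")
```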
Abstract:
In recent decades, the generation of solid and liquid waste has increased substantially owing to the growth of industrial activity, which is directly linked to economic growth. Even the most efficient processes inevitably generate such wastes. In the oil industry, a major waste stream generated during oil exploration is produced water, which, because of its complex composition and the large volumes generated, has become a challenge given the restrictions imposed by environmental laws on its disposal; alternatives for reuse or treatment are therefore needed to reduce the contaminant content and the harmful effects on the environment. This water can be present in free form or emulsified with the oil; when it forms an oil-in-water emulsion, chemicals must be used to promote separation, and flotation has proved to be the most efficient treatment method because it removes much of the emulsified oil compared with other methods. In this context, the aim of this work was to study the individual effects and interactions of selected physicochemical operating parameters, based on previous work with a flotation cell used to separate a synthetic oil/water emulsion, in order to optimize the separation efficiency through a 2⁴ full factorial design with center point. The response variables used to evaluate the separation efficiency were the percentage removal of color and turbidity. The independent variables were demulsifier concentration, oil content in water, salinity, and pH, each set at fixed minimum and maximum limits. The analysis of variance for the empirical model equation was statistically significant and useful for predicting the separation efficiency of the flotation cell, with R² > 90%. The results showed that the oil content in water, and the interaction between oil content and salinity, gave the largest estimated effects among all the factors investigated, with a strong positive influence on the separation efficiency. Analysis of the response surface indicated a maximum removal efficiency above 90%, for both turbidity and color, in a saline medium (30 g/L), at high oil concentrations (306 ppm), using low demulsifier concentrations (1.1 ppm) and at near-neutral pH.
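In a 2⁴ full factorial with ±1 coding, each main effect and interaction is estimated from a simple contrast: the effect equals twice the mean of the response multiplied by the coded column. A sketch with synthetic removal percentages, not the thesis data:

```python
# Standard contrast arithmetic for a 2^4 full factorial in coded (+/-1)
# units; the removal percentages are synthetic, not the thesis data.
import itertools
import numpy as np

X = np.array(list(itertools.product((-1, 1), repeat=4)), float)  # 16 runs
rng = np.random.default_rng(5)
# Synthetic removal (%): strong oil-content effect plus an oil x salinity
# interaction, echoing the factors the abstract reports as dominant.
y = 80 + 5 * X[:, 1] + 3 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 16)

for j, name in enumerate(["demulsifier", "oil content", "salinity", "pH"]):
    print(f"main effect {name:12s}: {2 * np.mean(y * X[:, j]):+.2f}")
print(f"oil x salinity interaction: {2 * np.mean(y * X[:, 1] * X[:, 2]):+.2f}")
```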
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)