919 results for Process control -- Statistical methods


Abstract:

Beyond the classical statistical approaches (basic descriptive statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data on the characteristics of forest soils. This can be seen in several recent publications in the context of multivariate statistics. These newer methods require additional care that is not always taken, or even mentioned, in some approaches. In the particular case of geostatistical applications it is necessary, besides geo-referencing all data acquisition, to collect the samples on regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. Most multivariate techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although they generally do not require the assumption of a normal distribution, nevertheless demand a proper and rigorous strategy for their use. In this work, some reflections on these methodologies are presented, in particular on the main constraints that often arise during data collection and on the various ways of linking these different techniques. Finally, illustrations of some particular applications of these statistical methods are also presented.
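As an illustration of the multivariate workflow described above, the following is a minimal sketch, not taken from the original work, of standardising a table of soil properties and applying Principal Component Analysis with scikit-learn; the variables and data are hypothetical placeholders.

```python
# Minimal sketch: PCA on a hypothetical table of forest-soil properties.
# Column meanings and data are illustrative only, not from the original study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 40 samples x 5 soil properties (e.g. pH, organic C, N, clay %, CEC)
X = rng.normal(size=(40, 5))

# Standardise first: PCA is scale-sensitive and soil variables use mixed units.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)           # sample coordinates on PC1/PC2
print("explained variance ratio:", pca.explained_variance_ratio_)
print("loadings (components x variables):\n", pca.components_)
```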

Abstract:

In recent years the semiconductor industry, in particular memory production, has undergone a major evolution. The need to lower production costs, as well as to produce more complex systems with greater capacity, led to the creation of Wafer Level Packaging (WLP) technology. This technology makes it possible to produce smaller systems, to simplify the process flow and to achieve a significant reduction in the final product cost. WLP is a technology for packaging integrated circuits while they are still part of wafers, in contrast with the traditional method in which the devices are singulated before being packaged. With the development of this technology came the need to better understand the mechanical behaviour of the mold compound (MC, the encapsulating polymer), more specifically the warpage of molded wafers. Warpage is a characteristic of this product and is due to the difference in the coefficient of thermal expansion between the silicon and the mold compound; it shows up as bowing of the molded wafers. Warpage of molded wafers has a major impact on manufacturing: depending on its magnitude and orientation, transport, handling and processing of the wafers can become difficult or even impossible, which translates into reduced production volume and lower product quality. This dissertation was developed at Nanium S.A., a Portuguese company and world leader in WLP technology on 300 mm wafers, and addresses the use of the Taguchi methodology to study the variability of the debond process for product X. The choice of process and product was based on a statistical analysis of the variation and impact of warpage along the production line. The Taguchi methodology is a quality control methodology that allows a systematic approach to a given process, combining control charts, process/product control and process design to achieve a robust process. The results of this method, when correctly implemented, yield significant savings in processes with a significant financial impact. This project made it possible to study and quantify warpage along the production line and to reduce the impact of this characteristic on the debond process. It also promoted discussion and alignment between the different production areas regarding process control and improvement. It was demonstrated that the Taguchi method is an efficient method for studying the variability of a process and optimising its parameters. Its application to the debond process improved both the reliability of the process, in terms of product quality assurance, and the production throughput.
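A minimal sketch of the kind of analysis the Taguchi method involves: computing a "smaller-is-better" signal-to-noise ratio for warpage measured under each run of a hypothetical L4 orthogonal array. The factors, levels and data below are illustrative assumptions, not Nanium's actual debond parameters.

```python
# Hedged sketch: Taguchi "smaller-is-better" S/N analysis for warpage (illustrative data).
import numpy as np

# Standard L4(2^3) orthogonal array: 3 two-level factors, 4 runs.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])

# Hypothetical warpage replicates (um) measured for each run.
warpage = np.array([[310., 325.],
                    [280., 295.],
                    [260., 250.],
                    [300., 315.]])

# Smaller-is-better S/N ratio: -10 * log10(mean(y^2)) per run.
sn = -10.0 * np.log10((warpage ** 2).mean(axis=1))

# Average S/N per factor level; the level with the higher S/N is preferred.
for factor in range(L4.shape[1]):
    for level in (1, 2):
        mean_sn = sn[L4[:, factor] == level].mean()
        print(f"factor {factor + 1}, level {level}: mean S/N = {mean_sn:.2f} dB")
```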

Abstract:

Pollution of water with pesticides has become a threat to humans, materials and the environment. Pesticides released to the environment reach water bodies through runoff. Industrial wastewater from pesticide manufacturing contains pesticides at high concentrations and is hence a major source of water pollution. Pesticides create many health and environmental hazards, including diseases such as cancer, liver and kidney disorders, reproductive disorders, foetal death and birth defects. Conventional wastewater treatment plants based on biological treatment are not efficient enough to remove these compounds to the desired level. Most pesticides are toxic to the microorganisms responsible for their degradation and are recalcitrant in nature. Advanced oxidation processes (AOPs) are a class of oxidation techniques in which hydroxyl radicals are employed to oxidise pollutants; they have the ability to totally mineralise organic pollutants to CO2 and water. Different methods are employed for the generation of hydroxyl radicals in AOP systems. Acetamiprid is a neonicotinoid insecticide widely used to control sucking insects on crops such as leafy vegetables, citrus fruits, pome fruits, grapes, cotton and ornamental flowers. It is now recommended as a substitute for organophosphorus pesticides. Since its use is increasing, it is increasingly found in the environment. It has high water solubility, is not easily biodegradable, and has the potential to pollute surface and ground waters. Here, the use of AOPs for the removal of acetamiprid from wastewater has been investigated. Five methods were selected for the study based on a literature survey and preliminary experiments: the Fenton process, UV treatment, the UV/H2O2 process, photo-Fenton and photocatalysis using TiO2. Undoped TiO2 and TiO2 doped with Cu and Fe were prepared by the sol-gel method. The prepared catalysts were characterised by X-ray diffraction, scanning electron microscopy, differential thermal analysis and thermogravimetric analysis. The influence of the major operating parameters on the removal of acetamiprid was investigated. All experiments were designed using the central composite design (CCD) of response surface methodology (RSM). Model equations were developed for Fenton, UV/H2O2, photo-Fenton and photocatalysis for predicting acetamiprid removal and total organic carbon (TOC) removal under different operating conditions. The quality of the models was analysed by statistical methods, and experimental validations were carried out to confirm it. Optimum conditions obtained by experiment were verified against those obtained using the response optimiser. The Fenton process is the simplest and oldest AOP, in which hydrogen peroxide and iron are employed for the generation of hydroxyl radicals. The influence of H2O2 and Fe2+ on acetamiprid removal and TOC removal by the Fenton process was investigated, and it was found that removal increases with increasing H2O2 and Fe2+ concentration. For an initial acetamiprid concentration of 50 mg/L, 200 mg/L H2O2 and 20 mg/L Fe2+ at pH 3 were found to be optimal for acetamiprid removal. For UV treatment, the effect of pH was studied and it was found that pH has little effect on the removal rate. Addition of H2O2 to the UV process increased the removal rate because of hydroxyl radical formation from the photolysis of H2O2. An H2O2 concentration of 110 mg/L at pH 6 was found to be optimal for acetamiprid removal.
With photo-Fenton, a drastic reduction in treatment time was observed, together with a tenfold reduction in the amount of reagents required. An H2O2 concentration of 20 mg/L and an Fe2+ concentration of 2 mg/L were found to be optimal at pH 3. With TiO2 photocatalysis, an improvement in the removal rate was observed compared to UV treatment. The effect of Cu and Fe doping on the photocatalytic activity under UV light was studied: Cu doping slightly enhanced the removal rate while Fe doping decreased it. Maximum acetamiprid removal was observed for an optimum catalyst loading of 1000 mg/L and a Cu concentration of 1 wt%. The mineralisation efficiency of the processes was low compared to the acetamiprid removal efficiency, which may be due to stable intermediate compounds formed during degradation. Kinetic studies were conducted for all treatment processes, and it was found that all follow pseudo-first-order kinetics. Rate constants were determined from the experimental data for all processes and half-lives were calculated. The rate of reaction was in the order photo-Fenton > UV/H2O2 > Fenton > TiO2 photocatalysis > UV. Operating costs were calculated, and it was found that photo-Fenton removes acetamiprid at the lowest operating cost and in the shortest time. A kinetic model was developed for the photo-Fenton process using the elementary reaction data and mass balance equations for the species involved. The variation of acetamiprid concentration with time for different H2O2 and Fe2+ concentrations at pH 3 can be predicted with this model, which was validated by comparing the simulated concentration profiles with those obtained from experiments. This study established the viability of the selected AOPs for the removal of acetamiprid from wastewater. Of the AOPs studied, photo-Fenton gives the highest removal efficiency at the lowest operating cost and within the shortest time.
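Pseudo-first-order kinetics imply C = C0·exp(-kt), i.e. ln(C0/C) = k·t. A hedged sketch of estimating the rate constant and half-life by linear regression follows; the concentration data are made-up illustrations, not the thesis measurements.

```python
# Hedged sketch: fitting pseudo-first-order kinetics ln(C0/C) = k*t (illustrative data).
import numpy as np

t = np.array([0., 5., 10., 20., 30., 45., 60.])       # time, min
C = np.array([50., 38., 29., 17., 10., 4.5, 2.1])      # acetamiprid, mg/L (made up)

y = np.log(C[0] / C)                                    # ln(C0/C)
k, intercept = np.polyfit(t, y, 1)                      # slope = pseudo-first-order constant

half_life = np.log(2) / k
print(f"k = {k:.4f} 1/min, t1/2 = {half_life:.1f} min")
```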

Abstract:

A promising technique for the large-scale manufacture of micro-fluidic devices and photonic devices is hot embossing of polymers such as PMMA. Micro-embossing is a deformation process in which the workpiece material is heated to permit easier material flow and then forced over a planar patterned tool. While considerable attention has been paid to process feasibility, very little effort has been put into production issues such as process capability and eventual process control. In this paper, we present initial studies aimed at identifying the origins and magnitude of variability when embossing features at the micron scale in PMMA. Test parts with features ranging from 3.5 to 630 µm wide and 0.9 µm deep were formed. Measurements at this scale proved very difficult, and only atomic force microscopy was able to provide resolution sufficient to identify process variations. It was found that standard deviations of widths at the 3-4 µm scale were on the order of 0.5 µm, leading to a coefficient of variation as high as 13%. Clearly, the transition from test to manufacturing for this process will require understanding the causes of this variation and devising control methods to minimize its magnitude over all types of parts.
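For reference, the coefficient of variation quoted above is simply the standard deviation divided by the mean feature width; a quick check with the reported numbers (the 3.9 µm nominal width is an assumed value within the stated 3-4 µm range):

```python
# Quick check of the reported coefficient of variation (CV = std / mean).
std_width = 0.5          # um, reported standard deviation of embossed width
nominal_width = 3.9      # um, an assumed width within the 3-4 um range

cv = std_width / nominal_width
print(f"CV = {cv:.1%}")   # ~13%, consistent with the value quoted in the abstract
```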

Abstract:

One possible approach to the management of municipal solid waste is energy recovery, that is, incineration with recovery of energy. However, it is very important to control the incineration process properly in order to avoid, as far as possible, the release of pollutants into the atmosphere that could cause industrial pollution problems. Ensuring that both the incineration process and the flue gas treatment are carried out under optimal conditions presupposes a good knowledge of the dependencies between the process variables. Adequate methods are needed to measure the most important variables and to process the measured values with suitable models so as to transform them into control actions. A classical control model seems unpromising in this case because of the complexity of the processes, the lack of a quantitative description and the need to perform the calculations in real time. This can only be achieved with the help of modern data processing techniques and computational methods, such as simulation techniques, mathematical models, knowledge-based systems and intelligent interfaces. In [Ono, 1989] a fuzzy-logic-based control system applied to municipal waste incineration is described. At the FZK research centre in Karlsruhe, applications combining fuzzy logic with neural networks [Jaeschke, Keller, 1994] are being developed for the control of the TAMARA pilot waste incineration plant. This thesis proposes the application of a knowledge acquisition method for the control of complex systems inspired by human behaviour. When we face an unknown situation, at first we do not know how to act, except by extrapolating from previous experiences that may be useful. By applying trial-and-error procedures, reinforcement of hypotheses, etc., we acquire and refine knowledge and build a mental model. An analogous method, implementable in a computer system, can be designed using Artificial Intelligence techniques. Thus, in a complex process we often have a set of process data which, a priori, is not structured enough to be useful. Knowledge acquisition then proceeds through a series of stages: - We make a first selection of the variables of interest. - System state. First, classification techniques (unsupervised learning) can be applied to group the data and obtain a representation of the state of the plant. A classification can be established, but normally almost all the data fall into a single class, corresponding to normal operation. Having done this, and in order to refine the knowledge, classical statistical methods are used to look for correlations between variables (principal component analysis) so that the list of variables can be simplified and reduced. - Signal analysis. To analyse and classify the signals (for example, the furnace temperature) it is possible to use methods better able to describe the nonlinear behaviour of the system, such as neural networks. A further step is to establish causal relationships between the variables, for which analytical models are helpful. - As the final result of the process, the knowledge-based system is designed.
The main objective is to apply the method to the specific case of controlling a municipal solid waste treatment plant with energy recovery. First, Chapter 2, Municipal solid waste, addresses the overall problem of waste management, giving an overview of the existing alternatives and of the current national and international situation. The problems of waste incineration are analysed in more detail, with particular attention to those characteristics of the waste that matter most for the combustion process. Chapter 3, Description of the process, gives a general description of the incineration process and of the different elements of an incineration plant: from the reception and storage of the waste, through the different types of furnaces and the requirements of good combustion practice codes, to the combustion air system and the flue gas system. The different flue gas cleaning systems are also presented, and finally the ash and slag removal system. Chapter 4, The Girona municipal solid waste treatment plant, describes the main systems of the Girona incineration plant: the waste feed, the type of furnace, the energy recovery system and the flue gas cleaning system. It also describes the control system, the operation, the plant operating data, the instrumentation and the variables of interest for the control of the combustion process. Chapter 5, Techniques used, provides an overview of knowledge-based systems and expert systems. The different techniques used are explained: neural networks, classification systems, qualitative models and expert systems, illustrated with application examples. With respect to knowledge-based systems, the conditions for their applicability and the forms of knowledge representation are analysed first. The different forms of reasoning are then described: neural networks, expert systems and fuzzy logic, and a comparison between them is made. An application of neural networks to the analysis of temperature time series is presented. The analysis of operating data by statistical techniques and the use of classification techniques are also addressed. Another section is devoted to the different types of models, including a discussion of qualitative models. The computer-aided design system for supervision systems, CASSD, used in this thesis is described, together with the analysis tools used to obtain qualitative information about process behaviour: Abstractors and ALCMEN. An example of the application of these techniques to find the relationships between temperature and operator actions is included. Finally, the main characteristics of expert systems in general, and of the expert system CEES 2.0, which is also part of the CASSD system used here, are analysed. Chapter 6, Results, presents the results obtained by applying the different techniques: neural networks, classification, the development of the combustion process model, and rule generation.
Within the data analysis section, a neural network is used to classify a temperature signal. The use of the LINNEO+ method to classify the operating states of the plant is also described. In the modelling section, a combustion model is developed that serves as the basis for analysing the behaviour of the furnace in steady-state and dynamic regimes. A parameter, the flame surface, related to the extent of the fire on the grate, is defined. Using a linearised model, the dynamic response of the incineration process is analysed. Qualitative relationships between the variables are then defined and used to build a qualitative model, and a new qualitative model is subsequently developed, taking the analytical dynamic model as its basis. Finally, the development of the knowledge base of the expert system, through rule generation, is addressed. Chapter 7, Control system of an incineration plant, analyses the objectives of a control system for an incineration plant, its design and its implementation. The basic objectives of the combustion control system, its configuration and its implementation in Matlab/Simulink using the tools developed in the previous chapter are described. Finally, to show how the different methods developed in this thesis can be applied, an expert system is built to keep the furnace temperature constant by acting on the waste feed. The Conclusions chapter presents the conclusions and results of this thesis.
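A minimal sketch of the first knowledge-acquisition stage described above: unsupervised classification of operating records followed by PCA to shorten the variable list. The variables and data are hypothetical, and k-means stands in for the LINNEO+ classifier actually used in the thesis.

```python
# Hedged sketch: grouping plant operating records and reducing variables (illustrative data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical records: furnace temperature, O2, CO, steam flow, grate speed.
X = rng.normal(size=(500, 5))

X_std = StandardScaler().fit_transform(X)

# Unsupervised classification of operating states (in practice most points fall
# into a single "normal operation" class, as observed in the thesis).
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# PCA to look for correlated variables and simplify the variable list.
pca = PCA().fit(X_std)
print("cluster sizes:", np.bincount(states))
print("cumulative explained variance:", np.cumsum(pca.explained_variance_ratio_))
```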

Abstract:

In conventional phylogeographic studies, historical demographic processes are elucidated from the geographical distribution of individuals represented on an inferred gene tree. However, the interpretation of gene trees in this context can be difficult as the same demographic/geographical process can randomly lead to multiple different genealogies. Likewise, the same gene trees can arise under different demographic models. This problem has led to the emergence of many statistical methods for making phylogeographic inferences. A popular phylogeographic approach based on nested clade analysis is challenged by the fact that a certain amount of the interpretation of the data is left to the subjective choices of the user, and it has been argued that the method performs poorly in simulation studies. More rigorous statistical methods based on coalescence theory have been developed. However, these methods may also be challenged by computational problems or poor model choice. In this review, we will describe the development of statistical methods in phylogeographic analysis, and discuss some of the challenges facing these methods.
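To illustrate the point that a single demographic model can randomly yield very different genealogies, here is a hedged, generic sketch of simulating tree heights under a constant-size Kingman coalescent; it is an illustration of the underlying idea, not a method discussed in the review.

```python
# Hedged sketch: simulate total genealogy height under the Kingman coalescent
# (constant population size), showing genealogical variance for one demographic model.
import numpy as np

def total_tree_height(n_samples, rng):
    """Time (in units of 2N generations) back to the common ancestor of n_samples lineages."""
    t, k = 0.0, n_samples
    while k > 1:
        # Waiting time to the next coalescence is exponential with rate k*(k-1)/2.
        t += rng.exponential(1.0 / (k * (k - 1) / 2.0))
        k -= 1
    return t

rng = np.random.default_rng(42)
heights = [total_tree_height(10, rng) for _ in range(1000)]
print(f"mean tree height: {np.mean(heights):.2f}, std: {np.std(heights):.2f}")
```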

Abstract:

In industrial practice, constrained steady state optimisation and predictive control are separate, albeit closely related functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem is formulated using linear state space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that it includes in its formulation a general steady state objective depending on the magnitudes of manipulated and measured output variables. This steady state objective may include the standard quadratic regulatory objective, together with economic objectives which are often linear. Assuming that the system settles to a steady state operating point under receding horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state space models, which are obtained by using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested with simulations of an industrial distillation column and a system of chemical reactors.
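A minimal sketch of the general idea, not the paper's formulation: a receding-horizon problem on a linear state-space model whose cost adds a linear "economic" term on the end-of-horizon input to the usual quadratic regulation term. The model matrices, prices and the use of the cvxpy library are all assumptions for illustration.

```python
# Hedged sketch: receding-horizon control with an added steady-state economic term
# (illustrative model and prices; not the formulation from the paper).
import numpy as np
import cvxpy as cp

# Simple linear state-space model x+ = A x + B u, y = C x.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])

N = 20                     # prediction horizon
x0 = np.array([1.0, 0.0])
price = 0.2                # hypothetical economic cost per unit of steady input

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(C @ x[:, k]) + 0.01 * cp.sum_squares(u[:, k])   # regulation
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]
# Crude proxy for the steady-state economic objective: a linear cost on the last input.
cost += price * cp.sum(u[:, N - 1])

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[:, 0])   # applied, then the horizon recedes
```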

Abstract:

We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
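A hedged sketch of the Constructed Analogue idea: express the current SST anomaly state as a least-squares combination of past states from a library, then apply the same weights to the states observed a lead time later. The data here are random placeholders, not HadCM3/HadGEM1 output.

```python
# Hedged sketch of a Constructed Analogue prediction (random placeholder data).
import numpy as np

rng = np.random.default_rng(0)
n_years, n_grid, lead = 200, 50, 10

library = rng.normal(size=(n_years, n_grid))     # past SST anomaly states (year x grid point)
current = rng.normal(size=n_grid)                # state we want to project forward

# Weights that best reconstruct the current state from library states (least squares).
weights, *_ = np.linalg.lstsq(library[:-lead].T, current, rcond=None)

# Apply the same weights to the library states observed 'lead' years later.
forecast = weights @ library[lead:]
print("forecast anomaly field shape:", forecast.shape)
```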

Abstract:

This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
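The abstract does not reproduce the formulae, so the sketch below shows only the general shape of a generic Wald-type closed form for non-inferiority on the log odds-ratio scale; it is a hedged illustration and should not be read as the exact score-test or Wald-test expressions derived in the paper.

```python
# Hedged illustration: generic Wald-type sample size for non-inferiority on the log-OR scale.
# Not the exact formulae derived in the paper.
import math
from scipy.stats import norm

def wald_noninferiority_n(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """Approximate sample size per arm, equal allocation."""
    # Event probability in the test arm under the alternative odds ratio.
    odds_t = or_alt * p_control / (1 - p_control)
    p_test = odds_t / (1 + odds_t)
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var = 1 / (p_control * (1 - p_control)) + 1 / (p_test * (1 - p_test))
    n = (z_a + z_b) ** 2 * var / (math.log(or_alt) - math.log(or_margin)) ** 2
    return math.ceil(n)

# Example: 40% control event rate, true OR = 1, non-inferiority margin OR = 0.5.
print(wald_noninferiority_n(p_control=0.4, or_alt=1.0, or_margin=0.5))
```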

Abstract:

The aim of this study was to evaluate the presence of nutrients and toxic elements in coffee cultivated during the process of conversion to organic agriculture in southwest Bahia, Brazil. Levels of nutrients and toxic elements were determined in samples of soils and coffee tissues from two transitional organic farms by flame atomic absorption spectrometry (FAAS). The metals in soil samples were extracted by the Mehlich 1 and USEPA-3050 procedures. Coffee samples from both farms presented relatively high levels of Cd, Zn and Cu (0.75, 45.4 and 14.9 µg g⁻¹, respectively), but still below the limits specified by Brazilian food legislation. The application of statistical methods showed that this finding can be attributed to the addition of large amounts of organic matter during the tree flowering period, which can affect the bioavailability of metal ions in soils.

Abstract:

The current work used discrete event simulation techniques to model the economics of quality within an actual automotive stamping plant. Automotive stamping is a complex, capital-intensive process requiring part-specific tooling and specialised machinery. Quality control and quality improvement are difficult in the stamping environment due to the general lack of process understanding and the large number of interacting variables. These factors have prevented the widespread use of statistical process control. In this work, a model of the quality control techniques used at the Ford Geelong Stamping plant is developed and indirectly validated against results from production. To date, most discrete event models are of systems where the quality control process is clearly defined by the rules of statistical process control. However, the quality control technique used within the stamping plant is for the operator to perform a 100% visual inspection while unloading the finished panels. In the developed model, control is enacted after a cumulative count of defective items is observed, thereby approximating the operator, who allows a number of defective panels to accumulate before resetting the line. Analysis of this model found that cost sensitivity to inspection error depends upon the level of control, and that the level of control determines line utilisation. Additional analysis demonstrated that further inspection processes would lead to more stable cost structures, but these structures may not necessarily be lower cost. The model was subsequently applied to investigate the economics of quality improvement. The quality problem of panel blemishes induced by slivers (small metal fragments) was chosen as a case study. Errors of 20-30% were observed during direct validation of the cost model, and it was concluded that the use of discrete event simulation models for applications requiring high accuracy would not be possible unless the production system was of low complexity. However, the model could be used to evaluate the sensitivity of input factors and to investigate the effects of a number of potential improvement opportunities. Therefore, the research concluded that it is possible to use discrete event simulation to determine the quality economics of an actual stamping plant. However, limitations imposed by the inability of the model to consider a number of external factors, such as continuous improvement, operator working conditions or wear, and by the lack of reliable quality data, result in low cost accuracy. Despite this, it can still be demonstrated that discrete event simulation has significant benefits over alternative modelling methods.
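A hedged sketch of the control logic described above: 100% visual inspection with inspection error and a line reset after a cumulative count of observed defects. The defect rate, miss probability and costs are placeholders, not the plant's data, and the loop below is far simpler than the discrete event model in the study.

```python
# Hedged sketch: simulate 100% visual inspection with error and a reset after a
# cumulative count of observed defects (placeholder rates and costs).
import random

random.seed(0)

P_DEFECT = 0.03          # hypothetical true defect rate per panel
P_MISS = 0.2             # hypothetical probability the operator misses a defect
CONTROL_LIMIT = 5        # observed defects accumulated before the line is reset
RESET_COST, SCRAP_COST, ESCAPE_COST = 500.0, 20.0, 100.0

observed, total_cost = 0, 0.0
for panel in range(10_000):
    defective = random.random() < P_DEFECT
    if defective:
        if random.random() < P_MISS:
            total_cost += ESCAPE_COST        # defect escapes downstream
        else:
            observed += 1
            total_cost += SCRAP_COST
    if observed >= CONTROL_LIMIT:            # operator resets the line
        total_cost += RESET_COST
        observed = 0

print(f"total quality cost: {total_cost:.0f}")
```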

Abstract:

This thesis provides a unified and comprehensive treatment of fuzzy neural networks as intelligent controllers. The work has been motivated by the need to develop solid control methodologies capable of coping with the complexity, nonlinearity, interactions, and time variance of the processes under control. In addition, the dynamic behaviour of such processes is strongly influenced by disturbances and noise, and such processes are characterised by a large degree of uncertainty. It is therefore important to integrate an intelligent component to increase the control system's ability to extract functional relationships from the process and to adapt these relationships to improve control precision, that is, to display learning and reasoning abilities. The objective of this thesis was to develop a self-organising learning controller for such processes using a combination of fuzzy logic and neural networks. An on-line, direct fuzzy neural controller using process input-output measurement data and a reference model, with both structural and parameter tuning, has been developed to fulfil this objective. A number of practical issues were considered, including the dynamic construction of the controller to alleviate the bias/variance dilemma, the universal approximation property, and the requirements of locality and linearity in the parameters. Several important issues in intelligent control were also considered, such as the overall control scheme, the requirement of persistency of excitation, and bounded learning rates of the controller for overall closed-loop stability. Other issues addressed include the dependence of the generalisation ability and the optimisation methods on the data distribution, and the requirements for on-line learning and the feedback structure of the controller. Fuzzy-inference-specific issues, such as the influence of the choice of defuzzification method, T-norm operator and membership function on the overall performance of the controller, were also discussed. In addition, the ε-completeness requirement and the use of a fuzzy similarity measure were investigated. The main emphasis of the thesis has been on applications to real-world problems such as industrial process control. The applicability of the proposed method has been demonstrated through empirical studies on several real-world control problems of industrial complexity, including temperature and number-average molecular weight control in a continuous stirred tank polymerisation reactor, and torsional vibration, eccentricity, hardness and thickness control in cold rolling mills. Compared to traditional linear controllers and a dynamically constructed neural network, the proposed fuzzy neural controller shows the highest promise as an effective approach to such nonlinear multivariable control problems in which disturbances and noise strongly influence the dynamic process behaviour. In addition, the applicability of the proposed method beyond the strictly control area has also been investigated, in particular in data mining and knowledge elicitation. When compared to the decision tree method and the pruned neural network method for data mining, the proposed fuzzy neural network achieves comparable accuracy with a more compact set of rules.
In addition, the performance of the proposed fuzzy neural network is much better for classes with low occurrence in the data set compared to the decision tree method. Thus, the proposed fuzzy neural network may be very useful in situations where the important information is contained in a small fraction of the available data.
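A minimal sketch of the kind of fuzzy inference such a controller builds on: Gaussian membership functions, a product T-norm and weighted-average defuzzification (a zero-order Takagi-Sugeno rule base). The rules and numbers are illustrative assumptions, not the controller developed in the thesis.

```python
# Hedged sketch: zero-order Takagi-Sugeno fuzzy inference with Gaussian memberships,
# product T-norm and weighted-average defuzzification (illustrative rules only).
import numpy as np

def gauss(x, centre, sigma):
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Rules on (error, error_rate) -> control increment; centres, widths and outputs are made up.
rules = [
    # (error centre, rate centre, sigma, consequent)
    (-1.0, -1.0, 0.6, -0.8),
    ( 0.0,  0.0, 0.6,  0.0),
    ( 1.0,  1.0, 0.6,  0.8),
]

def fuzzy_control(error, error_rate):
    firing = np.array([gauss(error, ce, s) * gauss(error_rate, cr, s)   # product T-norm
                       for ce, cr, s, _ in rules])
    outputs = np.array([c for *_, c in rules])
    return float(firing @ outputs / firing.sum())                       # weighted average

print(fuzzy_control(0.5, -0.2))
```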

Abstract:

This thesis evaluates the impact of the main recurring actors in the IPO process, in particular the venture capitalist, the underwriter and the auditor, on the trading conditions of the firm's shares, captured by the bid-ask spread, the fraction of institutional investors holding the stock and the dispersion of ownership, among other measures. This study also analyses some of the benefits that venture capital (VC) funds provide to the firms they invest in: it investigates the role of VCs in discouraging earnings management in IPOs and quantifies their role in the operating performance of firms after the initial public offering. In the first chapter, the results indicate that firms inflate their earnings mainly in the pre-IPO and IPO periods. When we control for four different periods around the IPO, we observe that IPOs of VC-backed firms display significantly less earnings management in the IPO period and in the periods following the initial offering, exactly when firms tend to inflate their earnings the most. This result is robust to different statistical methods and to different methodologies used to measure earnings management. Furthermore, splitting the sample into VC-backed and non-VC-backed IPOs shows that both groups display earnings management, but most intensely in different phases around the IPO. Finally, we also observe that top underwriters are associated with lower levels of earnings management in the subsample of VC-backed firms. In the second chapter, we find that the choice of auditor, VC and underwriter can signal the firm's long-term choices. We present evidence that the characteristics of the underwriter, the auditor and the VC have an impact on firm characteristics and market performance, and that these effects persist for almost a decade. Firms with a top underwriter and a big-N auditor at the time of the IPO have market characteristics that persist over the following eight years: a larger number of analysts following the firm, greater ownership dispersion through institutional investors, and higher liquidity through a smaller bid-ask spread. They are also less likely to delist and more likely to issue a seasoned equity offering. Finally, VC-backed firms are positively affected on all market liquidity measures, from the IPO until almost a decade later. These effects are not due to survivorship bias, nor do they depend on the dot-com bubble; our results are qualitatively similar once the 1999-2000 bubble period is excluded. The last chapter shows that VC-backed firms hold higher cash balances than non-backed firms, an effect that persists for at least eight years after the IPO. We also show that VC-backed firms are associated with lower leverage and interest coverage over the first eight years after the IPO. Finally, we find no statistically significant relationship between VCs and the dividend payout ratio. These results are also robust to several statistical methods and different methodologies.

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:

Over the last two decades, growing interest in the Six Sigma methodology has intensified the application of statistical and other quantitative approaches aimed not only at improving the quality of products, services and processes, but also at enhancing organisational performance and decision making. This article examines the application of the statistical approach in the context of quality management in medium and large food companies in the State of São Paulo, with the purpose of: identifying which statistical tools and techniques are most widely used by companies in the sector to assure and control quality; assessing the interdependence between the successful implementation of quality and food safety programmes such as Good Manufacturing Practices (GMP) and the Hazard Analysis and Critical Control Points (HACCP) system and the use of statistics; and analysing estimates of the perceived relevance of statistical thinking and of its benefits as a quality improvement tool. An exploratory-descriptive survey was carried out, and the results reveal that the statistical approach is beginning to be more highly valued in the food industry for the relevance of its benefits, as already happens in other sectors. There is evidence that the successful implementation of food safety programmes is a key precondition for the effective use of statistics and other quantitative approaches.