961 results for Statistical Process Control


Relevance:

80.00%

Publisher:

Abstract:

The financial crisis of 2007–2008, known as the subprime crisis, drew attention to corporate governance in Brazil and worldwide. Quantitative risk-management tools were created in the 1990s, after several financial disasters, to monitor financial risk. Market turmoil has also led companies to invest in the development and use of information as a tool to support process control and decision making. Numerous empirical studies on the informational efficiency of markets have been carried out in Brazil and abroad, examining whether prices instantly reflect the available information. The creation of differentiated corporate governance levels on BOVESPA in 2000 led firms to commit more strongly to their shareholders through greater transparency in their disclosures. The purpose of this study is to analyze how the subprime financial crisis affected, between January 2007 and December 2009, the volatility of stock returns on BM&FBOVESPA for the most liquid companies at different corporate governance levels. Using time-series analysis and event studies, econometric tests were performed in EViews, and the results show that the adoption of good corporate governance practices affects the volatility of company returns.
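As an illustration of the kind of volatility measure such studies rely on, the sketch below computes log returns and a rolling sample standard deviation. The price series is hypothetical, and this is a generic textbook measure, not the authors' EViews procedure.

```python
import math

# Hypothetical daily closing prices for a single stock (illustrative values).
prices = [20.0, 20.4, 19.8, 21.0, 20.6, 20.1, 19.5, 20.2, 20.9, 21.3]

# Log returns: r_t = ln(P_t / P_{t-1})
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def rolling_volatility(rets, window):
    """Sample standard deviation of returns over a sliding window."""
    vols = []
    for i in range(window, len(rets) + 1):
        w = rets[i - window:i]
        mean = sum(w) / window
        var = sum((r - mean) ** 2 for r in w) / (window - 1)
        vols.append(math.sqrt(var))
    return vols

vols = rolling_volatility(returns, window=5)
```

A volatility spike in such a rolling series around a crisis window is the kind of effect an event study then tests for statistical significance.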

Relevance:

80.00%

Publisher:

Abstract:

Titanium nitride films were grown on glass using the cathodic cage plasma deposition technique in order to verify the influence of process parameters on the optical and structural properties of the films. The plasma atmosphere was a mixture of Ar, N2 and H2, with the Ar and N2 flows set at 4 and 3 sccm, respectively, and the H2 flow varied between 0, 1 and 2 sccm. The deposition process was monitored by Optical Emission Spectroscopy (OES) to investigate the influence of the active species in the plasma. Increasing the H2 flow changed the luminescent intensities associated with the species; in particular, the luminescence of the N2 (391.4 nm) species was not proportional to the increase of H2 in the reactor. The diameter and number of holes in the cage were also investigated. Grazing Incidence X-Ray Diffraction (GIXRD) confirmed that the films are composed of TiN, possibly with variations in the amount of nitrogen in the crystal and in the crystallite size. Optical microscopy images provided information about the homogeneity of the films, atomic force microscopy (AFM) revealed microstructural characteristics and surface roughness, and the thickness was measured by ellipsometry. Optical properties such as transmittance and reflectance, measured by spectrophotometry, are very sensitive to changes in the crystal lattice, chemical composition and film thickness, and are therefore appropriate tools for controlling this process. In general, films obtained at 0 sccm of H2 present higher transmittance, which can be attributed to a smaller crystallite size due to a higher amount of nitrogen in the TiN lattice. The films obtained at 1 and 2 sccm of H2 have a golden appearance, and the XRD patterns show characteristic TiN peaks with higher intensity and smaller FWHM (Full Width at Half Maximum). This suggests that the presence of hydrogen in the plasma makes the films more stoichiometric and more crystalline. It was also observed that with a larger number of holes in the lid of the cage, close to the region between the lid and the sample, and with a smaller hole diameter, the deposited film is thicker, which is explained by the higher probability of plasma species effectively reaching the sample and promoting film growth.

Relevance:

80.00%

Publisher:

Abstract:

In the present study we developed algorithms, based on concepts from percolation theory, that analyze the connectivity conditions in geological models of petroleum reservoirs. From petrophysical parameters such as permeability, porosity and transmissivity, which may be generated by any statistical process, it is possible to determine the portion of the model with the most connected cells, which wells are interconnected, and the critical path between injector and producer wells. This makes it possible to classify the reservoir according to the modeled petrophysical parameters and to determine the percentage of the reservoir to which each well is connected. In general, the connected regions and the respective minima and/or maxima in the occurrence of the petrophysical parameters studied are a good way to characterize a reservoir volumetrically. The algorithms therefore make it possible to optimize the positioning of wells, offering a preview of the general connectivity conditions of a given model. The intent is not to evaluate geological models, but to show how to interpret the deposits, how their petrophysical characteristics are spatially distributed, and how the connections between the several parts of the system are resolved, showing their critical paths and backbones. Running these algorithms allows the connectivity properties of the model to be known before work on reservoir flow simulation is started.
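The core connectivity step of such percolation-style analysis can be sketched as union-find labelling of 4-connected cells whose permeability exceeds a threshold; the grid values and the threshold below are illustrative, not taken from the study.

```python
# Union-find labelling of connected "permeable" cells on a 2-D grid,
# in the spirit of percolation-based reservoir connectivity analysis.

def find(parent, x):
    """Find the cluster root of cell x, with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the clusters containing cells a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def connected_clusters(perm, threshold):
    """Group 4-neighbour cells whose permeability exceeds threshold."""
    rows, cols = len(perm), len(perm[0])
    open_cells = {(i, j) for i in range(rows) for j in range(cols)
                  if perm[i][j] >= threshold}
    parent = {c: c for c in open_cells}
    for (i, j) in open_cells:
        for nb in ((i + 1, j), (i, j + 1)):  # right and down neighbours
            if nb in open_cells:
                union(parent, (i, j), nb)
    clusters = {}
    for c in open_cells:
        clusters.setdefault(find(parent, c), []).append(c)
    return list(clusters.values())

# Illustrative permeability field; two wells are "connected" if their
# cells fall in the same cluster.
perm = [[0.9, 0.2, 0.8],
        [0.7, 0.1, 0.8],
        [0.6, 0.6, 0.9]]
clusters = connected_clusters(perm, threshold=0.5)
largest = max(clusters, key=len)
```

The fraction `len(largest)` over the total number of cells is the kind of volumetric connectivity figure the abstract describes per well.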

Relevance:

80.00%

Publisher:

Abstract:

The petrochemical industry aims to obtain, from crude oil, products with higher commercial value and greater industrial utility for energy purposes. These industrial processes are complex, commonly operating with large production volumes under restricted operating conditions. Controlling operation under optimized and stable conditions is important to maintain product quality and plant safety. Industrial networks have gained prominence as process control becomes increasingly distributed. The Foundation Fieldbus industrial network protocol, because of its interoperability and its user interface organized into simple configuration blocks, is widely adopted in industrial automation. This work brings the benefits of industrial network technology to the inherent complexity of petrochemical processes. To this end, a dynamic reconfiguration system for intelligent strategies (artificial neural networks, for example), based on the protocol's user application layer, is proposed; it allows different applications to be used in a particular process without operator intervention and with the guarantees necessary for proper plant operation.
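In very reduced form, the dynamic reconfiguration idea can be sketched as swapping the active control strategy between control cycles without stopping the loop. All names and the placeholder control laws below are illustrative; this is not the Foundation Fieldbus function-block interface.

```python
# Minimal sketch: a controller whose strategy can be replaced at run time,
# in the spirit of dynamic reconfiguration of intelligent strategies.

class Controller:
    def __init__(self, strategy):
        self._strategy = strategy

    def reconfigure(self, new_strategy):
        """Replace the active strategy between control cycles."""
        self._strategy = new_strategy

    def step(self, measurement):
        """One control cycle: map a measurement to a control output."""
        return self._strategy(measurement)

# Placeholder control laws (stand-ins for, e.g., a trained neural network).
pid_like = lambda m: 0.5 * (1.0 - m)
neural_like = lambda m: max(0.0, 0.8 - m)

ctrl = Controller(pid_like)
u1 = ctrl.step(0.4)            # 0.5 * 0.6 = 0.3
ctrl.reconfigure(neural_like)  # hot-swap without stopping the loop
u2 = ctrl.step(0.4)            # 0.8 - 0.4 = 0.4
```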

Relevance:

80.00%

Publisher:

Abstract:

The use of atmospheric-pressure plasmas for thin-film deposition on thermo-sensitive materials is currently one of the main challenges for the plasma scientific community. Despite the growing interest in this field, the knowledge gap between gas-phase reaction mechanisms and thin-film properties is still one of the most important barriers to a complete understanding of the process. In this work, thin-film surface characterization techniques, combined with passive and active gas-phase diagnostic methods, were used to provide a comprehensive study of the Ar/TEOS deposition process assisted by an atmospheric-pressure plasma jet. SiO2-based thin films exhibiting well-defined chemistry, good morphological structure and high uniformity were studied in detail by FTIR, XPS, AFM and SEM analysis. Furthermore, non-intrusive spectroscopy techniques (OES, filter imaging) and laser spectroscopic methods (Rayleigh scattering, LIF and TALIF) were employed to shed light on the complexity of the gas-phase mechanisms involved in the deposition process and to discuss the influence of the TEOS admixture on gas temperature, electron density and the spatio-temporal behaviour of active species. The poly-diagnostic approach proposed in this work opens interesting perspectives both for process control and for optimization of thin-film performance.

Relevance:

80.00%

Publisher:

Abstract:

Laser Cladding (LC) is an emerging technology used both for coating applications and for near-net-shape fabrication. Despite significant advantages, such as low dilution and a metallurgical bond with the substrate, it still faces issues of process control and repeatability, which restrict its wider application. This thesis evaluates LC technology and tests its potential to reduce particulate matter emissions from the automotive and locomotive sectors. LC was evaluated for the deposition of multi-layer and multi-track coatings. 316L stainless steel coatings were deposited to study the minimisation of geometric distortion in thin-walled samples, with laser power and scan strategy as the main variables. Constant power, power reduction at successive layers, a closed-loop control system, and two different scan strategies were studied; the closed-loop control system was found to be practical only when coupled with the correct scan strategy for the deposition of thin walls. For multi-track coatings, three overlapped layers of aluminium bronze were deposited onto a structural steel pipe. The effects of laser power, scan speed and hatch distance on the final coating geometry were studied independently, and a combined parameter was established to effectively control each geometrical characteristic (clad width, clad height and percentage of dilution). LC was then applied to coat commercial grey cast iron (GCI) brake discs with tool steel. Optical micrographs showed that, even with preheating, cracks originating from the substrate towards the coating were still present. The commercial brake discs emitted airborne particles whose concentration and size depended on the test conditions used for laboratory simulation. The contact of the LC-cladded wheel with the rail emitted significantly fewer ultra-fine particles while maintaining acceptable values of the coefficient of friction.

Relevance:

80.00%

Publisher:

Abstract:

The application of modern ICT is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A notable example is the emerging Smart Tourism field, which applies ICT to tourism to create richer and more integrated experiences that are more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of those sources exacerbate the complexity of developing integration solutions, with consequent high effort and costs for the partners seeking them. As a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims to facilitate the blending of data and services. At its core, APERTO relies on APERTO FaaS, a serverless platform allowing fast prototyping of business logic, lowering the barrier to entry and development costs for newcomers, (zero) fine-grained scaling of the resources serving end users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communication between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. 
In particular, APERTO provides scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach to the verification of access rights to resources.
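The function-composition mechanism in point i) can be sketched as chaining simple, ready-to-use functions into a single workload. The pipeline and function names below are hypothetical illustrations, not the APERTO API.

```python
# A toy sketch of serverless-style function composition: simple functions
# chained left to right into one workflow.

def compose(*funcs):
    """Chain single-argument functions into one composed workload."""
    def workflow(payload):
        for f in funcs:
            payload = f(payload)
        return payload
    return workflow

# Hypothetical tourism-data pipeline built from ready-to-use functions.
fetch = lambda city: {"city": city, "events": ["museum", "concert"]}
enrich = lambda d: {**d, "count": len(d["events"])}

summary = compose(fetch, enrich)
result = summary("Bologna")
```

In a FaaS setting each stage would be a deployed function and the composition a declarative workflow; the chaining logic, however, is the same.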

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

50.00%

Publisher:

Abstract:

The current study used statistical quality control methods to evaluate the performance of a sewage treatment station located in Cascavel, Paraná State. The evaluated parameters were pH, settleable solids, total suspended solids, chemical oxygen demand, and five-day biochemical oxygen demand (BOD5). Statistical analysis was performed using Shewhart control charts and the process capability ratio. According to the Shewhart charts, only the BOD(5,20) variable was under statistical control. The capability ratios showed that, except for pH, the sewage treatment station is not capable of producing effluents that meet the discharge specifications and standards required by environmental legislation.
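A minimal sketch of the two methods used: an individuals (X) Shewhart chart with limits derived from the average moving range, and a one-sided capability index against an upper specification limit. The measurements and the specification limit below are hypothetical, not the station's data.

```python
import statistics

# Illustrative effluent BOD5 measurements (mg/L); values are hypothetical.
samples = [48.0, 52.0, 50.5, 47.5, 51.0, 49.0, 53.0, 50.0, 48.5, 50.5]

mean = statistics.mean(samples)

# Individuals (X) chart: sigma is estimated from the average moving
# range, MR-bar / d2, with d2 = 1.128 for subgroups of size 2.
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
mr_bar = statistics.mean(moving_ranges)
sigma = mr_bar / 1.128
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# One-sided capability against an upper specification limit, e.g. a
# legal discharge standard (hypothetical value).
usl = 60.0
cpk_upper = (usl - mean) / (3 * sigma)
```

Points outside `[lcl, ucl]` indicate a process out of statistical control; a `cpk_upper` below about 1 indicates a process not capable of meeting the discharge standard.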

Relevance:

50.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

50.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, significant improvements in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. 
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and how to choose training data that will result in good model generalization.
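The transport-delay compensation mentioned above can be sketched generically as finding the sample shift that maximizes the cross-correlation between a reference signal and a lagged sensor signal. This is a common alignment method offered as an illustration, not necessarily the authors' exact procedure; the signals below are synthetic.

```python
# Estimate the transport delay between two signals by discrete
# cross-correlation, then use that lag to align the sensor data.

def best_lag(reference, delayed, max_lag):
    """Return the shift (in samples) that best aligns delayed to reference."""
    best, best_score = 0, float("-inf")
    n = len(reference)
    for lag in range(max_lag + 1):
        score = sum(reference[i] * delayed[i + lag]
                    for i in range(n - max_lag))
        if score > best_score:
            best, best_score = lag, score
    return best

# Synthetic example: the same pulse, delayed by two samples.
ref = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 2, 3, 2, 1, 0]
lag = best_lag(ref, delayed, max_lag=3)
```

In practice the reference would be a fast ECM channel and the delayed signal a slow analyzer measurement; first-order sensor lag is typically compensated separately after this shift.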

Relevance:

50.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result is attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, to prevent extrapolation during the optimization process, is proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies are used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
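A dynamic constraint model of the kind described can be sketched as a discrete-time second-order lag that maps a commanded parameter to the value actually achieved by the hardware. The coefficients below are illustrative (a stable, critically damped system with unity DC gain), not identified from engine data.

```python
# Minimal discrete-time second-order lag: commanded input u[k] is
# translated into the achieved output y[k] that would feed the
# transient emission and torque models.

def second_order_lag(commanded, a1=1.6, a2=-0.64, b=0.04):
    """y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k]; DC gain b/(1-a1-a2) = 1."""
    y_prev, y_prev2 = 0.0, 0.0
    achieved = []
    for u in commanded:
        y = a1 * y_prev + a2 * y_prev2 + b * u
        achieved.append(y)
        y_prev2, y_prev = y_prev, y
    return achieved

# A step command: the achieved value approaches the command gradually,
# so an optimizer cannot pretend the hardware responds instantly.
step = [1.0] * 50
response = second_order_lag(step)
```

An optimizer constrained by such a model is prevented from returning quasi-static solutions, because rapid commanded changes are filtered into what the air-handling system can actually deliver.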

Relevance:

50.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. 
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. 
To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms for each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although we must strictly guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer a common ground for developers and researchers to easily prototype and verify new techniques for system modelling and word-length optimization. 
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
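The Monte-Carlo evaluation at the heart of such word-length methodologies can be sketched as estimating the round-off noise power of a toy computation at different fractional word-lengths. This is an illustrative sketch with a made-up target function, not HOPLITE's implementation.

```python
import random

# Monte-Carlo estimate of round-off noise for a fixed-point word-length
# assignment: compare exact evaluation against evaluation on a quantized
# input/output grid.

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def mc_noise_power(func, frac_bits, n_samples=10000, seed=0):
    """Mean squared error between exact and quantized evaluation of func."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        exact = func(x)
        approx = quantize(func(quantize(x, frac_bits)), frac_bits)
        total += (exact - approx) ** 2
    return total / n_samples

# Toy non-linear operation: a squarer. More fractional bits means a
# finer grid and lower quantization noise, at a higher hardware cost.
noise_8 = mc_noise_power(lambda x: x * x, frac_bits=8)
noise_16 = mc_noise_power(lambda x: x * x, frac_bits=16)
```

A word-length optimizer repeats this evaluation across candidate assignments, trading estimated noise against implementation cost; the incremental method described above saves time by running fewer samples while far from the optimum.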

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes an architecture for machining-process and production monitoring to be applied in machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining-process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the use of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.