906 results for Process capability analysis


Relevance:

100.00%

Publisher:

Abstract:

In this note we examine the use of the Genz-Bretz and Miwa algorithms to improve the estimation of the proportion of non-conformance in multivariate normal distributions. This estimation is required in the procedure outlined in Abbasi and Niaki (Int J Adv Manuf Technol 50(5-8):823-830, 2010) to determine the process capability index of multivariate non-normal processes.
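As a rough illustration of the quantity these algorithms compute, the following is a minimal Monte Carlo sketch of the proportion of non-conformance for a multivariate normal process. It is not the Genz-Bretz or Miwa quadrature itself (those are deterministic integration routines, available for instance in R's mvtnorm package), and the mean vector, covariance matrix, and specification limits are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process parameters and specification region.
mean = np.array([10.0, 5.0])
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])
lsl = np.array([7.0, 3.0])   # lower specification limits
usl = np.array([13.0, 7.0])  # upper specification limits

# Draw a large sample from the assumed multivariate normal process.
x = rng.multivariate_normal(mean, cov, size=1_000_000)

# A unit conforms only if every characteristic is within its limits.
conforming = np.all((x >= lsl) & (x <= usl), axis=1)

# Proportion of non-conformance (PNC).
pnc = 1.0 - conforming.mean()
print(f"Estimated proportion of non-conformance: {pnc:.6f}")
```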

Relevance:

100.00%

Publisher:

Abstract:

When the distribution of a process characterized by a profile is non-normal, process capability analysis under the normality assumption often leads to erroneous interpretations of the process performance. Profile monitoring is a relatively new set of techniques in quality control that is used in situations where the state of a product or process is represented by a function of two or more quality characteristics. Such profiles can be modeled using linear or nonlinear regression models. In some applications, it is assumed that the quality characteristics follow a normal distribution; however, in certain applications this assumption may fail to hold and may yield misleading results. In this article, we consider process capability analysis of non-normal linear profiles. We investigate and compare five methods to estimate the non-normal process capability index (PCI) in profiles. Three of the methods require an estimate of the cumulative distribution function (cdf) of the process to analyze process capability in profiles. To estimate the cdf of the process, we use a Burr XII distribution as well as empirical distributions. However, the PCI resulting from an estimated cdf is sometimes far from its true value. We therefore apply an artificial neural network with supervised learning, which allows the estimation of PCIs in profiles without the need to estimate the cdf of the process. A Box-Cox transformation technique is also developed to deal with non-normal situations. Finally, a comparison study is performed through the simulation of Gamma, Weibull, Lognormal, Beta, and Student's t data.
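For the cdf-based route, a common percentile construction (in the spirit of Clements' method) replaces 6σ with the 0.135th-to-99.865th percentile range of a fitted distribution. The sketch below, with hypothetical data and specification limits, fits a Burr XII distribution with scipy and computes such an index; it illustrates the general idea rather than the authors' profile-specific estimators.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical non-normal process data and specification limits.
data = rng.gamma(shape=2.0, scale=1.5, size=2000)
lsl, usl = 0.2, 12.0

# Fit a Burr XII distribution to the observations.
c, d, loc, scale = stats.burr12.fit(data)
fitted = stats.burr12(c, d, loc=loc, scale=scale)

# Clements-style percentile-based capability index:
# replace 6*sigma with the 0.00135..0.99865 quantile range.
p_lo = fitted.ppf(0.00135)
p_hi = fitted.ppf(0.99865)
cp = (usl - lsl) / (p_hi - p_lo)
print(f"Percentile-based Cp estimate: {cp:.3f}")
```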

Relevance:

100.00%

Publisher:

Abstract:

Purpose: In profile monitoring, a growing research area in the field of statistical process control, the relationship between response and explanatory variables is monitored over time. The purpose of this paper is to focus on the process capability analysis of linear profiles. Process capability indices give a quick indication of the capability of a manufacturing process. Design/methodology/approach: In this paper, the proportion-of-non-conformance criterion is employed to estimate the process capability index. The paper considers cases where the specification limits are constant as well as cases where they are a function of the explanatory variable X. Moreover, both fixed and random design schemes for the explanatory variable in profile data acquisition are considered. Profiles with deterministic design points are usually used in calibration applications; in other applications, however, the design points within a profile are i.i.d. random variables from a given distribution. Findings: Simulation studies using simple linear profile processes, for both fixed and random explanatory variables with constant and functional specification limits, are conducted to assess the efficacy of the proposed method. Originality/value: There are many cases in industries such as the semiconductor industry where quality characteristics take the form of profiles. Although quite a few methods for monitoring profiles have recently been presented, there is no method in the literature to analyze process capability for these processes. The proposed methods provide a framework for quality engineers and production engineers to evaluate and analyze the capability of profile processes. © Emerald Group Publishing Limited.
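As an illustration of the proportion-of-non-conformance criterion, the following sketch simulates a simple linear profile with specification limits that are a function of X, estimates the proportion of non-conforming profiles by Monte Carlo, and converts it to a normal-equivalent index via C = -Φ⁻¹(p/2)/3, the relation that holds for a centred normal characteristic. The profile coefficients, noise level, and limits are hypothetical, and this is not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical simple linear profile: y = b0 + b1*x + noise.
b0, b1, sigma = 3.0, 2.0, 0.3
x = np.linspace(0, 1, 10)               # fixed design points

# Specification limits as a function of the explanatory variable X.
lsl = b0 + b1 * x - 1.0
usl = b0 + b1 * x + 1.0

# Simulate many profiles; a profile conforms if every point is in spec.
n_profiles = 200_000
y = b0 + b1 * x + rng.normal(0.0, sigma, size=(n_profiles, x.size))
conforming = np.all((y >= lsl) & (y <= usl), axis=1)
pnc = 1.0 - conforming.mean()

# Normal-equivalent capability index: for a centred normal
# characteristic, p = 2*Phi(-3C), hence C = -Phi^{-1}(p/2)/3.
c_equiv = -norm.ppf(pnc / 2.0) / 3.0
print(f"PNC = {pnc:.2e}, equivalent index = {c_equiv:.3f}")
```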

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new multivariate process capability index (MPCI) which is based on principal component analysis (PCA) and depends on a parameter (Formula presented.) which can take any real value. This MPCI generalises some existing PCA-based multivariate indices proposed by several authors when (Formula presented.) or (Formula presented.). One of the key contributions of this paper is to show that there is a direct correspondence between this MPCI and the process yield for a unique value of (Formula presented.). This result is used to relate the index to the capability status of the process and to show that, under some mild conditions, the estimator of this MPCI is consistent and converges to a normal distribution. This is then applied to perform tests of statistical hypotheses and to determine sample sizes. Several numerical examples are presented with the objective of illustrating the procedures and demonstrating how they can be applied to determine the viability and capacity of different manufacturing processes.
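As a point of reference for the class of indices being generalised, the sketch below computes one established PCA-based construction: per-component Cp values aggregated by a geometric mean, a form proposed in the PCA-based capability literature. It is not the paper's parametrised MPCI, and the data and specification limits are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bivariate quality data and specification limits.
x = rng.multivariate_normal([10.0, 5.0],
                            [[1.0, 0.4], [0.4, 0.6]], size=500)
lsl = np.array([7.0, 3.0])
usl = np.array([13.0, 7.0])

# Principal component analysis of the centred data.
xc = x - x.mean(axis=0)
cov = np.cov(xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Express the specification limits in the principal component space.
centre = (usl + lsl) / 2.0
usl_pc = eigvecs.T @ (usl - centre)
lsl_pc = eigvecs.T @ (lsl - centre)

# Univariate Cp per component, aggregated by a geometric mean.
cp = np.abs(usl_pc - lsl_pc) / (6.0 * np.sqrt(eigvals))
mpci = cp.prod() ** (1.0 / cp.size)
print(f"PCA-based MPCI (geometric mean of component Cp): {mpci:.3f}")
```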

Relevance:

100.00%

Publisher:

Abstract:

Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
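A minimal direct (FFT-based) bispectrum estimator is sketched below for a synthetic record with one phase-coupled triad and a symmetric time-domain window. The frequencies, window choice, and segment counts are illustrative, and the sketch does not reproduce the paper's analytical mean and variance expressions.

```python
import numpy as np

rng = np.random.default_rng(4)

def bispectrum(x, nseg, nfft):
    """Direct (FFT-based) bispectrum estimate averaged over segments."""
    segs = x[: nseg * nfft].reshape(nseg, nfft)
    segs = segs * np.hanning(nfft)      # symmetric time-domain window
    X = np.fft.fft(segs, axis=1)
    k = np.arange(nfft // 4)            # restrict to the inner triangle
    # B(k1, k2) = E[ X(k1) X(k2) conj(X(k1 + k2)) ]
    return np.mean(X[:, k[:, None]] * X[:, k[None, :]]
                   * np.conj(X[:, k[:, None] + k[None, :]]), axis=0)

# Synthetic record: two modes plus their phase-coupled sum frequency.
nfft, nseg = 256, 64
k1, k2 = 13, 28                         # integral numbers of wavelengths
f1, f2 = k1 / nfft, k2 / nfft
t = np.arange(nseg * nfft)
x = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
     + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t)   # coupled triad
     + 0.1 * rng.standard_normal(t.size))

B = bispectrum(x, nseg, nfft)
print(f"|B| at the coupled triad ({k1}, {k2}): {abs(B[k1, k2]):.1f}")
```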

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the idea of a compendium of process technologies, i.e., a concise but comprehensive collection of techniques for process model analysis that support research on the design, execution, and evaluation of processes. The idea originated from observations on the evolution of process-related research disciplines. Based on these observations, we derive design goals for a compendium. Then, we present the jBPT library, which addresses these goals by means of an implementation of common analysis techniques in an open source codebase.

Relevance:

100.00%

Publisher:

Abstract:

Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processing, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded in state-based techniques, and the trace equivalence criterion in particular. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log, i.e., a collection of cases. To this end, we utilise causal behavioural profiles that capture the behavioural characteristics of process models and cases, and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on findings from applying our approach in a case study with an international service provider.
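As a drastically simplified illustration of measuring log compliance through behavioural relations rather than state-space exploration, the sketch below scores a trace by the fraction of its observed order relations that a model permits. The relation set is a hypothetical stand-in; the paper's causal behavioural profiles additionally capture relations such as exclusiveness, interleaving, and co-occurrence.

```python
def weak_order(trace):
    """Pairs (a, b) such that a occurs before b somewhere in the trace."""
    rel = set()
    for i, a in enumerate(trace):
        for b in trace[i + 1:]:
            rel.add((a, b))
    return rel

def profile_compliance(model_order, trace):
    """Fraction of the trace's order relations allowed by the model.

    model_order: set of activity pairs (a, b) the model allows to
    occur in that order (a simplified stand-in for a behavioural
    profile).
    """
    observed = weak_order(trace)
    if not observed:
        return 1.0
    return len(observed & model_order) / len(observed)

# Hypothetical model: A then B, then C and D in any order.
model = {("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "C"), ("B", "D"), ("C", "D"), ("D", "C")}

print(profile_compliance(model, ["A", "B", "C", "D"]))  # 1.0
print(profile_compliance(model, ["B", "A", "D", "C"]))  # penalised for (B, A)
```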

Relevance:

100.00%

Publisher:

Abstract:

Welding parameters such as welding speed, rotation speed, plunge depth, and shoulder diameter influence the weld zone properties, the microstructure of friction stir welds, and the forming behavior of welded sheets in a synergistic fashion. The main aims of the present work are to (1) analyze the effect of welding speed, rotation speed, plunge depth, and shoulder diameter on the formation of internal defects during friction stir welding (FSW), (2) study the effect on axial force and torque during welding, (3) optimize the welding parameters for producing internal defect-free welds, and (4) propose and validate a simple criterion to identify defect-free weld formation. The base material used for FSW throughout the work is Al 6061-T6 with a thickness of 2.1 mm. Only butt welding of sheets is considered in the present work. It is observed from the present analysis that higher welding speed, higher rotation speed, and higher plunge depth are preferred for producing a weld without internal defects. All the shoulder diameters used for FSW in the present work produced defect-free welds. The axial force and torque are not constant, and a large variation is seen with respect to the FSW parameters that produced defective welds. In the case of defect-free weld formation, the axial force and torque are relatively constant. A simple criterion, (∂τ/∂p)_defective > (∂τ/∂p)_defect-free and (∂F/∂p)_defective > (∂F/∂p)_defect-free, is proposed from this observation for identifying the onset of defect-free weld formation. Here F is the axial force, τ is the torque, and p is the welding speed, tool rotation speed, or plunge depth. The same criterion is validated with an Al 5xxx base material. In this case too, the axial force and torque remained constant while producing defect-free welds.
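The criterion can be checked numerically from parameter-sweep measurements. The sketch below uses finite differences on hypothetical force and torque data (all values invented for illustration) to compare the sensitivities of the defective and defect-free regimes.

```python
import numpy as np

# Hypothetical measurements of axial force F (kN) and torque tau (N*m)
# over a sweep of one process parameter p (e.g. rotation speed, rpm).
p = np.array([600.0, 800.0, 1000.0, 1200.0, 1400.0])
F_defective   = np.array([6.1, 5.2, 4.1, 3.4, 2.5])   # strong variation
F_defect_free = np.array([4.3, 4.2, 4.2, 4.1, 4.1])   # nearly constant
tau_defective   = np.array([48.0, 41.0, 33.0, 27.0, 20.0])
tau_defect_free = np.array([31.0, 30.5, 30.2, 30.0, 29.8])

def mean_abs_gradient(y, p):
    """Average magnitude of dy/dp via central finite differences."""
    return np.abs(np.gradient(y, p)).mean()

# The criterion flags the defective regime by its larger sensitivity:
# |dF/dp| and |dtau/dp| exceed the corresponding defect-free values.
for name, bad, good in [("F", F_defective, F_defect_free),
                        ("tau", tau_defective, tau_defect_free)]:
    g_bad, g_good = mean_abs_gradient(bad, p), mean_abs_gradient(good, p)
    print(f"|d{name}/dp|: defective={g_bad:.4f}, "
          f"defect-free={g_good:.4f}, criterion holds: {g_bad > g_good}")
```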

Relevance:

100.00%

Publisher:

Abstract:

Department of Marine Geology and Geophysics, Cochin University of Science and Technology

Relevance:

100.00%

Publisher:

Abstract:

Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpreting bacterial transcriptomics data and CGH microarray data for assessing genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
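A minimal version of the kernel density estimator cut-off step might look like the sketch below: fit a KDE to (hypothetical) log2 hybridisation ratios and place the presence/absence cut-off at the density minimum between the two modes. The simulated data and the bracketing interval are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Hypothetical log2 hybridisation ratios: a "present" mode near 0 and
# an "absent/divergent" mode at strongly negative ratios.
present = rng.normal(0.0, 0.3, size=1800)
absent = rng.normal(-2.5, 0.5, size=200)
ratios = np.concatenate([present, absent])

# Kernel density estimate of the ratio distribution.
kde = gaussian_kde(ratios)
grid = np.linspace(ratios.min(), ratios.max(), 1000)
density = kde(grid)

# Dynamic cut-off: the density minimum between the two modes.
lo, hi = -2.5, 0.0                      # bracket between the mode centres
mask = (grid > lo) & (grid < hi)
cutoff = grid[mask][np.argmin(density[mask])]
print(f"Presence/absence cut-off at log2 ratio = {cutoff:.2f}")

calls = np.where(ratios >= cutoff, "present", "absent/divergent")
print(f"Genes called present: {(calls == 'present').sum()} of {ratios.size}")
```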

Relevance:

100.00%

Publisher:

Abstract:

About 90% of the anthropogenic increase in heat stored in the climate system is found in the oceans. Therefore it is relevant to understand the details of ocean heat uptake. Here we present a detailed, process-based analysis of ocean heat uptake (OHU) processes in HiGEM1.2, an atmosphere-ocean general circulation model (AOGCM) with an eddy-permitting ocean component of 1/3 degree resolution. Similarly to various other models, HiGEM1.2 shows that the global heat budget is dominated by a downward advection of heat compensated by upward isopycnal diffusion. Only in the upper tropical ocean do we find the classical balance between downward diapycnal diffusion and upward advection of heat. The upward isopycnal diffusion of heat is located mostly in the Southern Ocean, which thus dominates the global heat budget. We compare the responses to a 4xCO2 forcing and to an enhancement of the wind stress forcing in the Southern Ocean. This highlights the importance of regional processes for global ocean heat uptake: mainly surface fluxes and convection in the high latitudes, and advection in the Southern Ocean mid-latitudes. Changes in diffusion are less important. In line with the CMIP5 models, HiGEM1.2 shows a band of strong OHU in the mid-latitude Southern Ocean in the 4xCO2 run, which is mostly advective. By contrast, in the high-latitude Southern Ocean regions it is the suppression of convection that leads to OHU. In the enhanced wind stress run, convection is strengthened at high southern latitudes, leading to heat loss, while the magnitude of the OHU in the southern mid-latitudes is very similar to the 4xCO2 results. Remarkably, the global OHU in the enhanced wind stress run is very small; the wind stress forcing merely redistributes heat. We relate the ocean changes at high southern latitudes to the effect of climate change on the Antarctic Circumpolar Current (ACC), which weakens in the 4xCO2 run and strengthens in the wind stress run. The weakening is due to a narrowing of the ACC, caused by an expansion of the Weddell Gyre, and to a flattening of the isopycnals, which are explained by a combination of the wind stress forcing and increased precipitation.

Relevance:

100.00%

Publisher:

Abstract:

Construction of a confidence interval for the process capability index Cpm is often based on a normal approximation with a fixed sample size. In this article, we describe a different approach to constructing a fixed-width confidence interval for Cpm with a preassigned accuracy, using a combination of bootstrap and sequential sampling schemes. The optimal sample size required to achieve a preassigned confidence level is obtained using both two-stage and modified two-stage sequential procedures. The procedure developed is also validated through an extensive simulation study.
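The following sketch conveys the idea with a percentile bootstrap and a simple purely sequential stopping rule that keeps sampling until the interval is narrower than a preassigned width 2d. It is not the authors' two-stage or modified two-stage procedures, and all process parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def cpm(x, lsl, usl, target):
    """Cpm = (USL - LSL) / (6 * sqrt(s^2 + (xbar - T)^2))."""
    tau = np.sqrt(x.var(ddof=1) + (x.mean() - target) ** 2)
    return (usl - lsl) / (6.0 * tau)

def bootstrap_ci(x, lsl, usl, target, b=2000, level=0.95):
    """Percentile bootstrap confidence interval for Cpm."""
    n = x.size
    boot = np.array([cpm(x[rng.integers(0, n, n)], lsl, usl, target)
                     for _ in range(b)])
    alpha = 1.0 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Sample sequentially until the bootstrap CI is narrower than 2*d.
lsl, usl, target, d = 4.0, 16.0, 10.0, 0.15
x = rng.normal(10.3, 1.6, size=30)          # pilot sample
while True:
    lo, hi = bootstrap_ci(x, lsl, usl, target)
    if hi - lo <= 2 * d:
        break
    x = np.append(x, rng.normal(10.3, 1.6, size=10))  # sample more units

print(f"n = {x.size}, Cpm = {cpm(x, lsl, usl, target):.3f}, "
      f"95% CI = [{lo:.3f}, {hi:.3f}]")
```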