937 results for Data quality control
Abstract:
Masks are widely used in different industries, for example the traditional metal industry, hospitals, and the semiconductor industry. Quality is a critical issue in the mask industry because it is related to public health and safety. Traditional quality practices for the manufacturing process have limitations when implemented in the mask industry. This paper investigates the suitability of the Six Sigma quality control method for the manufacturing process in the mask industry, with the aims of providing high-quality products, enhancing process capability, and reducing the defects and returned goods arising in a selected mask manufacturing company. The paper suggests the modifications necessary in the Six Sigma method for effective implementation in the mask industry.
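As a rough illustration of the arithmetic behind a Six Sigma capability assessment, the sketch below computes defects per million opportunities (DPMO), a sigma level, and a Cpk index. All defect counts, specification limits, and measurements are invented for the example; they are not figures from the paper.

```python
# Hypothetical Six Sigma arithmetic: DPMO, sigma level and process capability (Cpk).
# All figures below are made-up examples, not data from the study.
from statistics import mean, stdev
from scipy.stats import norm

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level using the conventional 1.5-sigma shift."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + 1.5

def cpk(samples: list[float], lsl: float, usl: float) -> float:
    """Process capability index from the sample mean and standard deviation."""
    mu, sd = mean(samples), stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sd)

# Example: 87 defective masks found in 10,000 units with 5 inspection points each.
d = dpmo(87, 10_000, 5)
print(f"DPMO: {d:.0f}, sigma level: {sigma_level(d):.2f}")

# Example: filtration-efficiency measurements against hypothetical spec limits.
effs = [96.1, 95.8, 96.4, 95.9, 96.2, 96.0, 95.7, 96.3]
print(f"Cpk: {cpk(effs, lsl=95.0, usl=99.0):.2f}")
```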
Abstract:
Several authors stress that data is a crucial foundation for operational, tactical and strategic decisions (e.g., Redman 1998, Tee et al. 2007). Data provides the basis for decision making, as data collection and processing are typically associated with reducing uncertainty in order to make more effective decisions (Daft and Lengel 1986). While the first series of Information Systems/Information Technology (IS/IT) investments in organizations improved data collection, restricted computational capacity and limited processing power created challenges (Simon 1960). Fifty years on, capacity and processing problems are increasingly less relevant; in fact, the opposite problem exists. Determining data relevance and usefulness is complicated by increased data capture and storage capacity, as well as continual improvements in information processing capability. As the IT landscape changes, businesses are inundated with ever-increasing volumes of data from both internal and external sources, available on both an ad-hoc and real-time basis. More data, however, does not necessarily translate into more effective and efficient organizations, nor does it increase the likelihood of better or timelier decisions. This raises questions about what data managers require to assist their decision making processes.
Abstract:
The National Road Safety Strategy 2011-2020 outlines plans to reduce the burden of road trauma via improvements and interventions relating to safe roads, safe speeds, safe vehicles, and safe people. It also highlights that a key aspect in achieving these goals is the availability of comprehensive data on the issue. The use of data is essential so that more in-depth epidemiologic studies of risk can be conducted, and so that road safety interventions and programs can be effectively evaluated. Before data are used to evaluate the efficacy of prevention programs, the quality of the underlying data sources should be systematically evaluated to ensure that any trends identified reflect true estimates rather than spurious data effects. However, there has been little scientific work specifically focused on establishing core data quality characteristics pertinent to the road safety field, and limited work undertaken to develop methods for evaluating data sources according to these core characteristics. There are a variety of data sources in which traffic-related incidents and resulting injuries are recorded, each collected for its own defined purposes. These include police reports, transport safety databases, emergency department data, hospital morbidity data and mortality data, to name a few. Because these data are collected for specific purposes, each of these sources suffers from some limitations when seeking to gain a complete picture of the problem. Limitations of current data sources include delays in data availability, a lack of accurate and/or specific location information, and underreporting of crashes involving particular road user groups such as cyclists. This paper proposes core data quality characteristics that could be used to systematically assess road crash data sources, providing a standardised approach for evaluating data quality in the road safety field. The potential for data linkage to qualitatively and quantitatively improve the quality and comprehensiveness of road crash data is also discussed.
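One way such characteristics could be operationalised is as simple per-field checks over a crash dataset. The field names, records, and thresholds in the sketch below are illustrative assumptions, not the characteristics defined in the paper.

```python
# Illustrative scoring of a road crash dataset against a few candidate data quality
# characteristics (completeness, timeliness). Fields and thresholds are assumptions.
from datetime import date

crashes = [
    {"crash_id": 1, "date": date(2023, 5, 1), "reported": date(2023, 5, 3),
     "latitude": -27.47, "longitude": 153.02, "road_user": "cyclist"},
    {"crash_id": 2, "date": date(2023, 5, 2), "reported": date(2023, 6, 20),
     "latitude": None, "longitude": None, "road_user": None},
]

def completeness(records, field):
    """Share of records with a non-missing value for the given field."""
    return sum(r[field] is not None for r in records) / len(records)

def timeliness(records, max_delay_days=14):
    """Share of records reported within an acceptable delay."""
    return sum((r["reported"] - r["date"]).days <= max_delay_days
               for r in records) / len(records)

report = {
    "location completeness": completeness(crashes, "latitude"),
    "road-user completeness": completeness(crashes, "road_user"),
    "timeliness (<= 14 days)": timeliness(crashes),
}
for characteristic, score in report.items():
    print(f"{characteristic}: {score:.0%}")
```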
Abstract:
Control of biospecimen quality that is linked to processing is one of the goals of biospecimen science. Consensus is lacking, however, regarding optimal sample quality-control (QC) tools (i.e., markers and assays). The aim of this review was to identify QC tools, for both fluid and solid-tissue samples, based on a comprehensive and critical literature review. The most readily applicable tools are those with a known threshold for the preanalytical variation and a known reference range for the QC analyte. Only a few meaningful markers were identified that meet these criteria, such as CD40L for assessing serum exposure to high temperatures and VEGF for assessing serum freeze-thawing. To fully assess biospecimen quality, multiple QC markers are needed. Here we present the most promising biospecimen QC tools that were identified.
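To show how a QC marker with a known reference range might be applied in practice, the snippet below flags samples whose marker concentration falls outside an assumed range. The numeric ranges and sample values are placeholders for illustration, not validated thresholds from the review.

```python
# Hypothetical QC screen: flag samples whose QC-analyte concentration lies outside
# an assumed reference range. Ranges, units and values are placeholders only.
from dataclasses import dataclass

@dataclass
class QCMarker:
    name: str
    low: float   # lower bound of the assumed reference range
    high: float  # upper bound of the assumed reference range

# Placeholder reference ranges for two markers mentioned in the review.
markers = {
    "CD40L": QCMarker("CD40L", low=0.1, high=5.0),    # ng/mL, assumed
    "VEGF": QCMarker("VEGF", low=20.0, high=500.0),   # pg/mL, assumed
}

def qc_flag(marker: str, concentration: float) -> str:
    m = markers[marker]
    if m.low <= concentration <= m.high:
        return "PASS"
    return "FAIL: possible preanalytical deviation (e.g., heat exposure or freeze-thaw)"

samples = [("S-001", "CD40L", 7.8), ("S-002", "VEGF", 130.0)]
for sample_id, marker, value in samples:
    print(sample_id, marker, value, "->", qc_flag(marker, value))
```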
Abstract:
Under the concept of Total Quality Control (TQC), and based on their experience, the authors discuss the potential demands on the quality of immunization services and possible solutions to these demands. TQC is a quality management concept put forward in the 1960s by the Americans A. V. Feigenbaum and J. M. Juran; the well-known ISO 9000 family of standards is a set of quality management standards built on the TQC philosophy and has become the most widely accepted and authoritative international framework for quality management and quality assurance [1-2]. The 21st century is a century of quality, and implementing TQC to continuously improve product and service quality has become an important means by which organisations in every sector refine themselves and secure their survival and development under increasingly fierce market competition. Immunization is a key measure for preventing and controlling infectious diseases and protecting population health. In immunization work, the product is the immunization service itself, and the demand side (the customers) are the people who receive that service, i.e., the consumers of the product. With rapid social development and rising expectations for health, higher quality requirements are being placed on immunization services. This paper comprehensively analyses customers' quality requirements for immunization services under the TQC model and offers a preliminary discussion of how service quality can be improved.
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke.
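To make the idea of a no-reference image quality metric concrete, here is a small sketch computing two generic indicators (RMS contrast and histogram entropy) on grayscale frames. This is not the metric introduced in the paper; it is a stand-in showing the shape of such an evaluation, with synthetic frames in place of real UGV imagery.

```python
# Generic no-reference image quality indicators (RMS contrast and entropy) as a
# stand-in for the kind of metric discussed; not the paper's proposed metric.
import numpy as np

def rms_contrast(gray: np.ndarray) -> float:
    """Standard deviation of normalised pixel intensities."""
    g = gray.astype(np.float64) / 255.0
    return float(g.std())

def entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic frames: one with structure, one washed out (as airborne dust might cause).
rng = np.random.default_rng(0)
clear_frame = rng.integers(0, 256, size=(240, 320)).astype(np.uint8)
dusty_frame = np.full((240, 320), 180, dtype=np.uint8)

for name, frame in [("clear", clear_frame), ("dusty", dusty_frame)]:
    print(f"{name}: contrast={rms_contrast(frame):.3f}, entropy={entropy(frame):.2f} bits")
```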
Abstract:
While data quality has been identified as a critical factor associated with enterprise resource planning (ERP) failure, the relationship between ERP stakeholders, the information they require, and ERP outcomes continues to be poorly understood. Applying stakeholder theory to the problem of ERP performance, we put forward a framework articulating the fundamental differences in the way users differentiate between ERP data quality and utility. We argue that the failure of ERPs to produce significant organisational outcomes can be attributed to conflict between stakeholder groups over whether the data contained within an ERP is of adequate 'quality'. The framework provides guidance as to how to manage data flows between stakeholders, offering insight into each of their specific data requirements. The framework also supports the idea that stakeholder affiliation dictates the assumptions and core values held by individuals, driving their data needs and their perceptions of data quality and utility.
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and it therefore intensifies the search in regions that contain good solutions. Another model is developed for risk minimization which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with those of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality.
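A highly simplified Monte Carlo sketch of the underlying idea: sample uncertain inputs, compute a dissolved-oxygen (DO) level through a stand-in response function, and evaluate a fuzzy membership of "low water quality". The response function, input distributions, and membership bounds below are invented for illustration and are not the study's BOD-DO model.

```python
# Simplified Monte Carlo estimate of a fuzzy risk of low water quality.
# The DO response function, input distributions and membership bounds are
# invented for illustration; they are not the study's calibrated model.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs (hypothetical distributions).
bod_load = rng.normal(loc=12.0, scale=2.0, size=n)        # mg/L BOD entering the reach
streamflow = rng.lognormal(mean=3.0, sigma=0.3, size=n)   # m^3/s

# Stand-in response: DO decreases with BOD load and improves with dilution.
do_saturation = 8.5
do = np.clip(do_saturation - 5.0 * bod_load / streamflow, 0.0, do_saturation)

def low_wq_membership(do_level, do_low=4.0, do_high=6.0):
    """Fuzzy membership of 'low water quality': 1 below do_low, 0 above do_high."""
    return np.clip((do_high - do_level) / (do_high - do_low), 0.0, 1.0)

fuzzy_risk = low_wq_membership(do).mean()   # expected membership over realisations
crisp_risk = (do < 4.0).mean()              # probability DO falls below 4 mg/L
print(f"Fuzzy risk of low water quality: {fuzzy_risk:.3f}")
print(f"P(DO < 4 mg/L): {crisp_risk:.3f}")
```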
Abstract:
Tower platforms, with instrumentation at six levels above the surface up to a height of 30 m, were used to record various atmospheric parameters in the surface layer. Sensors for measuring both mean and fluctuating quantities were used, the majority of them indigenously built. Soil temperature sensors, down to a depth of 30 cm below the surface, were also connected to the mean data logger. A PC-based data acquisition system built at the Centre for Atmospheric Sciences, IISc, was used to acquire the data from fast-response sensors. This paper reports the various components of a typical MONTBLEX tower observatory and describes the experiments carried out in the surface layer at four sites over the monsoon trough region as part of the MONTBLEX programme. It also describes and discusses several checks made on randomly selected tower data-sets acquired during the experiment. Checks made include visual inspection of time traces from various sensors, comparative plots of sensors measuring the same variable, wind and temperature profile plots, calculation of roughness lengths, statistical and stability parameters, diurnal variation of stability parameters, and plots of probability density and energy spectrum for the different sensors. Results from these checks are found to be very encouraging and reveal the potential for further detailed analysis to understand more about surface layer characteristics.
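One of the checks mentioned, estimating the roughness length from multi-level mean winds, can be sketched as a least-squares fit of the neutral logarithmic wind profile u(z) = (u*/k) ln(z/z0). The measurement heights and wind speeds below are made-up values, not MONTBLEX data.

```python
# Sketch of a roughness-length check: fit the neutral log wind profile
# u(z) = (u*/k) * ln(z / z0) to multi-level mean winds and recover z0 and u*.
# Heights and wind speeds are hypothetical, not MONTBLEX observations.
import numpy as np

k = 0.4                                                   # von Karman constant
heights = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0])      # m, six tower levels
winds = np.array([2.1, 2.6, 3.1, 3.6, 4.1, 4.6])          # m/s, hypothetical means

# Linear fit of u against ln(z): slope = u*/k, intercept = -(u*/k) * ln(z0).
slope, intercept = np.polyfit(np.log(heights), winds, deg=1)
u_star = k * slope
z0 = np.exp(-intercept / slope)

print(f"friction velocity u* = {u_star:.2f} m/s")
print(f"roughness length z0 = {z0:.3f} m")
```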
Abstract:
A modeling framework is presented in this paper, integrating hydrologic scenarios projected from a General Circulation Model (GCM) with a water quality simulation model to quantify the expected future risk. Statistical downscaling with Canonical Correlation Analysis (CCA) is carried out to develop future scenarios of hydro-climate variables, starting with simulations provided by a GCM. A Multiple Logistic Regression (MLR) is used to quantify the risk of Low Water Quality (LWQ) corresponding to a threshold quality level, with streamflow and water temperature as explanatory variables. An Imprecise Fuzzy Waste Load Allocation Model (IFWLAM) presented in an earlier study is then used to develop adaptive policies to address the projected water quality risks. Application of the proposed methodology is demonstrated with the case study of the Tunga-Bhadra river in India. The results show that the projected changes in the hydro-climate variables tend to diminish DO levels, thus increasing the future risk of LWQ.
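The multiple logistic regression step can be illustrated with a small sketch that fits P(low water quality) as a function of streamflow and water temperature. The synthetic data and the rule used to label it are assumptions for the example only, not the study's calibrated model.

```python
# Illustrative multiple logistic regression of low-water-quality risk on streamflow
# and water temperature. The synthetic data and labelling rule are assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000
streamflow = rng.lognormal(mean=3.0, sigma=0.4, size=n)   # m^3/s
water_temp = rng.normal(loc=26.0, scale=3.0, size=n)      # deg C

# Synthetic label: low water quality becomes more likely at low flows and warm water.
logit = 2.0 - 0.15 * streamflow + 0.25 * (water_temp - 26.0)
low_wq = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([streamflow, water_temp])
model = LogisticRegression().fit(X, low_wq)

# Projected risk for a hypothetical future scenario: lower flow, warmer water.
future = np.array([[12.0, 29.0]])
print(f"P(LWQ | Q=12 m^3/s, T=29 C) = {model.predict_proba(future)[0, 1]:.2f}")
```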