937 results for Data quality control
Abstract:
It has long been said that the market itself is the ideal regulator of all evils that may arise among traders, and that free and fair competition among manufacturers will adequately ensure fair dealing for consumers. However, these are pious hopes that markets anywhere in the world have so far failed to fulfil. Consumers are lured by advertisements issued by manufacturers and sellers that often prove false and misleading, and untrue statements and claims about the quality and performance of products effectively deceive them. The plight of the consumer remains an unheard cry in the wilderness. In this sorry state of affairs, it is quite natural that consumers look to governments for a helping hand. Governmental endeavours to ensure quality in goods are diverse: different tools are formulated and put to use depending on the requirements of the facts and circumstances. This thesis is an enquiry into these measures.
Abstract:
This article models the interactions between safety and quality control and the stage of distribution in the food marketing complex.
Abstract:
Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
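The paper's integrated quality model is not reproduced here; as a rough illustration of the metadata-inheritance mechanism the abstract mentions, the hypothetical Python sketch below lets a granule fall back to its parent dataset's quality fields unless it overrides them. All class names, field names and values are assumptions, not the authors' schema.

```python
# Illustrative sketch (not the paper's model): dataset-level quality metadata
# inherited by member granules unless a granule overrides a field.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class QualityInfo:
    lineage: Optional[str] = None       # processing history
    accuracy: Optional[str] = None      # e.g. "RMSE 0.4 K"
    completeness: Optional[str] = None  # e.g. "97% valid pixels"


@dataclass
class Dataset:
    name: str
    quality: QualityInfo = field(default_factory=QualityInfo)


@dataclass
class Granule:
    name: str
    parent: Dataset
    quality: QualityInfo = field(default_factory=QualityInfo)

    def effective_quality(self) -> QualityInfo:
        """Granule-level values win; missing fields fall back to the parent dataset."""
        merged = QualityInfo()
        for f in ("lineage", "accuracy", "completeness"):
            setattr(merged, f, getattr(self.quality, f) or getattr(self.parent.quality, f))
        return merged


sst = Dataset("SST_L3", QualityInfo(lineage="L2 -> L3 composite", accuracy="RMSE 0.4 K"))
g = Granule("SST_L3_20240101", sst, QualityInfo(completeness="97% valid pixels"))
print(g.effective_quality())  # inherits lineage and accuracy, keeps its own completeness
```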
Abstract:
The quality control, validation and verification of the European Flood Alert System (EFAS) are described. EFAS is designed as a flood early warning system at pan-European scale, to complement national systems and provide flood warnings more than 2 days before a flood. On average 20–30 alerts per year are sent out to the EFAS partner network, which consists of 24 national hydrological authorities responsible for transnational river basins. Quality control of the system includes the evaluation of hits, misses and false alarms, showing that EFAS produces hits more than 50% of the time. Furthermore, the skill of both the meteorological and the hydrological forecasts is evaluated, and results are included here for a 10-year period. Next, end-user needs and feedback are systematically analysed. Suggested improvements, such as real-time river discharge updating, are currently being implemented.
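The alert verification described above relies on standard categorical scores. As a minimal worked example (with made-up counts, not EFAS figures), the sketch below computes the hit rate (probability of detection), the false-alarm ratio and the critical success index from a 2×2 contingency table of alerts versus observed floods.

```python
# Minimal sketch (illustrative numbers, not EFAS results): categorical
# verification scores from a 2x2 contingency table of alerts vs. observed floods.
hits = 18          # alert issued and flood observed
misses = 7         # flood observed but no alert issued
false_alarms = 11  # alert issued but no flood observed

pod = hits / (hits + misses)                  # probability of detection (hit rate)
far = false_alarms / (hits + false_alarms)    # false-alarm ratio
csi = hits / (hits + misses + false_alarms)   # critical success index

print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")
```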
Abstract:
Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. The profile describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualizing uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. Finally, we describe new open-source implementations of these specifications, comprising both clients and servers.
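WMS-Q itself is not reproduced here; as a hedged illustration of what a quality-enabled request might look like, the sketch below builds a standard WMS 1.3.0 GetMap URL for a hypothetical uncertainty layer rendered with a hypothetical "confidence_contours" style. The endpoint, layer and style names are assumptions, not part of the WMS-Q specification; only the query parameters follow the ordinary GetMap interface.

```python
# Hypothetical example: requesting an uncertainty rendering from a
# quality-enabled WMS endpoint. The endpoint URL, layer name and style name
# are placeholders; the query parameters are the standard WMS 1.3.0 GetMap ones.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature_uncertainty",  # hypothetical quality layer
    "STYLES": "confidence_contours",                  # hypothetical uncertainty style
    "CRS": "CRS:84",
    "BBOX": "-10,50,5,60",
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}

url = "https://example.org/wms-q?" + urlencode(params)
print(url)  # the returned image could then be overlaid on the parent data layer
```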
Abstract:
Amazonian oils and fats display unique triacylglycerol (TAG) profiles and, because of their economic importance as renewable raw materials and their use by the cosmetic and food industries, are often subject to adulteration and forgery. Representative samples of these oils (andiroba, Brazil nut, buriti, and passion fruit) and fats (cupuacu, murumuru, and ucuba) were characterized without pre-separation or derivatization via dry (solvent-free) matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Characteristic TAG profiles were obtained for each oil and fat. Dry MALDI-TOF MS provides typification and direct, detailed information on their variable combinations of fatty acids via TAG profiles. A database built from these spectra could be developed and used for their fast and reliable typification, application screening, and quality control.
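The proposed spectral database could support typification by spectral matching; one plausible (and purely illustrative) approach is cosine similarity over binned m/z intensities, sketched below with made-up peak lists rather than real TAG profiles.

```python
# Illustrative sketch (hypothetical peak lists): matching an unknown TAG profile
# against reference spectra by cosine similarity over shared m/z bins.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two {m/z bin: intensity} dictionaries."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Reference TAG profiles (binned m/z -> relative intensity); values are made up.
reference = {
    "buriti":     {873: 100, 871: 62, 899: 35},
    "Brazil nut": {877: 100, 875: 80, 851: 44},
}
unknown = {873: 95, 871: 70, 899: 30}

best = max(reference, key=lambda name: cosine(unknown, reference[name]))
print(best)  # -> "buriti" for this made-up profile
```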
Abstract:
Good data quality with high complexity is often seen as important. Intuition says that the higher the accuracy and complexity of the data, the better the analytic solutions become, provided the increased computing time can be handled. However, for most practical computational problems, high-complexity data means that computation times become too long or that the heuristics used to solve the problem have difficulty reaching good solutions. This is stressed even further as the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which finds optimal locations in a network for p supply points that serve n demand points. To do so, we vary both the accuracy of the network (the number of nodes) and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal location we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show only a limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvement from increasing the accuracy of the road network is much larger. The results also show that the best choice of network accuracy depends on the complexity of the combinatorial problem (varying p).
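The p-median objective and the annealing move underlying this kind of study are simple to state. The sketch below is not the authors' implementation: it uses a made-up toy instance and a fixed cooling schedule standing in for adaptive temperature tuning, and shows only the weighted-distance cost function and a single swap move.

```python
# Compact sketch (not the authors' code): the p-median objective and one
# simulated-annealing swap move on a small made-up distance matrix.
import math
import random

def pmedian_cost(dist, demand, supply):
    """Sum over demand points of weight times distance to the nearest chosen supply node."""
    return sum(w * min(dist[i][j] for j in supply) for i, w in enumerate(demand))

def anneal_step(dist, demand, supply, candidates, temperature):
    """Swap one chosen node for a random candidate; accept worse moves with Boltzmann probability."""
    current = pmedian_cost(dist, demand, supply)
    node_out = random.choice(sorted(supply))
    node_in = random.choice([c for c in candidates if c not in supply])
    trial = (supply - {node_out}) | {node_in}
    new = pmedian_cost(dist, demand, trial)
    if new < current or random.random() < math.exp((current - new) / temperature):
        return trial, new
    return supply, current

# Toy instance: 4 demand points, 4 candidate nodes, p = 2.
dist = [[0, 4, 7, 9], [4, 0, 3, 6], [7, 3, 0, 2], [9, 6, 2, 0]]
demand = [5, 1, 3, 2]            # weights of the demand points
supply = {0, 1}                  # initial choice of p = 2 supply nodes
cost = pmedian_cost(dist, demand, supply)
for t in (10.0, 5.0, 1.0, 0.1):  # crude cooling schedule standing in for adaptive tuning
    supply, cost = anneal_step(dist, demand, supply, candidates=range(4), temperature=t)
print(supply, cost)
```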
Abstract:
This is a theoretical analysis of modern quality control (QC) concepts, as reflected in Total Quality Control (TQC), applied to the service areas of industrial companies and to service-providing companies.