943 results for Quality models
Abstract:
Background. Insufficient and poor-quality sleep among adolescents affects not only cognitive functioning but also the overall health of the individual. Existing research suggests that adolescents from different ethnic groups exhibit differing sleep patterns. However, little research focuses on sleep patterns and associated factors (e.g., tobacco use, mental health indicators) among Hispanic youth. Methods. The study population (n=2,536) included students in grades 9-12 who attended one of three public high schools along the Texas-Mexico border in 2003. This was a cross-sectional study using secondary data collected via a web-based, confidential, self-administered survey. Separate logistic regression models were estimated to identify factors associated with reduced (<9 hours/night) and poor-quality sleep on average during weeknights. Results. Of the participants, 49.5% reported reduced sleep and 12.8% reported poor-quality sleep. Factors significantly (p<0.05) associated with poor-quality sleep were: often feeling stressed or anxious (OR=5.49), being born in Mexico (OR=0.65), using a computer/playing video games 15+ hours per week (OR=2.29), working (OR=1.37), being a current smoker (OR=2.16), and being a current alcohol user (OR=1.64). Factors significantly associated with reduced quantity of sleep were: often feeling stressed or anxious (OR=2.74), often having headaches/stomachaches (OR=1.77), being a current marijuana user (OR=1.70), being a current methamphetamine user (OR=4.92), and being a current alcohol user (OR=1.27). Discussion. Previous research suggests several factors that can influence sleep quality and quantity in adolescents. This paper discusses the factors (e.g., work, smoking, alcohol) found to be associated with poor sleep quality and reduced sleep quantity in the Hispanic adolescent population. Reduced sleep quantity (81.20% of participants) and poor sleep quality (12.80% of participants) were also found in high school students from South Texas.
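To make the modeling step concrete, here is a minimal sketch (not the study's code; the file and column names are hypothetical placeholders) of how the two separate logistic regressions described above could be estimated in Python with statsmodels:

```python
# Minimal sketch: separate logistic regressions for poor-quality sleep and
# reduced sleep quantity, with odds ratios as reported in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sleep_survey.csv")  # hypothetical survey extract

# Hypothetical binary predictors mirroring the factors listed above
predictors = ("stressed_anxious + born_in_mexico + screen_15plus_hrs"
              " + works + current_smoker + current_alcohol")

quality_model = smf.logit(f"poor_quality_sleep ~ {predictors}", data=df).fit()
quantity_model = smf.logit(f"reduced_sleep ~ {predictors}", data=df).fit()

# Exponentiated coefficients give odds ratios with 95% confidence intervals
print(np.exp(quality_model.params))
print(np.exp(quality_model.conf_int()))
```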
Abstract:
The purpose of the study was to describe regionalized systems of perinatal care serving predominantly low-income Mexican-American women in rural underserved areas of Texas. The study focused upon ambulatory care; however, it provided a vehicle for examination of the health care system. The questions posed at the onset of the study included: (1) How well do regional organizations with various patterns of staffing and funding levels perform basic functions essential to ambulatory perinatal care? (2) Is there a relationship between the type of organization, its performance, and pregnancy outcome? (3) Are there specific recommendations which might improve an organization's future performance? A number of factors, including maldistribution of resources and providers, economic barriers, inadequate means of transportation, and physician resistance to transfer of patients between levels of care, have impeded the development of regionalized systems of perinatal health care, particularly in rural areas. However, studies have consistently emphasized the role of prenatal care in the early detection of risk and treatment of complications of pregnancy and childbirth, with subsequent improvement in pregnancy outcomes. This study examined the "system" of perinatal care in rural areas using three basic regional models: preventive care, limited primary care, and full primary care. Information documented in patient clinical records was used to compare the quality of ambulatory care provided in the three regional models. The study population included 390 women who received prenatal care in one of the seven study clinics. They were predominantly Hispanic, married, and of low income, with a high proportion of teenagers and women over 35. Twenty-eight percent of the women qualified as migrants. The major findings of the study are listed below: (1) Almost half of the women initiated care in the first trimester. (2) Three-fourths of the women met or exceeded the recommended number of prenatal visits. (3) There was a low rate of clinical problem recognition; additional follow-up is needed to determine the reasons. (4) Cases with a tracer condition had significantly more visits with monitoring of the clinical condition. (5) Almost 90% of all referrals were completed. (6) Only 60% of mothers had postpartum follow-up, while almost 90% of their newborns received care. (7) The incidence of infants weighing 2,500 grams or less was 4.2%.
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed-effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best- and worst-case scenarios. Most variables (17/33, 52%) had <1% missing data in both RS and PROMMTT. Of the remaining variables, 50% had less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p≤0.021). Survival time predicted missingness for 50% of RS variables and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation yielded similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis estimating correct classification under upper (best-case) and lower (worst-case) bounds may be more informative than multiple imputation, which provided results similar to complete case analysis.
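A simplified reading of the proposed best/worst-case sensitivity analysis can be sketched as follows (our own illustration, not the study's code; names such as `massive_transfusion` are hypothetical): cases with missing predictors are counted as correctly classified in the best case and as misclassified in the worst case, which bounds the correct classification percentage.

```python
# Sketch: bound a model's correct-classification rate under best/worst-case
# assumptions about cases that cannot be classified due to missing predictors.
import pandas as pd

def sensitivity_bounds(df: pd.DataFrame, predict_mt, predictor_cols,
                       outcome="massive_transfusion"):
    """predict_mt: callable mapping a complete-case DataFrame to 0/1 predictions.
    Returns (lower, upper) bounds on the correct-classification proportion."""
    complete = df.dropna(subset=predictor_cols)
    n_total = len(df)
    n_correct = int((predict_mt(complete) == complete[outcome]).sum())
    n_incomplete = n_total - len(complete)
    lower = n_correct / n_total                   # worst case: all misclassified
    upper = (n_correct + n_incomplete) / n_total  # best case: all correct
    return lower, upper
```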
Abstract:
The objectives of this dissertation were to evaluate the health outcomes, quality improvement measures, and long-term cost-effectiveness and impact on diabetes-related microvascular and macrovascular complications of a community health worker-led, culturally tailored diabetes education and management intervention provided to uninsured Mexican Americans in an urban faith-based clinic. A prospective, randomized controlled repeated measures design was employed to compare the intervention effects between: (1) an intervention group (n=90) that participated in the Community Diabetes Education (CoDE) program along with usual medical care; and (2) a wait-listed comparison group (n=90) that received only usual medical care. Changes in hemoglobin A1c (HbA1c) and secondary outcomes (lipid status, blood pressure and body mass index) were assessed using linear mixed models and an intention-to-treat approach. The CoDE group experienced a greater reduction in HbA1c (-1.6%, p<.001) than the control group (-.9%, p<.001) over the 12-month study period. After adjusting for the group-by-time interaction, antidiabetic medication use at baseline, changes made to the antidiabetic regimen over the study period, duration of diabetes, and baseline HbA1c, a statistically significant intervention effect on HbA1c (-.7%, p=.02) was observed for CoDE participants. Process and outcome quality measures were evaluated using multiple mixed-effects logistic regression models. Assessment of quality indicators revealed that the CoDE intervention group was significantly more likely to have received a dilated retinal examination than the control group, and 53% achieved an HbA1c below 7% compared with 38% of control group subjects. Long-term cost-effectiveness and impact on diabetes-related health outcomes were estimated through simulation modeling using the rigorously validated Archimedes Model. Over a 20-year time horizon, CoDE participants were forecast to have less proliferative diabetic retinopathy, fewer foot ulcers, and fewer foot amputations than control group subjects who received usual medical care. An incremental cost-effectiveness ratio of $355 per quality-adjusted life-year gained was estimated for CoDE intervention participants over the same time period. The results from the three areas of program evaluation (impact on short-term health outcomes, quantification of improvement in quality of diabetes care, and projection of long-term cost-effectiveness and impact on diabetes-related health outcomes) provide evidence that a community health worker can be a valuable resource in reducing diabetes disparities for uninsured Mexican Americans. This evidence supports the formal integration of community health workers as members of the diabetes care team.
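For reference, the $355-per-QALY figure reported above is an incremental cost-effectiveness ratio (ICER) in its standard form, comparing incremental cost with incremental quality-adjusted life-years between the CoDE intervention and usual care:

\[
\mathrm{ICER} = \frac{C_{\mathrm{CoDE}} - C_{\mathrm{usual}}}{\mathrm{QALY}_{\mathrm{CoDE}} - \mathrm{QALY}_{\mathrm{usual}}} \approx \$355 \text{ per QALY gained over the 20-year horizon.}
\]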
Abstract:
The basic hedonic hypothesis is that goods are valued for their utility-bearing characteristics and not for the good itself. Each attribute can be evaluated by consumers when making a purchasing decision, and an implicit price can be identified for each of them. Thus, the observed price of a given good can be analyzed as the sum of the implicit prices paid for each quality attribute. The literature has reported hedonic model estimates for wines, which are excellent examples of differentiated goods worldwide. The impact of different wine attributes (intrinsic or extrinsic) on consumers' willingness to pay has been analyzed with divergent results. Wines from "New World" producers seem to be appreciated for different attributes than wines produced in the "Old World". Moreover, "Old World" and "New World" consumers seem to value wine characteristics differently. To our knowledge, no cross-country analysis has dealt with "New World" wines in "Old World" countries, leaving an important gap in understanding the underlying attributes influencing buying decisions.
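A common log-linear hedonic specification makes this concrete (a generic sketch, not any particular study's model): the log of the observed price is regressed on the attribute vector, so each coefficient acts as the implicit (relative) price of its attribute.

\[
\ln P_i = \beta_0 + \sum_{k=1}^{K}\beta_k x_{ik} + \varepsilon_i
\]

where \(P_i\) is the observed price of wine \(i\), \(x_{ik}\) the level of attribute \(k\) (intrinsic or extrinsic), and \(\beta_k\) its implicit price.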
Abstract:
The climatic conditions of mountain habitats are greatly influenced by topography. Large differences in microclimate occur with small changes in elevation, and this complex interaction is an important determinant of mountain plant distributions. In spite of this, elevation is not often considered as a relevant predictor in species distribution models (SDMs) for mountain plants. Here, we evaluated the importance of including elevation as a predictor in SDMs for mountain plant species. We generated two sets of SDMs for each of 73 plant species that occur in the Pacific Northwest of North America; one set of models included elevation as a predictor variable and the other set did not. AUC scores indicated that omitting elevation as a predictor resulted in a negligible reduction of model performance. However, further analysis revealed that the omission of elevation resulted in large over-predictions of species' niche breadths; this effect was most pronounced for species that occupy the highest elevations. In addition, the inclusion of elevation as a predictor constrained the effects of other predictors that spuriously affected the outcome of the models generated without elevation. Our results demonstrate that the inclusion of elevation as a predictor variable improves the quality of SDMs for high-elevation plant species. Because of the negligible AUC score penalty for over-predicting niche breadth, our results support the notion that AUC scores alone should not be used as a measure of model quality. More generally, our results illustrate the importance of selecting biologically relevant predictor variables when constructing SDMs.
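The paired-model design described above can be sketched as follows (our own illustration, not the authors' pipeline; the classifier choice, file name, and predictor names are assumptions):

```python
# Sketch: fit one SDM per predictor set (with and without elevation) and
# compare AUC, mirroring the paired-model comparison described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("occurrences.csv")  # hypothetical presence/absence records
climate = ["temp_mean", "precip_annual", "snow_duration"]

for predictors in (climate, climate + ["elevation"]):
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[predictors], df["presence"], test_size=0.3, random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(predictors, round(auc, 3))
```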
Abstract:
The source rock potential of Cretaceous organic-rich whole rock samples from Deep Sea Drilling Project (DSDP) wells offshore southwestern Africa was investigated using bulk and quantitative pyrolysis techniques. The sample material was taken from organic-rich intervals of Aptian-, Albian- and Turonian-aged core samples from DSDP Site 364 offshore Angola, DSDP well 530A north of the Walvis Ridge offshore Namibia, and DSDP well 361 offshore South Africa. The analytical program included TOC, Rock-Eval, pyrolysis-GC, bulk kinetics and micro-scale sealed vessel (MSSV) pyrolysis experiments. The results were used to determine differences in source rock petroleum-type organofacies, petroleum composition, gas/oil ratio (GOR) and pressure-volume-temperature (PVT) behavior of hydrocarbons generated from these black shales for petroleum system modeling purposes. The investigated Aptian and Albian organic-rich shales proved to contain excellent quality marine kerogens. The highest source rock potential was identified in sapropelic shales of DSDP Site 364, containing very homogeneous Type II and organic sulfur-rich Type IIS kerogen. These generate P-N-A low-wax oils and low-GOR sulfur-rich oils, whereas Type III kerogen-rich silty sandstones of DSDP well 361 show a potential for gas/condensate generation. Bulk kinetic experiments on these samples indicate that organic sulfur content influences kerogen transformation rates, Type IIS kerogen being the least stable. South of the Walvis Ridge, the Turonian contains predominantly Type III kerogen. North of the Walvis Ridge, the Turonian black shales contain Type II kerogen and have the potential to generate P-N-A low- and high-wax oils, the latter with a high GOR at high maturity. Our results provide the first compositional kinetic description of Cretaceous organic-rich black shales and demonstrate the excellent source rock potential, especially of the Aptian-aged source rock, that has been recognized in a number of South Atlantic offshore basins.
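As background, bulk kinetic results of this kind are conventionally summarized as a set of parallel first-order reactions sharing a single frequency factor and distributed over discrete activation energies; the following is that generic framework, not the study's fitted parameters:

\[
\frac{dx_i}{dt} = -A\, e^{-E_{a,i}/RT}\, x_i,
\qquad
\mathrm{TR}(t) = 1 - \sum_i f_i\, x_i(t)
\]

where \(f_i\) is the fraction of the initial petroleum potential assigned to activation energy \(E_{a,i}\), \(A\) is the frequency factor, and TR is the transformation ratio; the lower stability of Type IIS kerogen corresponds to a shift of the activation-energy distribution toward lower energies.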
Abstract:
Sea surface temperatures and sea-ice extent are the most critical variables for evaluating the Southern Ocean paleoceanographic evolution in relation to the development of the global carbon cycle, atmospheric CO2 variability and ocean-atmosphere circulation. In contrast to the Atlantic and Indian sectors, the Pacific sector of the Southern Ocean has so far been insufficiently investigated. To cover this gap we present diatom-based estimates of summer sea surface temperature (SSST) and winter sea-ice concentration (WSI) from 17 sites in the polar South Pacific to study the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 cal. years BP). The applied statistical methods are the Imbrie and Kipp Method (IKM) and the Modern Analog Technique (MAT), used to estimate temperature and sea-ice concentration, respectively. Our data display a distinct LGM east-west differentiation in SSST and WSI, with steeper latitudinal temperature gradients and a winter sea-ice edge located consistently north of the Pacific-Antarctic Ridge in the Ross Sea sector. In the eastern sector of our study area, which is governed by the Amundsen Abyssal Plain, the estimates yield weaker latitudinal SSST gradients together with a variable, extended winter sea-ice field. In this sector, sea ice may sporadically have reached the area of the present Subantarctic Front at its maximum LGM expansion. This pattern points to topographic forcing as the major control on frontal system location and sea-ice extent in the western Pacific sector, whereas atmospheric conditions such as the Southern Annular Mode and ENSO affected the oceanographic conditions in the eastern Pacific sector. Although it is difficult to depict the location and physical nature of the frontal systems separating the glacial Southern Ocean water masses into different zones, we found a distinct temperature gradient in latitudes straddled by the modern Southern Subtropical Front. Considering that the glacial temperatures north of this zone are similar to modern ones, we suggest that this gradient represents the Glacial Southern Subtropical Front (GSSTF), which delimits the zone of strongest glacial SSST cooling (>4 K) to its north. The southern boundary of the zone of maximum cooling is close to the glacial 4°C isotherm. This isotherm, which is in the range of SSST at the modern Antarctic Polar Front (APF), represents a circum-Antarctic feature and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). We also assume that a glacial front was established at the northern average winter sea-ice edge, comparable with the modern Southern Antarctic Circumpolar Current Front (SACCF); during the glacial, this front would have been located in the area of the modern APF. The northward deflection of colder-than-modern surface waters along the South American continent led to significant cooling of the glacial Humboldt Current surface waters (4-8 K), which affected temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also have caused significant cooling in the Atlantic and Indian sectors of the Southern Ocean, and may thus have enhanced the thermal differentiation of the Southern Ocean and Antarctic continental cooling. Comparison with numerical simulations of last-glacial temperature and sea ice shows that most current models overestimate summer and winter sea-ice cover and that only a few models reproduce our temperature data reasonably well.
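The Modern Analog Technique used for the WSI estimates can be sketched in a few lines (a minimal illustration under common conventions, with squared chord distance as the dissimilarity measure; the choice of k and the weighting scheme are assumptions, not those of this study):

```python
# Sketch of the Modern Analog Technique (MAT): estimate an environmental value
# for a fossil diatom assemblage as the dissimilarity-weighted mean of its
# closest modern analogs.
import numpy as np

def squared_chord(a, b):
    """Squared chord distance between two relative-abundance vectors."""
    return np.sum((np.sqrt(a) - np.sqrt(b)) ** 2)

def mat_estimate(fossil, modern_assemblages, modern_values, k=5):
    d = np.array([squared_chord(fossil, m) for m in modern_assemblages])
    best = np.argsort(d)[:k]             # indices of the k best modern analogs
    w = 1.0 / np.maximum(d[best], 1e-9)  # inverse-distance weights
    return np.average(modern_values[best], weights=w)
```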
Abstract:
This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of nonideal situations, increasing the integrity of surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system aimed at changing the data processing methods accordingly.
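The idea can be illustrated with a toy sketch (our own, heavily simplified; the observables, thresholds, and actions are illustrative assumptions, not those of the paper):

```python
# Toy rule-based sketch: observables derived from the processing chain trigger
# rules that flag nonideal situations and select alternative processing methods.
RULES = [
    # (condition over observables, processing change to apply)
    (lambda o: o["residual_bias_m"] > 15.0, "recalibrate_sensor_model"),
    (lambda o: o["track_continuity"] < 0.9, "switch_to_robust_association"),
    (lambda o: o["duplicate_report_rate"] > 0.05, "enable_deduplication_filter"),
]

def evaluate(observables: dict) -> list[str]:
    """Return the processing changes triggered by the current observables."""
    return [action for condition, action in RULES if condition(observables)]

print(evaluate({"residual_bias_m": 22.0, "track_continuity": 0.95,
                "duplicate_report_rate": 0.01}))
```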
Abstract:
Nowadays, a wide range of mobile augmented reality (mAR) applications is available on the market, and the user base of mobile AR-capable devices (smartphones) is rapidly increasing. Nevertheless, as in other mobile segments, business models to capture the value of mAR are not clearly defined yet. In this paper, we sketch the big picture of the commercial offer of mAR applications in order to inform a subsequent analysis of business models that may successfully support the evolution of mAR. We gathered more than 400 mAR applications from the Android Market and analyzed the offer as a whole, taking into account technology aspects, pricing schemes, and user adoption factors. Results show, for example, that application providers are not expecting to generate revenue from direct downloads, although they are producing high-quality applications that are well rated by users.
Abstract:
The province of Salta is located in the northwest of Argentina, on the border with Bolivia, Chile and Paraguay. Its capital, the city of Salta, concentrates half of the inhabitants of the province and has grown to 600,000 inhabitants from the small Spanish town founded in 1583. The city is crossed by the Arenales River, which descends from the nearby mountains to the north and serves both as a source of water and as the outlet of the sewers. With the city's growth, however, the river has become a focus of infection and of remarkable unhealthiness. It is necessary to undertake a plan for the recovery of the river, directed at the attainment of well-being and the improvement of the community's quality of life. The fundamental idea of the plan is to achieve an ordering of the river basin and an integral management of the channel and its surroundings, including cleanup. The improvement of water quality, the healthiness of the surroundings and the improvement of the environment must go hand in hand with the development of activities for sport, relaxation and tourism, the establishment of breeding grounds, kitchen gardens and micro-enterprises with clean production, and other actions that make the river valuable to society, that being a basic factor for its care and sustainable use. The present pollution is organic, chemical, industrial and domestic, caused by the disposal of garbage and sewer effluents, and it affects not only the flora and small fauna, destroying biodiversity, but also the health of the people living on the river's margins. Within the plan it is necessary to consider, besides water and environmental cleanup and the prevention of floods, the planning of aggregate extraction, the infrastructure and bank consolidation works, and the arrangement of the whole river basin. Public intervention at the state, provincial and local levels must be considered, as well as private intervention. In the model it has been necessary to include a sub-model for selecting the entity best suited to reach the proposed objectives, responding to the social, environmental and economic requirements. For this, the authors have used multi-criteria decision methods to rate and select alternatives and to program their implementation. The model contemplates short-, medium- and long-term actions, which together form a Pareto-optimal alternative that secures the orderly, integral and suitable management of the basin of the Arenales River, focusing on its passage through the city of Salta.
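The entity-selection sub-model can be illustrated with a minimal weighted-sum multi-criteria sketch (our own toy example; the alternatives, criteria, weights, and scores are illustrative assumptions, not the authors' data):

```python
# Toy weighted-sum multi-criteria scoring: normalize each criterion and rank
# candidate entities by a weighted sum of criterion scores.
import numpy as np

alternatives = ["river basin authority", "provincial agency",
                "mixed public-private entity"]
criteria = ["social", "environmental", "economic"]
weights = np.array([0.4, 0.35, 0.25])  # must sum to 1

# Raw scores: rows = alternatives, columns = criteria (higher is better)
scores = np.array([[7.0, 8.0, 5.0],
                   [6.0, 6.5, 7.0],
                   [8.0, 7.0, 8.0]])

normalized = scores / scores.max(axis=0)  # simple max normalization
ranking = normalized @ weights
for name, s in sorted(zip(alternatives, ranking), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```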
Abstract:
Customer evolution and changes in consumers mean that the quality of the interface between marketing and sales may represent a true competitive advantage for the firm. Building on multidimensional theoretical and empirical models developed in Europe and on social network analysis, we study the organizational interface between the marketing and sales departments of a multinational high-growth company with operations in Argentina, Uruguay and Paraguay. Both attitudinal and social network measures of information exchange are used to operationalize the nature and quality of the interface and its impact on performance. Results show a positive relationship of formalization, joint planning, teamwork, trust and information transfer with interface quality, as well as a positive relationship between interface quality and business performance. We conclude that efficient design and organizational management of the exchange network are essential for the successful performance of consumer goods companies that seek to develop distinctive capabilities to adapt to markets undergoing rapid change.
Abstract:
A high-definition video quality metric built from full-reference ratios. Video quality assessment (VQA) is one of the major challenges still to be solved in the multimedia environment. Video quality has an enormous impact on the end user's (consumer's) perception of services based on the delivery of multimedia content, and it is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system; the most important are those that employ psychovisual models intended to reproduce the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting and comparing intrinsic image parameters. Despite the advances made in this field in recent years, research on video quality metrics, whether with the full reference available (full-reference models), with part of it (reduced-reference models), or with none at all (no-reference models), still has ample room for improvement and goals to reach. Among these, the measurement of high-definition signals, especially the very high quality signals used in the early stages of the value chain, is of particular interest because of its influence on the final quality of the service, and no reliable measurement models currently exist. This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity definition of the set of Haralick texture statistics. PARMENIA is novel in its use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. The formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical models based on the perceptibility of the error generated by the signal degradation associated with compression. PARMENIA shows a very high correlation with the MOS scores from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigor.
The methodology consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective ratings of these sequences through subjective quality tests (based on ITU-R Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective scores, quantified through the Pearson correlation coefficient. Once the ratios had been validated and their influence on the final measure optimized, confirming a high correlation with perception, a second evaluation was carried out on sequences from HDTV test dataset 1 of the Video Quality Experts Group (VQEG), and the results showed the metric's clear advantages.

Abstract:
Visual quality assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions combined with increasing quality requirements (e.g. high definition and better image quality) calls for redefined quality measurement models. Given the growing interest in multimedia services delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality, summing up the state of the art. This doctoral thesis then describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information that provides a normalized metric, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), highly correlated with MOS scores. The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and Haralick's contrast and homogeneity texture features. The metric's performance is excellent and improves the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to existing metrics. PARMENIA introduces a degree of novelty over other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that the PARMENIA approach is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it produces results with a wide dynamic range that allow measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials.
All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
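As a rough illustration (a minimal sketch under our own simplifying assumptions, not the thesis implementation), two of the PARMENIA ingredients named above, the Beucher gradient behind the Fidelity Ratio and the Haralick contrast behind the Sharpness Ratio, can be computed per frame and turned into reference-versus-degraded ratios:

```python
# Sketch: Beucher gradient and Haralick contrast on 8-bit grayscale frames,
# combined into illustrative (simplified) full-reference quality ratios.
import numpy as np
from skimage.morphology import dilation, erosion
from skimage.feature import graycomatrix, graycoprops

FOOTPRINT = np.ones((3, 3), dtype=np.uint8)

def beucher_gradient(img):
    """Morphological (Beucher) gradient: dilation minus erosion."""
    return dilation(img, FOOTPRINT).astype(int) - erosion(img, FOOTPRINT).astype(int)

def haralick_contrast(img):
    """Haralick contrast from a normalized gray-level co-occurrence matrix."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

def simple_ratios(reference, degraded):
    """reference/degraded: uint8 grayscale frames of equal shape."""
    # Illustrative fidelity ratio: agreement of edge structure (1 = identical)
    fidelity = 1.0 - np.mean(np.abs(beucher_gradient(reference)
                                    - beucher_gradient(degraded))) / 255.0
    # Illustrative sharpness ratio: similarity of texture contrast (1 = identical)
    c_ref, c_deg = haralick_contrast(reference), haralick_contrast(degraded)
    sharpness = min(c_ref, c_deg) / max(c_ref, c_deg)
    return fidelity, sharpness
```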
Abstract:
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, on the Yeast data set, and on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back-propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
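As a loose illustration only, the following sketch replaces the two hard parts of the pipeline (HITON Markov blanket discovery and directionality learning over the MBC subgraphs) with crude stand-ins: mutual-information feature selection and one naive Bayes classifier per class variable. It shows the shape of the per-class-variable, fivefold cross-validated workflow, not the MBC method itself; all data names are hypothetical.

```python
# Crude stand-in for the MBC workflow: per class variable, approximate the
# Markov blanket by feature selection, fit a classifier, and score with 5-fold CV.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB

def fit_mbc_like(X, Y, k_features=10):
    """X: (n, d) non-negative integer-coded features (e.g., PDQ-39 items);
    Y: (n, c) class variables (e.g., the five EQ-5D dimensions)."""
    models = []
    for j in range(Y.shape[1]):
        sel = SelectKBest(mutual_info_classif, k=k_features).fit(X, Y[:, j])
        Xj = sel.transform(X)
        clf = CategoricalNB().fit(Xj, Y[:, j])
        acc = cross_val_score(clf, Xj, Y[:, j], cv=5).mean()
        models.append((sel, clf, acc))  # fivefold accuracy per class variable
    return models
```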
Abstract:
Many industries use highly technological solutions to improve quality in all of their products; the steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material, subject to the assessment process. This paper promotes a strategy that considers all defects in an integrated fashion, managing the uncertainty about the exact position of a defect under different process conditions by means of Gaussian additive influence functions. The relevance of the approach lies in making consistency and reliability between surface-inspection systems possible. The results are increased confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process, helping to reduce the 40% of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology increases by 50% the accuracy of the estimate of defect survival after the cleaning facility in comparison with the former approach. The proposed technology is implemented by means of software-based multi-agent solutions, making possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
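The Gaussian additive influence idea can be sketched as follows (a minimal illustration under our own assumptions about coordinates and spreads, not the paper's implementation): each inspection system's defect report adds a Gaussian centered on its estimated position, with a spread reflecting that system's positional uncertainty, and the summed map consolidates evidence across systems.

```python
# Sketch: consolidated defect-confidence map from Gaussian additive influences.
import numpy as np

def influence_map(defects, length_m, width_m, resolution=0.01):
    """defects: list of (x, y, sigma, weight) tuples, one per defect report,
    where sigma encodes the reporting system's positional uncertainty."""
    xs = np.arange(0.0, length_m, resolution)
    ys = np.arange(0.0, width_m, resolution)
    X, Y = np.meshgrid(xs, ys)
    total = np.zeros_like(X)
    for x0, y0, sigma, w in defects:
        total += w * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))
    return total  # peaks mark positions consistently flagged across systems
```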