891 results for Process control - Statistical methods
Abstract:
This work is concerned with the development of techniques for the evaluation of large-scale highway schemes, with particular reference to the assessment of their costs and benefits in the context of the current transport planning (T.P.P.) process. It has been carried out in close cooperation with West Midlands County Council, although the approach and results are applicable elsewhere. The background to highway evaluation and its development in recent years is described, and the emergence of a number of deficiencies in current planning practice is noted. One deficiency in particular stood out: that stemming from inadequate methods of scheme generation. The research has therefore concentrated upon improving this stage of appraisal, to ensure that the subsequent stages of design, assessment and implementation are based upon a consistent and responsive foundation. Deficiencies of scheme evaluation were found to stem from inadequately developed appraisal methodologies, which suffer from difficulties of valuation, measurement and aggregation of the disparate variables that characterise highway evaluation. A failure to respond to local policy priorities was also noted. A 'problem' rather than 'goals' based approach to scheme generation was taken, as it represented the current and foreseeable resource allocation context more realistically. Techniques with potential for highway problem-based scheme generation that would work within a series of practical and theoretical constraints were reviewed, and multivariate analysis, classical factor analysis in particular, was selected because it offered considerable application to the existing difficulties of valuation, measurement and aggregation. Computer programs were written to adapt classical factor analysis to the requirements of T.P.P. highway evaluation, using it to derive a limited number of factors that described the extensive quantity of highway problem data. From this, a series of composite problem scores for 1979 was derived for a case study area of south Birmingham, based upon the factorial solutions, and used to assess highway sites in terms of local policy issues. The methodology was assessed in the light of its ability to describe highway problems in both aggregate and disaggregate terms, to guide scheme design, to coordinate with current scheme evaluation methods, and in general to improve upon current appraisal. Analysis of the results was carried out both in subjective, 'common-sense' terms and using statistical methods to assess the changes in problem definition, distribution and priorities that emerged. Overall, the technique was found to improve upon current scheme generation methods in all respects, and in particular in overcoming the problems of valuation, measurement and aggregation without recourse to unsubstantiated and questionable assumptions. A number of remaining deficiencies are outlined, and a series of research priorities is described which needs to be reviewed in the light of current and future evaluation needs.
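A minimal sketch of the factor-analysis step follows, assuming illustrative indicator data and scikit-learn rather than the bespoke programs written for the work; the indicator names and weighting scheme are assumptions made only to show the idea of composite problem scores.

```python
# Minimal sketch: deriving composite "problem scores" from several highway
# indicators with classical factor analysis (random, illustrative data).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical indicators per highway link: accidents, delay, noise, flow/capacity
X = rng.normal(size=(200, 4))

X_std = StandardScaler().fit_transform(X)            # factor analysis assumes standardised inputs
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(X_std)                      # factor scores per link
loadings = fa.components_                             # how each indicator loads on each factor

# One possible composite score per link: weight each factor by its sum of squared loadings
weights = (loadings ** 2).sum(axis=1)
composite = scores @ (weights / weights.sum())
print(composite[:5])
```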
Abstract:
The literature discusses several methods to control for self-selection effects but provides little guidance on which method to use in a setting with a limited number of variables. The authors theoretically compare and empirically assess the performance of different matching methods and instrumental variable and control function methods in this type of setting by investigating the effect of online banking on product usage. Hybrid matching in combination with the Gaussian kernel algorithm outperforms the other methods with respect to predictive validity. The empirical finding of large self-selection effects indicates the importance of controlling for these effects when assessing the effectiveness of marketing activities.
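The sketch below illustrates matching on the propensity score with a Gaussian kernel, on synthetic data; it is not the authors' exact hybrid matching procedure, and all variable names, the bandwidth and the data-generating process are assumptions.

```python
# Minimal sketch: Gaussian-kernel matching on the propensity score to estimate
# the effect of a treatment (e.g. online-banking adoption) on an outcome
# (e.g. product usage). Synthetic data for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 3))                        # observed covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0]))).astype(bool)
outcome = 0.5 * treated + x[:, 0] + rng.normal(size=n)

# Propensity scores from a logistic regression on the covariates
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

h = 0.06                                           # kernel bandwidth (tuning choice)
effects = []
for i in np.where(treated)[0]:
    d = ps[~treated] - ps[i]
    w = np.exp(-(d / h) ** 2 / 2)                  # Gaussian kernel weights on the controls
    counterfactual = np.average(outcome[~treated], weights=w)
    effects.append(outcome[i] - counterfactual)

print("ATT estimate:", np.mean(effects))
```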
Abstract:
Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods that are used to assign data with similar properties into several smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the variance between variables and groups them into a few uncorrelated groups or principal components (PCs) that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use an example of human leukocyte antigen (HLA) supertype classification to demonstrate the usage of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that these methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
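As a minimal sketch under assumed data (random descriptors and an illustrative cluster count), the same PCA-plus-hierarchical-clustering workflow can be reproduced with scikit-learn and SciPy instead of GOLPE and Sybyl:

```python
# Minimal sketch: PCA followed by hierarchical clustering, as would be applied
# to HLA binding-specificity descriptors (random data and dimensions here).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))                  # 60 HLA alleles x 10 descriptors (illustrative)

pcs = PCA(n_components=3).fit_transform(X)     # project onto orthogonal principal components
tree = linkage(pcs, method="average")          # agglomerative clustering on the PC scores
supertypes = fcluster(tree, t=4, criterion="maxclust")   # cut the tree into 4 clusters
print(supertypes)
```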
Abstract:
Signal processing is an important topic in technological research today. In the area of nonlinear dynamics, the endeavor to control or order chaos is an issue that has received increasing attention over the last few years. Increasing interest in neural networks composed of simple processing elements (neurons) has led to the widespread use of such networks to learn to control dynamic systems. This paper presents a backpropagation-based neural network architecture that can be used as a controller to stabilize unstable periodic orbits. It also presents a neural network-based method for transferring the dynamics among attractors, leading to more efficient system control. The procedure can be applied to every point of the basin, no matter how far from the attractor it is. Finally, this paper shows how two mixed chaotic signals can be controlled using a backpropagation neural network as a filter to separate and control both signals at the same time. The neural network provides more effective control, overcoming the problems that arise with feedback control methods. Control is more effective because it can be applied to the system at any point, even if it is moving away from the target state, which avoids waiting times. Control can also be applied even if there is little information about the system, and it remains stable longer even in the presence of random dynamic noise.
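By way of illustration only, the sketch below trains a small feed-forward network by backpropagation to output the parameter perturbation that stabilizes the unstable fixed point of the chaotic logistic map; the map, the OGY-style training targets and all constants are assumptions, not the system or architecture used in the paper.

```python
# Minimal sketch: a small network trained by backpropagation to supply the
# parameter perturbation that stabilizes the unstable fixed point of the
# chaotic logistic map x -> r x (1 - x). Illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

r0 = 3.9
x_star = 1 - 1 / r0                          # unstable fixed point
lam = r0 * (1 - 2 * x_star)                  # d f / d x at the fixed point (|lam| > 1)
g = x_star * (1 - x_star)                    # d f / d r at the fixed point

# Training data: deviations from the orbit and the corrective perturbation dr
dev = np.linspace(-0.05, 0.05, 200).reshape(-1, 1)
dr = (-lam * dev / g).ravel()
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(dev, dr)

# Closed-loop simulation: apply the learned perturbation only near the orbit
x = 0.3
for n in range(200):
    delta = net.predict([[x - x_star]])[0] if abs(x - x_star) < 0.05 else 0.0
    x = (r0 + delta) * x * (1 - x)
print("final state:", x, "target:", x_star)
```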
Abstract:
Chaos control is a concept that has recently been attracting more attention in the research community, concerning the fields of engineering, physics, chemistry, biology and mathematics. This paper presents a method for the simultaneous control of deterministic chaos in several nonlinear dynamical systems. Radial basis function networks (RBFNs) have been used to control chaotic trajectories at the equilibrium points. Such neural networks improve results, avoiding the problems that appear in other control methods, and are also efficient in dealing with a relatively small random dynamical noise.
Abstract:
In today’s modern manufacturing industry there is an increasing need to improve internal processes to meet diverse client needs. Process re-engineering is an important activity that is well understood by industry, but its rate of application within small to medium-sized enterprises (SMEs) is less developed. Business pressures shift the focus of SMEs toward winning new projects and contracts rather than developing long-term, sustainable manufacturing processes. Variations in manufacturing processes are inevitable, but the amount of non-conformity often exceeds acceptable levels. This paper is focused on the re-engineering of the manufacturing and verification procedure for discrete parts production, with the aim of enhancing process control and product verification. The ideologies of the ‘Push’ and ‘Pull’ approaches to manufacturing are useful in the context of process re-engineering for data improvement. Currently, information is pulled from the market and prominent customers, and manufacturing companies always try to make the right product by following customer procedures that attempt to verify against specifications. This approach can result in significant quality control challenges. The aim of this paper is to highlight the importance of process re-engineering in product verification in SMEs. Leadership, culture, ownership and process management are among the main attributes required for the successful deployment of process re-engineering. This paper presents the findings from a case study showcasing the application of a modified re-engineering method for the manufacturing and verification process. The findings from the case study indicate that there are several advantages to implementing the re-engineering method outlined in this paper.
Abstract:
It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately and to what degree of flexibility parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company’s knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacturing of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity due to the high amount of nickel in their composition. This causes a high surface temperature on the work-piece at the machining stage, which leads to deformation in the final product.

As with every process, material variations have a significant impact on machining quality. The main causes of variation can originate from chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed. This involved material preparation for microscopy analysis and a study of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
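As a minimal sketch of the SPC step (synthetic probe measurements, an assumed subgroup size of five and the standard A2 chart constant), X-bar control limits can be computed as follows:

```python
# Minimal sketch: Shewhart X-bar control limits from in-line probe measurements,
# the kind of SPC check described above (synthetic data, illustrative units).
import numpy as np

rng = np.random.default_rng(3)
subgroups = rng.normal(loc=25.00, scale=0.02, size=(30, 5))   # 30 subgroups of 5 probed parts (mm)

xbar = subgroups.mean(axis=1)             # subgroup means
rbar = np.ptp(subgroups, axis=1).mean()   # average subgroup range
grand_mean = xbar.mean()

A2 = 0.577                                # control-chart constant for subgroup size 5
ucl, lcl = grand_mean + A2 * rbar, grand_mean - A2 * rbar

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"UCL={ucl:.4f}  LCL={lcl:.4f}  out-of-control subgroups: {out_of_control}")
```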
Abstract:
2000 Mathematics Subject Classification: 62P10, 92D10, 92D30, 94A17, 62L10.
Abstract:
The development of new, health-supporting food of high quality and the optimization of food technological processes today require the application of statistical methods of experimental design. The principles and steps of statistical planning and evaluation of experiments will be explained. Using the example of the development of a gluten-free rusk (zwieback) enriched with roughage compounds, the application of a simplex-centroid mixture design will be shown. The results will be illustrated by different graphics.
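A minimal sketch of how the points of a simplex-centroid mixture design are generated; the three component names are illustrative assumptions, since the actual recipe factors are not named in the abstract.

```python
# Minimal sketch: a three-component simplex-centroid mixture design, i.e. all
# single components, all equal binary blends and the ternary centroid.
from itertools import combinations

components = ["flour blend", "fibre A", "fibre B"]   # illustrative names
k = len(components)

design = []
for r in range(1, k + 1):                            # all non-empty subsets of the components
    for subset in combinations(range(k), r):
        point = [1 / r if i in subset else 0.0 for i in range(k)]
        design.append(point)

for point in design:                                 # 2**k - 1 = 7 design points
    print({c: round(p, 3) for c, p in zip(components, point)})
```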
Abstract:
2000 Mathematics Subject Classification: 62P10, 92C20
Abstract:
Industrial activities such as mining, electroplating and the oil extraction process are increasing the levels of heavy metals such as Cu, Fe, Mg and Cd in aquatic ecosystems. This increase is related to the discharge of effluents containing traces of these elements above the maximum allowed by law. Methods such as ion exchange, membrane filtration and chemical precipitation have been studied as means of treating this metal contamination. The precipitation of metals using anionic surfactants derived from carboxylic acids has emerged as an alternative for the removal of metals from industrial effluents. The reaction between bivalent ions and these types of surfactants in aqueous solution leads to the formation of metal carboxylates, which can precipitate in the form of flakes and are subsequently removed by decantation or simple filtration. In this work the metal extraction is performed using the surfactant sodium hexadecanoate as the extracting agent. The main purpose was to study the effect of temperature, solution pH and surfactant concentration on the metal removal process. The statistical design of the process showed that it depends directly on changes in pH and surfactant concentration, but inversely and only weakly on temperature variation, with the latter effect being considered negligible in most cases. The individual study of the effect of temperature showed a strong dependence of the process on the Krafft point, both for the surfactant used as the extracting agent and for the surfactant obtained after its reaction with the metal. From the temperature and surfactant concentration data it was possible to calculate the equilibrium constant for the reaction between sodium hexadecanoate and copper ions. Thermodynamic parameters were then determined, showing that the process is exothermic and spontaneous.
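The last step can be illustrated with a van 't Hoff fit; the equilibrium constants and temperatures below are made-up values used only to show the calculation, not data from the study.

```python
# Minimal sketch: thermodynamic parameters of the precipitation reaction from
# equilibrium constants at several temperatures, via ln K = -dH/(R T) + dS/R.
import numpy as np

R = 8.314                                        # J / (mol K)
T = np.array([293.15, 303.15, 313.15, 323.15])   # temperatures, K
K = np.array([5.0e4, 3.1e4, 2.0e4, 1.4e4])       # hypothetical equilibrium constants

slope, intercept = np.polyfit(1 / T, np.log(K), 1)
dH = -slope * R                                  # enthalpy change, J/mol (negative => exothermic)
dS = intercept * R                               # entropy change, J/(mol K)
dG = dH - T * dS                                 # Gibbs energy at each T (negative => spontaneous)

print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
print("dG (kJ/mol):", np.round(dG / 1000, 1))
```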
Abstract:
In 2004, the National Institutes of Health made available the Patient-Reported Outcomes Measurement Information System – PROMIS®, which is constituted of innovative item banks for health assessment. It is based on classical, reliable Patient-Reported Outcomes (PROs) and includes advanced statistical methods, such as Item Response Theory and Computerized Adaptive Testing. One of the PROMIS® Domain Frameworks is Physical Function, whose item bank needs to be translated and culturally adapted so it can be used in Portuguese-speaking countries. This work aimed to translate and culturally adapt the PROMIS® Physical Function item bank into Portuguese. The FACIT (Functional Assessment of Chronic Illness Therapy) translation methodology, which consists of eight stages of translation and cultural adaptation, was used. Fifty subjects above the age of 18 years participated in the pre-test (seventh stage). The questionnaire was answered by the participants (self-reported questionnaires) using a think-aloud protocol and cognitive and retrospective interviews. In the FACIT methodology, adaptations can be made from the beginning of the translation and cultural adaptation process, ensuring the semantic, conceptual, cultural and operational equivalence of the Physical Function Domain. During the pre-test, 24% of the subjects had difficulties understanding the items and 22% of the subjects suggested changes to improve understanding. The terms and concepts of the items were totally understood (100%) in 87% of the items. Only four items had less than 80% understanding; for this reason, it was necessary to change them so that they corresponded to the original items and were understood by the subjects after retesting. The process of translation and cultural adaptation of the PROMIS® Physical Function item bank into Portuguese was successful. This version of the assessment tool must have its psychometric properties validated before being made available for clinical use.
Abstract:
The modelling of industrial systems offers organisations a strategic advantage in the study of their production processes. Through modelling it is possible to increase knowledge about these systems, allowing, where possible, improvements in production management and planning. This knowledge may also allow an increase in the efficiency of production processes, through the improvement or elimination of the main losses detected in the process. The main objective of this work is the development and validation of a modelling, forecasting and analysis tool for industrial production systems, with a view to increasing knowledge about them. For the execution and development of this work, several tools, concepts, methodologies and theoretical foundations known from the literature were used and developed, such as OEE (Overall Equipment Effectiveness), Petri nets (RdP), time series, k-means and SPC (Statistical Process Control). The modelling, forecasting and analysis tool developed and described in this work proved capable of assisting in the detection and interpretation of the causes that influence the results of the production system and give rise to losses, demonstrating the expected advantages. These results were based on real data from a production system.
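A minimal sketch of the OEE calculation used as one of the tool's foundations; the shift figures below are illustrative assumptions, not data from the production system studied in the work.

```python
# Minimal sketch: OEE (Overall Equipment Effectiveness) for one shift, computed
# as availability x performance x quality from illustrative figures.
planned_time_min = 480          # planned production time for the shift
downtime_min = 47               # unplanned stops
ideal_cycle_time_s = 1.0        # ideal seconds per part
total_parts = 19_271
defective_parts = 423

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_s * total_parts) / (run_time_min * 60)
quality = (total_parts - defective_parts) / total_parts

oee = availability * performance * quality
print(f"Availability={availability:.2%}  Performance={performance:.2%}  "
      f"Quality={quality:.2%}  OEE={oee:.2%}")
```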
Abstract:
Current state-of-the-art techniques for landmine detection in ground-penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR-specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
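A minimal sketch of a convolutional feature extractor for 2-D GPR patches, in the spirit of the learned features described above; the layer sizes, patch dimensions and class count are illustrative assumptions, not the dissertation's architecture.

```python
# Minimal sketch: a small CNN that learns features from 2-D GPR patches and
# classifies them as mine vs. background (random input for shape checking).
import torch
import torch.nn as nn

class GprCnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)   # mine vs. background

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

patch = torch.randn(4, 1, 64, 64)       # a batch of depth-by-scan GPR patches
logits = GprCnn()(patch)
print(logits.shape)                      # torch.Size([4, 2])
```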
Abstract:
Under defined laboratory and field conditions, the investigation of water percolating through soil columns (podsol, lessive and peat) down to the groundwater table shows that the main factors which control the chemical characteristics of the percolates are: precipitation, evaporation, infiltration rate, soil type, depth and dissolved organic substances. Evaporation and percolation velocity influence the Na+, SO4(2-) and Cl- concentrations. Low percolation velocity also leads to longer percolation times and water logging in less permeable strata, which results in lower Eh values and higher CO2 concentrations due to low gas exchange with the atmosphere. The Ca2+ and Mg2+ carbonate concentration depends on soil type and depth. Metamorphism and decomposition of organic substances involve NO3- reduction and K+, Mg2+, SO4(2-), CO2 and Fe2+/3+ transport. The analytical data were evaluated with multivariate statistical methods.