943 results for Manufacturing processes parameters
Abstract:
Significant advances have been made in the last decade to quantify the process of wet granulation. The attributes of product granules from the granulation process are controlled by a combination of three groups of processes occurring in the granulator: (1) wetting and nucleation, (2) growth and consolidation and (3) breakage and attrition. For the first two of these processes, the key controlling dimensionless groups are defined and regime maps are presented and validated with data from tumbling and mixer granulators. Granulation is an example of particle design. For quantitative analysis, both careful characterisation of the feed formulation and knowledge of operating parameters are required. A key thesis of this paper is that the design, scale-up and operation of granulation processes can now be considered as quantitative engineering rather than a black art.
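For a concrete sense of the dimensionless groups this literature works with, the sketch below evaluates the dimensionless spray flux Psi_a = 3*V_dot / (2*A_dot*d_d) used in wetting/nucleation regime maps; all numerical values are hypothetical, not taken from the paper.

```python
# Illustrative only: dimensionless spray flux from the wetting/nucleation
# regime-map literature, Psi_a = 3*V_dot / (2*A_dot*d_d). The numbers
# below are hypothetical.

def spray_flux(v_dot, a_dot, d_d):
    """Dimensionless spray flux Psi_a.

    v_dot : volumetric spray rate of binder liquid (m^3/s)
    a_dot : area flux of powder traversing the spray zone (m^2/s)
    d_d   : volume-mean droplet diameter (m)
    """
    return 3.0 * v_dot / (2.0 * a_dot * d_d)

psi_a = spray_flux(v_dot=1.0e-6, a_dot=0.05, d_d=100e-6)
# Psi_a << 1 indicates drop-controlled nucleation on the regime map;
# Psi_a >> 1 indicates the mechanical-dispersion regime.
print(f"Psi_a = {psi_a:.2f}")
```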
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in Section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
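As a minimal sketch of the function-space view described above, the following computes the exact GP regression posterior with a squared-exponential covariance; the kernel choice, hyperparameters and data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Minimal GP regression sketch: squared-exponential covariance and noisy
# observations; all hyperparameters below are assumed for illustration.

def sqexp(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(x, x') = sf^2 exp(-(x-x')^2 / (2 ell^2))."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)                      # training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(20)   # noisy targets
Xs = np.linspace(-3, 3, 100)                    # test inputs

sn2 = 0.01                                      # noise variance (assumed)
K = sqexp(X, X) + sn2 * np.eye(len(X))
Ks = sqexp(Xs, X)

alpha = np.linalg.solve(K, y)
mu = Ks @ alpha                                 # posterior mean
v = np.linalg.solve(K, Ks.T)
var = sqexp(Xs, Xs).diagonal() - np.einsum('ij,ji->i', Ks, v)  # posterior variance
```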
Abstract:
We consider the problem of assigning an input vector x to one of m classes by predicting P(c|x) for c = 1, ..., m. For a two-class problem, the probability of class 1 given x is estimated by s(y(x)), where s(y) = 1/(1 + e^{-y}). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multi-class problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.
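A compact sketch of the Laplace step the abstract refers to, for the two-class case: Newton iteration for the mode of the posterior over y under the logistic link. The kernel and the toy data are placeholders, not from the paper.

```python
import numpy as np

# Laplace approximation for binary GP classification (sketch).
# Labels t in {0,1}, logistic link s(y) = 1/(1 + exp(-y)).

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def laplace_mode(K, t, n_iter=20):
    """Newton iteration for the mode of p(f | t) with a GP prior N(0, K)."""
    n = len(t)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = sigmoid(f)
        W = np.diag(pi * (1.0 - pi))     # negative Hessian of log-likelihood
        grad = t - pi                     # gradient of log-likelihood
        # Newton step f_new = (K^{-1} + W)^{-1} (W f + grad),
        # rewritten as K (I + W K)^{-1} (...) to avoid inverting K.
        f = K @ np.linalg.solve(np.eye(n) + W @ K, W @ f + grad)
    return f

# Toy usage with an RBF kernel on 1-D inputs (placeholder data)
X = np.linspace(-2, 2, 10)
t = (X > 0).astype(float)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
f_hat = laplace_mode(K + 1e-8 * np.eye(10), t)
```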
Abstract:
We develop an approach for sparse representations of Gaussian Process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian online algorithm with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the GP model. By using an appealing parametrisation and projection techniques that use the RKHS norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for the propagation both of predictions and of Bayesian error measures. The significance and robustness of our approach are demonstrated on a variety of experiments.
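One way to see the payoff of representing the GP by a subsample: prediction cost drops from O(n^3) to O(m^3) for a subset of size m. The sketch below uses a random subset purely for illustration; the paper's subsample is constructed sequentially by the online algorithm.

```python
import numpy as np

# Sparse GP prediction from a small subset of the data (sketch).
# The random selection here is purely illustrative.

def rbf(X1, X2, ell=1.0):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 500)
y = np.sin(X) + 0.1 * rng.standard_normal(500)

m = 25                                        # size of the representing subset
idx = rng.choice(len(X), m, replace=False)    # illustrative selection only
Xb, yb = X[idx], y[idx]

sn2 = 0.01                                    # assumed noise variance
Kbb = rbf(Xb, Xb) + sn2 * np.eye(m)
Xs = np.linspace(-3, 3, 200)
mu = rbf(Xs, Xb) @ np.linalg.solve(Kbb, yb)   # O(m^3) instead of O(n^3)
```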
Abstract:
In recent years there has been increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density-estimation problem.
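The geometry behind the BV selection can be sketched directly: the squared RKHS residual of projecting a new point's feature vector onto the span of the current BV set, gamma = k(x,x) - k_b(x)^T K_b^{-1} k_b(x). A small gamma means the point is nearly representable by the existing set and can be left out. A minimal sketch, with an assumed RBF kernel:

```python
import numpy as np

# Novelty score for BV-set selection (sketch): the squared RKHS residual
# of projecting the new point's feature onto the span of the current
# basis vectors. gamma ~ 0 means the point adds almost no new information.

def rbf(X1, X2, ell=1.0):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

def novelty(x_new, X_bv, ell=1.0, jitter=1e-10):
    """gamma = k(x,x) - k_b(x)^T K_b^{-1} k_b(x)."""
    Kb = rbf(X_bv, X_bv, ell) + jitter * np.eye(len(X_bv))
    kb = rbf(np.atleast_1d(x_new), X_bv, ell).ravel()
    return 1.0 - kb @ np.linalg.solve(Kb, kb)   # k(x,x) = 1 for this RBF

X_bv = np.array([-1.0, 0.0, 1.0])
print(novelty(0.05, X_bv))   # near an existing BV: gamma is small
print(novelty(3.0,  X_bv))   # far from the BV set: gamma is near 1
```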
Abstract:
This paper examines the extent to which foreign entry and exit in the UK are related to domestic industry characteristics. The units of analysis are firm numbers, and thus entry and exit at the industry level are treated as being generated by Poisson processes. The analysis therefore uses quasi-maximum likelihood estimation to estimate entry and exit functions simultaneously. The results demonstrate that foreign entry is attracted by industry-level profitability and performance, but that firm-specific 'ownership' advantages are also important. The results also demonstrate that inward investors who are motivated by the desire to exploit firm-specific assets are unlikely to be more transient than domestic firms. This, however, cannot be said of those foreign entrants who are attracted to the UK by location advantage or investment incentives.
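To make the econometric setup concrete, the sketch below fits a Poisson count model with a log link by Newton/IRLS on synthetic data; the covariate is a placeholder, not one of the paper's industry characteristics.

```python
import numpy as np

# Poisson regression by Newton/IRLS (sketch): entry counts modelled as
# Poisson with log link, lambda_i = exp(x_i' beta). Synthetic data only.

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + placeholder covariate
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))                     # simulated entry counts

beta = np.zeros(2)
for _ in range(25):                                        # Newton/IRLS steps
    lam = np.exp(X @ beta)
    grad = X.T @ (y - lam)                                 # score
    H = X.T @ (lam[:, None] * X)                           # Fisher information
    beta = beta + np.linalg.solve(H, grad)

print(beta)   # should be close to beta_true
```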
Abstract:
In order to survive in the increasingly customer-oriented marketplace, continuous quality improvement marks the fastest-growing quality organization's success. In recent years, attention has been focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within the specified process rather than to focus on cooperation within the production workflow. This paper proposes an intelligent system with a newly designed algorithm and a universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature to provide all levels of employees with the ability to understand the relationships between processes, especially when any aspect of the process is going to degrade or fail. An example using generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
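In miniature, the rule-mining ingredient reduces to computing support and confidence for candidate rules linking process conditions to quality problems. The toy sketch below uses crisp (non-fuzzy) made-up events and does not reproduce the paper's distributed, generalized fuzzy algorithm:

```python
# Toy association-rule check (sketch): support and confidence of the rule
# "high_temperature -> defect" over a batch log. Events are made up.

batches = [
    {"high_temperature", "defect"},
    {"high_temperature", "defect"},
    {"high_temperature"},
    {"low_pressure", "defect"},
    {"low_pressure"},
]

antecedent, consequent = {"high_temperature"}, {"defect"}
n_ante = sum(antecedent <= b for b in batches)             # batches matching A
n_both = sum((antecedent | consequent) <= b for b in batches)  # matching A and C

support = n_both / len(batches)        # P(A and C)
confidence = n_both / n_ante           # P(C | A)
print(f"support={support:.2f} confidence={confidence:.2f}")
```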
Abstract:
The study sought to understand the components of knowledge management strategy from the perspective of staff in UK manufacturing organizations. To analyse this topic, we took an empirical approach and collaborated with two manufacturing organizations. Our main finding centres on the key components of a knowledge management strategy and its relationships with manufacturing strategy and corporate strategy. Other findings include: the nature of knowledge in manufacturing organizations; the relevance of (in)formal processes; top-down and bottom-up communication; and taking ownership of information processes. We also comment on the development of action plans for better knowledge management. The implication is that an integrated approach to knowledge management strategy in manufacturing organizations requires involvement across the organization and at all levels.
Abstract:
This paper argues that it is possible to identify factors which predispose organizations to adopt effective learning strategies and processes. It is hypothesized that effective organizational learning (OL) is associated with: profitability, environmental uncertainty, structure, approach to HRM, and quality orientation. The study focuses on forty-four manufacturing organizations and draws on longitudinal data gathered through interviews. The findings suggest that two of these variables, approach to HRM and quality orientation, are particularly strongly correlated with measures of OL. It is concluded that effective learning mechanisms, with the potential to improve the quality of OL processes, are more likely to be established in businesses where HRM and quality initiatives are well established.
Abstract:
This study investigates the relationship between aggregate job satisfaction and organizational innovation. Data were gathered from 3717 employees in 28 UK manufacturing organizations about their job satisfaction and aggregated to the organizational level. Data on innovation in technology/processes were gathered from multiple respondents in the same organizations 24 months later. The results revealed that aggregate job satisfaction was a significant predictor of subsequent organizational innovation, even after controlling for prior organizational innovation and profitability. Moreover, the data indicated that the relationship between aggregate job satisfaction and innovation in production technology/processes was moderated by two factors: job variety and a commitment to "single status". Unlike previous studies, we conceptualize job satisfaction at the aggregate rather than the individual level and examine innovation rather than creativity. We propose that where the majority of employees experience job satisfaction, they will endorse rather than resist innovation and work collaboratively to implement as well as generate creative ideas.
Abstract:
The aim of this investigation was to study the chemical reactions occurring during the batchwise production of a butylated melamine-formaldehyde resin, in order to optimise the efficiency and economics of the batch process. The batch process models are largely empirical in nature, as the reaction mechanism is unknown. The process chemistry and the commercial manufacturing method are described. A small-scale system was established in glass, simulating the full-scale plant, and the ability to produce laboratory resins of the required quality was demonstrated. During further experiments the chemical reactions of methylolation, condensation and butylation were studied. The important process stages were identified and studied separately. The effects of variation of certain process parameters on the chemical reactions were also studied. A published model of methylolation was modified and used to simulate the methylolation stage. A major result of this project was the development of an indirect method for studying the condensation and butylation reactions occurring during the dehydration and acid reaction stages, as direct quantitative methods were not available. A mass balance method was devised for this purpose and used to collect experimental data. The reaction scheme was verified using these data. The reaction stages were simulated using an empirical model. This has revealed new information regarding the mechanism and kinetics of the reactions. Laboratory results were shown to be comparable with plant-scale results. This work has improved understanding of the batch process, which can be used to improve product consistency. Future work has been identified and recommended to produce an optimum process and plant design to reduce the batch time.
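The indirect mass-balance idea can be illustrated very simply: assuming each condensation link releases one mole of water, the water collected overhead during dehydration bounds the extent of condensation. A hypothetical back-calculation, with made-up numbers that are not the thesis's data:

```python
# Indirect mass-balance sketch for the dehydration/acid stages: assuming
# one mole of water is released per condensation link formed, the water
# mass collected overhead gives the number of links. All figures are
# hypothetical, not data from the thesis.

M_WATER = 18.02  # g/mol

def links_from_water(water_collected_g):
    """Moles of condensation links inferred from distillate water mass."""
    return water_collected_g / M_WATER

def condensation_extent(water_collected_g, max_links_mol):
    """Fraction of the stoichiometrically possible links actually formed."""
    return links_from_water(water_collected_g) / max_links_mol

# Hypothetical batch: 54 g of water distilled, 6 mol of links possible
print(f"links  = {links_from_water(54.0):.2f} mol")      # ~3.00
print(f"extent = {condensation_extent(54.0, 6.0):.2f}")  # ~0.50
```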