894 results for gaussian mixture model
Abstract:
Silicone elastomer systems have previously been shown to offer potential for the sustained release of protein therapeutics. However, the general requirement for the incorporation of large amounts of release enhancing solid excipients to achieve therapeutically effective release rates from these otherwise hydrophobic polymer systems can detrimentally affect the viscosity of the precure silicone elastomer mixture and its curing characteristics. The increase in viscosity necessitates the use of higher operating pressures in manufacture, resulting in higher shear stresses that are often detrimental to the structural integrity of the incorporated protein. The addition of liquid silicones increases the initial tan delta value and the tan delta values in the early stages of curing by increasing the liquid character (G'') of the silicone elastomer system and reducing its elastic character (G'), thereby reducing the shear stress placed on the formulation during manufacture and minimizing the potential for protein degradation. However, SEM analysis has demonstrated that if the liquid character of the silicone elastomer is too high, the formulation will be unable to fill the mold during manufacture. This study demonstrates that incorporation of liquid hydroxy-terminated polydimethylsiloxanes into addition-cure silicone elastomer-covered rod formulations can both effectively lower the viscosity of the precured silicone elastomer and enhance the release rate of the model therapeutic protein bovine serum albumin. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2011
Abstract:
It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them with a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require a large amount of computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half of the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
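A minimal numpy sketch of the general recipe described above: an ELM-style hidden layer of randomly parameterised Gaussian nodes combined with a leave-one-out error computed from the hat matrix (the PRESS statistic). The synthetic data, the candidate model sizes and the selection loop are illustrative assumptions, not the paper's two-stage stepwise procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(200)

def elm_features(X, centres, widths):
    """Randomly parameterised Gaussian hidden nodes (the 'ELM step')."""
    d2 = np.sum((X[:, None, :] - centres[None]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def loo_mse(Phi, y):
    """PRESS-based leave-one-out MSE for a linear-in-the-parameters model."""
    H = Phi @ np.linalg.pinv(Phi)                   # hat matrix
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

best = None
for n_hidden in (5, 10, 20, 40, 80):                # candidate model sizes
    centres = rng.uniform(-3, 3, size=(n_hidden, X.shape[1]))
    widths = rng.uniform(0.2, 2.0, size=n_hidden)
    score = loo_mse(elm_features(X, centres, widths), y)
    if best is None or score < best[0]:
        best = (score, n_hidden, centres, widths)

score, n_hidden, centres, widths = best
beta = np.linalg.pinv(elm_features(X, centres, widths)) @ y   # output weights
print(f"selected {n_hidden} Gaussian nodes, LOO MSE = {score:.4f}")
```

The LOO score needs no retraining: for a linear-in-the-parameters model, the leave-one-out residuals follow directly from the ordinary residuals and the diagonal of the hat matrix.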
Abstract:
In this paper we investigate the influence of a power-law noise model on the performance of a feed-forward neural network used to predict time series. We introduce an optimization procedure that optimizes the parameters of the neural network by maximizing the likelihood function based on the power-law noise model. We show that our optimization procedure minimizes the mean squared error, leading to an optimal prediction. Further, we present numerical results obtained by applying the method to time series from the logistic map and the annual number of sunspots, and demonstrate that a power-law noise model gives better results than a Gaussian noise model.
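A small sketch of the likelihood-based fitting idea, assuming a heavy-tailed power-law residual density of the form p(e) ∝ (1 + (e/s)^2)^(-beta) as a stand-in for the paper's noise model, a quadratic one-step-ahead predictor for logistic-map data, and a fixed tail exponent; none of these choices are taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.empty(500)
x[0] = 0.4
for t in range(499):                                # noisy logistic map, clipped to [0, 1]
    x[t + 1] = np.clip(3.9 * x[t] * (1 - x[t]) + 0.01 * rng.standard_normal(), 0.0, 1.0)

X, y = x[:-1], x[1:]                                # one-step-ahead prediction pairs

def neg_log_lik(theta, beta=2.0):
    """Negative log-likelihood under p(e) ∝ (1 + (e/s)^2)^(-beta), up to a
    constant that depends only on the fixed tail exponent beta."""
    a, b, log_s = theta
    e = y - (a * X * (1 - X) + b)                   # residuals of a quadratic predictor
    return np.sum(log_s + beta * np.log1p((e / np.exp(log_s)) ** 2))

res = minimize(neg_log_lik, x0=np.array([3.5, 0.0, np.log(0.05)]), method="Nelder-Mead")
print("fitted predictor parameters (a, b):", res.x[:2])
```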
Abstract:
The ammonia oxidation reaction on a supported polycrystalline platinum catalyst was investigated in an aluminum-based microreactor. An extensive set of reactions was included in the chemical reactor modeling to facilitate the construction of a kinetic model capable of satisfactory predictions for a wide range of conditions (NH3 partial pressure, 0.01-0.12 atm; O2 partial pressure, 0.10-0.88 atm; temperature, 523-673 K; contact time, 0.3-0.7 ms). The elementary surface reactions used in developing the mechanism were chosen based on the literature data concerning ammonia oxidation on a Pt catalyst. Parameter estimates for the kinetic model were obtained by multi-response least squares regression analysis using the isothermal plug-flow reactor approximation. To evaluate the model, the behavior of a microstructured reactor was simulated by means of a complete Navier-Stokes model accounting for the reactions on the catalyst surface and the effect of temperature on the physico-chemical properties of the reacting mixture. In this way, the effect of the catalytic wall temperature non-uniformity and the effect of a boundary layer on the ammonia conversion and selectivity were examined. After further optimization of appropriate kinetic parameters, the calculated selectivities and product yields agree very well with the values actually measured in the microreactor. (C) 2002 Elsevier Science B.V. All rights reserved.
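For the regression step only, a hedged sketch of fitting Arrhenius parameters under the isothermal plug-flow approximation; the single lumped first-order rate law and the synthetic conversion "measurements" are placeholders, not the paper's elementary-step mechanism or experimental data.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314                                           # gas constant, J/(mol K)
T = np.array([523.0, 573.0, 623.0, 673.0])          # wall temperatures, K
tau = np.array([0.3e-3, 0.5e-3, 0.7e-3, 0.7e-3])    # contact times, s

def conversion(params, T, tau):
    """NH3 conversion in an isothermal PFR with a lumped first-order rate."""
    logA, Ea_kJ = params                            # ln(pre-exponential), activation energy (kJ/mol)
    k = np.exp(logA - Ea_kJ * 1e3 / (R * T))
    return 1.0 - np.exp(-k * tau)

rng = np.random.default_rng(2)
X_meas = conversion((np.log(5e8), 60.0), T, tau) + 0.01 * rng.standard_normal(T.size)

fit = least_squares(lambda p: conversion(p, T, tau) - X_meas, x0=(np.log(1e8), 50.0))
print("fitted ln(A), Ea [kJ/mol]:", fit.x)
```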
Abstract:
In this paper, we show how interacting and occluding targets can be tackled successfully within a Gaussian approximation. For that purpose, we develop a general expansion of the mean and covariance of the posterior and consider a first-order approximation of it. The proposed method differs from the EKF in that neither a non-linear dynamical model nor a non-linear measurement-to-state relation has to be defined, so it works with any kind of interaction potential and likelihood. The approach has been tested on three sequences (10400, 2500, and 400 frames, respectively). The results show that our approach helps to reduce the number of failures without unduly increasing the computation time with respect to methods that do not take target interactions into account.
Abstract:
The Finite Difference Time Domain (FDTD) method is becoming increasingly popular for room acoustics simulation. Yet, the literature on grid excitation methods is relatively sparse, and source functions are traditionally implemented in a hard or additive form using arbitrarily-shaped functions which do not necessarily obey the physical laws of sound generation. In this paper we formulate a source function based on a small pulsating sphere model. A physically plausible method to inject a source signal into the grid is derived from first principles, resulting in a source with a near-flat spectrum that does not scatter incoming waves. In the final discrete-time formulation, the source signal is the result of passing a Gaussian pulse through a digital filter simulating the dynamics of the pulsating sphere, hence facilitating a physically correct means to design source functions that generate a prescribed sound field.
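A rough sketch of that final recipe: a Gaussian pulse shaped by a discrete-time filter before injection into the grid. The differentiator-plus-one-pole filter below merely stands in for the filter derived from the pulsating-sphere model in the paper, and the sample rate and pulse parameters are arbitrary.

```python
import numpy as np
from scipy.signal import lfilter

fs = 44100.0                                        # grid sample rate (assumed)
t = np.arange(0, 0.01, 1.0 / fs)
t0, sigma = 0.003, 0.0004
gauss = np.exp(-0.5 * ((t - t0) / sigma) ** 2)      # Gaussian pulse

# differentiator (removes DC) followed by a one-pole low-pass as a crude
# stand-in for the pulsating-sphere dynamics
diff_b, diff_a = [1.0, -1.0], [1.0]
alpha = 0.95
lp_b, lp_a = [1.0 - alpha], [1.0, -alpha]

source = lfilter(lp_b, lp_a, lfilter(diff_b, diff_a, gauss))
# source[n] would be added at the source node inside the FDTD update loop
print(source.shape, float(np.max(np.abs(source))))
```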
Abstract:
We propose the inverse Gaussian distribution, as a less complex alternative to the classical log-normal model, to describe turbulence-induced fading in free-space optical (FSO) systems operating in weak turbulence conditions and/or in the presence of aperture averaging effects. By conducting goodness of fit tests, we define the range of values of the scintillation index for various multiple-input multiple-output (MIMO) FSO configurations, where the two distributions approximate each other with a certain significance level. Furthermore, the bit error rate performance of two typical MIMO FSO systems is investigated over the new turbulence model; an intensity-modulation/direct detection MIMO FSO system with Q-ary pulse position modulation that employs repetition coding at the transmitter and equal gain combining at the receiver, and a heterodyne MIMO FSO system with differential phase-shift keying and maximal ratio combining at the receiver. Finally, numerical results are presented that validate the theoretical analysis and provide useful insights into the implications of the model parameters on the overall system performance. © 2011 IEEE.
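A compact sketch of the goodness-of-fit comparison: fit both candidate distributions to irradiance samples and compare Kolmogorov-Smirnov statistics. The samples here are drawn from a unit-mean log-normal with an assumed scintillation index purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
si = 0.2                                            # scintillation index (assumed)
sigma2 = np.log(1.0 + si)                           # log-irradiance variance for a unit-mean log-normal
samples = rng.lognormal(mean=-sigma2 / 2.0, sigma=np.sqrt(sigma2), size=5000)

ig_params = stats.invgauss.fit(samples, floc=0)
ln_params = stats.lognorm.fit(samples, floc=0)

print("inverse Gaussian KS:", stats.kstest(samples, "invgauss", args=ig_params).statistic)
print("log-normal KS:     ", stats.kstest(samples, "lognorm", args=ln_params).statistic)
```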
Abstract:
This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.
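A compact sketch of the multi-output, linear-in-the-parameters setting: randomly parameterised Gaussian basis functions shared across outputs (the ELM step), with output weights obtained by ridge-regularised least squares. The two-stage term selection and replacement of MTLRMC is not reproduced; the single global ridge penalty is a stand-in for the locally regularised construction.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(300, 2))
Y = np.column_stack([np.sin(3 * X[:, 0]), X[:, 0] * X[:, 1]])   # two outputs

centres = X[rng.choice(len(X), 30, replace=False)]              # random regressor centres
width = 0.5                                                     # Gaussian width fixed by the "ELM step"
Phi = np.exp(-np.sum((X[:, None, :] - centres[None]) ** 2, axis=2) / (2 * width ** 2))

lam = 1e-3                                                      # ridge penalty (stand-in for local regularisation)
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)
print("output-weight matrix shape:", W.shape)                   # (n_terms, n_outputs)
```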
Abstract:
Plasma etch is a key process in modern semiconductor manufacturing facilities as it offers process simplification and yet greater dimensional tolerances compared to wet chemical etch technology. The main challenge of operating plasma etchers is to maintain a consistent etch rate spatially and temporally for a given wafer and for successive wafers processed in the same etch tool. Etch rate measurements require expensive metrology steps and therefore in general only limited sampling is performed. Furthermore, the results of measurements are not accessible in real-time, limiting the options for run-to-run control. This paper investigates a Virtual Metrology (VM) enabled Dynamic Sampling (DS) methodology as an alternative paradigm for balancing the need to reduce costly metrology with the need to measure more frequently and in a timely fashion to enable wafer-to-wafer control. Using a Gaussian Process Regression (GPR) VM model for etch rate estimation of a plasma etch process, the proposed dynamic sampling methodology is demonstrated and evaluated for a number of different predictive dynamic sampling rules. © 2013 IEEE.
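An illustrative sketch of the VM-plus-dynamic-sampling idea using scikit-learn: a GPR model maps process variables to etch rate, and a wafer is flagged for physical metrology only when the predictive uncertainty exceeds a threshold. The feature set, kernel and threshold rule are assumptions; the paper evaluates several different predictive sampling rules.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
X_train = rng.normal(size=(60, 4))                  # e.g. RF power, pressure, gas flows (assumed)
y_train = X_train @ np.array([1.0, 0.5, -0.3, 0.2]) + 0.05 * rng.normal(size=60)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

X_new = rng.normal(size=(10, 4))                    # incoming wafers
mean, std = gpr.predict(X_new, return_std=True)     # VM estimate and its uncertainty
measure = std > 0.1                                 # dynamic-sampling rule (assumed threshold)
print("wafers flagged for physical metrology:", np.where(measure)[0])
```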
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and badly scaled with respect to the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, that follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state of the art approximations that rely on sparse kernel matrices.
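A sketch of the flavour of identity involved, restricted to the noise-variance hyperparameter: after a single O(N^3) eigendecomposition of the kernel matrix, the negative log marginal likelihood can be evaluated in O(N) for any noise level, since log|K + sI| = sum_i log(lambda_i + s) and y^T (K + sI)^{-1} y = sum_i a_i^2/(lambda_i + s) with a = V^T y. The paper's identities extend this idea to the full score, Jacobian and Hessian for the remaining hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(400)

K = np.exp(-0.5 * (X - X.T) ** 2)                   # fixed unit-lengthscale RBF kernel on 1-D inputs
lam, V = np.linalg.eigh(K)                          # one-off O(N^3) overhead
a = V.T @ y

def neg_log_marginal(sigma2):
    """O(N) evaluation of the GP negative log marginal likelihood in sigma2."""
    d = lam + sigma2
    return 0.5 * (np.sum(a ** 2 / d) + np.sum(np.log(d)) + len(y) * np.log(2 * np.pi))

for s2 in (1e-3, 1e-2, 1e-1):
    print(s2, neg_log_marginal(s2))
```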
Abstract:
One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or slowly varying background, it fails to handle any fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes neighbouring pixels into consideration while generating the model of the observed scene. The model equations are derived from Expectation Maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that it provides a performance improvement over the traditional single-pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
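For reference, a minimal OpenCV sketch of the classical per-pixel MoG baseline that RMoG generalises; the video path and parameter values are placeholders, and the region-based update itself is not part of OpenCV.

```python
import cv2

cap = cv2.VideoCapture("dynamic_background.avi")    # hypothetical input clip
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame, learningRate=0.005)  # per-pixel MoG update
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```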
Abstract:
Cascade control is one of the routinely used control strategies in industrial processes because it can dramatically improve the performance of single-loop control, reducing both the maximum deviation and the integral error of the disturbance response. Currently, many control performance assessment methods for cascade control loops are developed on the assumption that all disturbances follow a Gaussian distribution. In practice, however, disturbance sources acting on the manipulated variable or entering upstream often exhibit nonlinear behaviors. In this paper, a general and effective index for the performance assessment of cascade control systems subject to disturbances of unknown distribution is proposed. As in minimum variance control (MVC) design, the output variances of the primary and secondary loops are decomposed into a cascade-invariant and a cascade-dependent term, but the ARMA model for the cascade control loop is estimated based on minimum entropy, instead of the minimum mean squared error, to handle non-Gaussian disturbances. Unlike the MVC index, the proposed control performance index is based on information theory and the minimum entropy criterion. The index is informative and in agreement with the expected control knowledge. To illustrate the wide applicability and effectiveness of the minimum entropy cascade control index, a simulation problem and a cascade control case from an oil refinery are presented. A comparison with MVC-based cascade control is also included.
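A toy sketch of the entropy-based flavour of assessment: estimate the differential entropy of the loop error under the installed controller and under a benchmark series standing in for the cascade-invariant part, and form a ratio-type index. The uniform disturbances, the coloured extra term and the exact index definition are illustrative assumptions, not the paper's estimated-ARMA construction.

```python
import numpy as np
from scipy.stats import differential_entropy

rng = np.random.default_rng(7)
benchmark = rng.uniform(-1, 1, size=5000)           # stand-in for the cascade-invariant part
# installed loop: benchmark plus extra coloured disturbance let through by poor tuning
extra = np.convolve(rng.uniform(-1, 1, size=5000), [0.6, 0.3, 0.1], mode="same")
actual = benchmark + extra

H_bench = differential_entropy(benchmark)
H_actual = differential_entropy(actual)
index = np.exp(H_bench - H_actual)                  # 1 means entropy-optimal, < 1 otherwise
print(f"entropy-based performance index ~ {index:.2f}")
```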
Abstract:
The number of elderly patients requiring hospitalisation in Europe is rising. With a greater proportion of elderly people in the population comes a greater demand for health services and, in particular, hospital care. As a growing number of elderly patients requiring hospitalisation compete with non-elderly patients for a fixed (and in some cases decreasing) number of hospital beds, waiting times become much longer, often with a less satisfactory hospital experience. However, if a better understanding of the recurring nature of elderly patient movements between the community and hospital can be developed, then it may be possible for alternative provisions of care in the community to be put in place and thus prevent readmission to hospital. The research in this paper aims to model the multiple patient transitions between hospital and community by utilising a mixture of conditional Coxian phase-type distributions that incorporates Bayes' theorem. For the purpose of demonstration, the results of a simulation study are presented and the model is applied to hospital readmission data from the Lombardy region of Italy.
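A small simulation sketch of the building block referred to above: sampling lengths of stay from a two-phase Coxian phase-type distribution. The rates and the continuation probability are arbitrary illustrative values, not fitted to the Lombardy data.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_coxian2(lam1, lam2, p12, size):
    """Two-phase Coxian: leave phase 1 at rate lam1, then either absorb
    (probability 1 - p12) or continue to phase 2 and absorb at rate lam2."""
    t1 = rng.exponential(1.0 / lam1, size)
    go_on = rng.random(size) < p12
    t2 = np.where(go_on, rng.exponential(1.0 / lam2, size), 0.0)
    return t1 + t2

stays = sample_coxian2(lam1=0.5, lam2=0.1, p12=0.3, size=10000)   # days in hospital
print("mean / 90th percentile stay:", stays.mean(), np.quantile(stays, 0.9))
```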
Abstract:
In dynamic spectrum access networks, cognitive radio terminals monitor their spectral environment in order to detect and opportunistically access unoccupied frequency channels. The overall performance of such networks depends on the spectrum occupancy or availability patterns. Accurate knowledge on the channel availability enables optimum performance of such networks in terms of spectrum and energy efficiency. This work proposes a novel probabilistic channel availability model that can describe the channel availability in different polarizations for mobile cognitive radio terminals that are likely to change their orientation during their operation. A Gaussian approximation is used to model the empirical occupancy data that was obtained through a measurement campaign in the cellular frequency bands within a realistic operational scenario.
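A toy sketch of the modelling step: approximate per-channel occupancy (duty-cycle) samples for two antenna orientations with Gaussian fits and read off the probability that occupancy stays below an access threshold. The sample values and the threshold are synthetic placeholders, not the measurement-campaign data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
occ_vertical = np.clip(rng.normal(0.35, 0.08, 1000), 0, 1)      # assumed occupancy samples
occ_horizontal = np.clip(rng.normal(0.25, 0.10, 1000), 0, 1)

for name, occ in [("vertical", occ_vertical), ("horizontal", occ_horizontal)]:
    mu, sigma = norm.fit(occ)
    p_avail = norm.cdf(0.3, mu, sigma)              # P(occupancy < 0.3) under the Gaussian model
    print(f"{name}: mu={mu:.2f}, sigma={sigma:.2f}, P(available)={p_avail:.2f}")
```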
Abstract:
Few models can explain Mach bands (Pessoa, 1996 Vision Research 36 3205-3227). Our own model employs multiscale line and edge coding by simple and complex cells. Lines are represented by Gaussian functions, edges by bipolar, Gaussian-truncated error functions. The widths of these functions are coupled to the scales of the underlying cells, and the amplitudes are determined by their responses.
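A brief sketch of the two primitives referred to above: a Gaussian "line" profile and a bipolar, Gaussian-truncated error-function "edge" profile, with widths tied to a scale parameter. The exact coupling of width and amplitude to the cell responses in the model is not reproduced here.

```python
import numpy as np
from scipy.special import erf

def line_profile(x, scale, amplitude=1.0):
    """Gaussian line primitive at a given scale."""
    return amplitude * np.exp(-0.5 * (x / scale) ** 2)

def edge_profile(x, scale, amplitude=1.0):
    """Bipolar error-function edge, truncated by a Gaussian envelope."""
    return amplitude * erf(x / (scale * np.sqrt(2))) * np.exp(-0.5 * (x / (2 * scale)) ** 2)

x = np.linspace(-10, 10, 401)
for s in (1.0, 2.0, 4.0):                           # coarser scales give wider events
    print(s, line_profile(x, s).max(), edge_profile(x, s).max())
```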