875 results for Continuous Variable Systems
Abstract:
In this paper, we address the use of CBR in collaboration with numerical engineering models. This collaborative combination has a particular application in engineering domains where numerical models are used. We term this domain “Case Based Engineering” (CBE), and present the general architecture of a CBE system. We define and discuss the general characteristics of CBE and the special problems which arise. These are: the handling of engineering constraints of both continuous and nominal kinds; interpolation over both continuous and nominal variables; and conformability for interpolation. To illustrate the utility of the proposed method, and to provide practical examples of the general theory, the paper describes a practical application of the CBE architecture, known as CBE-CONVEYOR, which has been implemented by the authors.

Pneumatic conveying is an important transportation technology in the bulk solids conveying industry. One of the major industry concerns is the attrition of powders and granules during pneumatic conveying. To minimize particle attrition during pneumatic conveying, engineers want to know what design parameters they should use in building a conveyor system. To do this, engineers often run simulations repetitively to find appropriate input parameters. CBE-CONVEYOR is shown to speed up conventional methods of searching for solutions, and to solve directly problems that would otherwise require considerable intervention from the engineer.
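The retrieve-and-interpolate idea at the heart of CBE can be sketched as nearest-case retrieval under a Gower-style distance: nominal attributes contribute a 0/1 mismatch, continuous attributes a range-normalised difference, and the outcome is interpolated from the retrieved cases with inverse-distance weights. This is a minimal sketch only; the attribute names (`air_velocity`, `pipe_material`, `attrition`) are invented for illustration and are not taken from CBE-CONVEYOR.

```python
def mixed_distance(a, b, ranges):
    """Gower-style distance over continuous and nominal attributes.
    ranges maps attribute name -> numeric range, or None for nominal."""
    total = 0.0
    for name, rng in ranges.items():
        if rng is None:                          # nominal: 0/1 mismatch
            total += 0.0 if a[name] == b[name] else 1.0
        else:                                    # continuous: range-normalised
            total += abs(a[name] - b[name]) / rng
    return total / len(ranges)

def retrieve_and_interpolate(query, cases, ranges, k=2):
    """Retrieve the k nearest cases and interpolate their outcomes
    with inverse-distance weights."""
    near = sorted(cases, key=lambda c: mixed_distance(query, c, ranges))[:k]
    weights = [1.0 / (mixed_distance(query, c, ranges) + 1e-9) for c in near]
    outcomes = [c["attrition"] for c in near]
    return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)

# hypothetical conveyor cases, purely illustrative
ranges = {"air_velocity": 20.0, "pipe_material": None}
cases = [
    {"air_velocity": 10.0, "pipe_material": "steel", "attrition": 0.1},
    {"air_velocity": 20.0, "pipe_material": "steel", "attrition": 0.3},
    {"air_velocity": 15.0, "pipe_material": "rubber", "attrition": 0.9},
]
query = {"air_velocity": 15.0, "pipe_material": "steel"}
estimate = retrieve_and_interpolate(query, cases, ranges)
```

Because the query matches the two steel cases equally, their outcomes are averaged; the rubber case is excluded by the nominal mismatch, which is the conformability-for-interpolation concern in miniature.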
Abstract:
Heating of an idealised polymer load in a novel open-ended variable frequency microwave oven is numerically simulated using a coupled solver approach. The frequency-agile microwave oven bonding system (FAMOBS) is developed to meet rapid polymer curing requirements in microelectronics and optoelectronics manufacturing. The heating of an idealised polymer load has been investigated through numerical modelling. Assessment of the system comprises simulation of the electromagnetic fields and of the temperature distribution within the load. Initial simulation results are presented and contrasted with experimental analysis of the field distribution.
Abstract:
Curing of encapsulant material in a simplified microelectronics package using an open oven Variable Frequency Microwave (VFM) system is numerically simulated using a coupled solver approach. A numerical framework capable of simulating electromagnetic field distribution within the oven system, plus heat transfer, cure rate, degree of cure and thermally induced stresses within the encapsulant material is presented. The discrete physical processes have been integrated into a fully coupled solution, enabling usefully accurate results to be generated. Numerical results showing the heating and curing of the encapsulant material have been obtained and are presented in this contribution. The requirement to capture inter-process coupling and the variation in dielectric and thermophysical material properties is discussed and illustrated with simulation results.
Abstract:
Dual-section variable frequency microwave systems enable rapid, controllable heating of materials within an individual surface mount component in a chip-on-board assembly. The ability to process devices individually allows components with disparate processing requirements to be mounted on the same assembly. The temperature profile induced by the microwave system can be specifically tailored to the needs of the component, allowing optimisation of the degree of cure whilst minimising thermomechanical stresses. This paper presents a review of dual-section microwave technology and its application to the curing of thermosetting polymer materials in microelectronics applications. Curing processes using both conventional and microwave technologies are assessed and compared. Results indicate that dual-section microwave systems are able to cure individual surface mount packages in a significantly shorter time, at the expense of an increase in thermomechanical stresses and a greater variation in degree of cure.
Abstract:
A comparison between monthly mean Continuous Plankton Recorder (CPR) data and zooplankton data caught during winter and early spring with different sampling devices in the North Sea is presented to estimate the relative error in abundance of CPR measurements. The CPR underestimates the abundance of zooplankton by a factor of 25 during winter and early spring, and by a factor of 18 if Oithona spp. is not considered. This has serious implications for the estimation of biomass as well as for modelling ecosystem dynamics.
Abstract:
Diatoms exist in almost every aquatic regime; they are responsible for 20% of global carbon fixation and 25% of global primary production, and are regarded as a key food for copepods, which are subsequently consumed by larger predators such as fish and marine mammals. A decreasing abundance and a vulnerability to climatic change in the North Atlantic Ocean have been reported in the literature. In the present work, a data matrix composed of concurrent satellite remote sensing and Continuous Plankton Recorder (CPR) in situ measurements was collated for the same spatial and temporal coverage in the Northeast Atlantic. Artificial neural networks (ANNs) were applied to recognize and learn the complex non-monotonic and non-linear relationships between diatom abundance and spatiotemporal environmental factors. Because of their ability to mimic non-linear systems, ANNs proved highly effective in modelling the diatom distribution in the marine ecosystem. The results of this study reveal that diatoms have a regular seasonal cycle, with their abundance most strongly influenced by sea surface temperature (SST) and light intensity. The models indicate that extreme positive SSTs decrease diatom abundances regardless of other climatic conditions. These results provide information on the ecology of diatoms that may advance our understanding of the potential response of diatoms to climatic change.
Abstract:
With the advent of new video standards such as MPEG-4 part-10 and H.264/H.26L, demands for advanced video coding, particularly in the area of variable block size video motion estimation (VBSME), are increasing. In this paper, we propose a new one-dimensional (1-D) very large-scale integration architecture for full-search VBSME (FSVBSME). The VBS sum of absolute differences (SAD) computation is performed by re-using the results of smaller sub-block computations. These are distributed and combined by incorporating a shuffling mechanism within each processing element. Whereas a conventional 1-D architecture can process only one motion vector (MV), this new architecture can process up to 41 MV sub-blocks (within a macroblock) in the same number of clock cycles.
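The SAD-reuse scheme can be illustrated in a few lines: compute the sixteen 4x4 sub-block SADs once, then form every larger H.264 partition as a sum of those results, yielding all 41 variable-block-size SADs of a macroblock. This is a software sketch of the arithmetic only; the paper's contribution is the 1-D hardware architecture and its shuffling mechanism, which this does not model.

```python
import numpy as np

def vbs_sads(cur, ref):
    """All 41 H.264 variable-block-size SADs for one 16x16 macroblock,
    built by re-using the sixteen 4x4 sub-block SADs."""
    diff = np.abs(cur.astype(np.int64) - ref.astype(np.int64))
    s4x4 = diff.reshape(4, 4, 4, 4).sum(axis=(1, 3))  # 4x4 grid of 4x4 SADs
    # every larger partition is a sum of 4x4 results, never a fresh SAD
    s4x8 = s4x4[:, 0::2] + s4x4[:, 1::2]    # 8 blocks, 4 rows x 8 cols
    s8x4 = s4x4[0::2, :] + s4x4[1::2, :]    # 8 blocks, 8 rows x 4 cols
    s8x8 = s8x4[:, 0::2] + s8x4[:, 1::2]    # 4 blocks of 8x8
    s8x16 = s8x8[:, 0] + s8x8[:, 1]         # 2 blocks, 8 rows x 16 cols
    s16x8 = s8x8[0, :] + s8x8[1, :]         # 2 blocks, 16 rows x 8 cols
    s16x16 = s8x8.sum()                     # the full macroblock
    return {"4x4": s4x4, "4x8": s4x8, "8x4": s8x4, "8x8": s8x8,
            "8x16": s8x16, "16x8": s16x8, "16x16": s16x16}

rng = np.random.default_rng(7)
cur = rng.integers(0, 256, size=(16, 16))
ref = rng.integers(0, 256, size=(16, 16))
sads = vbs_sads(cur, ref)
```

Counting the entries gives 16 + 8 + 8 + 4 + 2 + 2 + 1 = 41 motion-vector sub-blocks, matching the figure quoted in the abstract.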
Abstract:
Recently, Ziman et al. [Phys. Rev. A 65, 042105 (2002)] introduced the concept of a universal quantum homogenizer, a quantum machine that takes as input a given (system) qubit initially in an arbitrary state rho and a set of N reservoir qubits initially prepared in the state xi. The homogenizer realizes, in the limiting sense, the transformation such that at the output each qubit is in an arbitrarily small neighborhood of the state xi, irrespective of the initial states of the system and the reservoir qubits. In this paper we generalize the concept of quantum homogenization to qudits, that is, to d-dimensional quantum systems. We prove that the partial-swap operation induces a contractive map whose fixed point is the original state of the reservoir. We propose an optical realization of quantum homogenization for Gaussian states. We prove that an incoming state of a photon field is homogenized in an array of beam splitters. Using Simon's criterion, we study entanglement between the outgoing beams from the beam splitters. We derive an inseparability condition for a pair of output beams as a function of the degree of squeezing in the input beams.
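The collision-model behaviour of the homogenizer can be sketched for qubits (d = 2): a partial-swap unitary U = cos(eta) I + i sin(eta) SWAP couples the system to one fresh reservoir qubit at a time, and the trace distance between the system state and the reservoir state xi shrinks with every collision. This simplified sketch tracks only the system's reduced state and ignores the reservoir correlations studied in the paper; the particular states and the value eta = 0.5 are arbitrary choices.

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def partial_swap(eta):
    """U = cos(eta)*I + i*sin(eta)*SWAP, unitary because SWAP**2 = I."""
    return np.cos(eta) * np.eye(4, dtype=complex) + 1j * np.sin(eta) * SWAP

def collide(rho, xi, U):
    """One collision: evolve rho (x) xi, then trace out the reservoir qubit."""
    full = U @ np.kron(rho, xi) @ U.conj().T
    return np.einsum('abcb->ac', full.reshape(2, 2, 2, 2))

def trace_distance(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

xi = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # reservoir state (arbitrary)
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)     # system state (arbitrary)
U = partial_swap(0.5)

dists = [trace_distance(rho, xi)]
for _ in range(30):        # 30 collisions, each with a fresh reservoir qubit
    rho = collide(rho, xi, U)
    dists.append(trace_distance(rho, xi))
```

After a few dozen collisions the system state sits in a small neighbourhood of xi, which is the contractive fixed-point behaviour the abstract describes.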
Abstract:
A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems, in comparison with three other techniques, namely genetic algorithms with parameter space size adjustment (GAPSSA) [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search-space-updating technique is statistically superior to its counterparts.
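The boundary-updating idea can be sketched as follows: after each generation, every variable's upper and lower bound is reset from the mean and standard deviation of the current elite, and mutation then samples inside the updated space. This is a toy sketch on the sphere function with invented operator choices (truncation selection, blend crossover, mean ± 3 sigma bounds); it is not the benchmarked GAPSSA or SZGA procedure, nor necessarily the paper's exact statistic.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga_with_space_update(f, dim=5, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    lo, hi = [-10.0] * dim, [10.0] * dim
    P = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[:pop // 4]
        # dynamically adjust each variable's bounds from the elite's
        # distribution statistics (mean +/- 3 standard deviations)
        for d in range(dim):
            vals = [ind[d] for ind in elite]
            m = sum(vals) / len(vals)
            s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            lo[d], hi[d] = m - 3.0 * s, m + 3.0 * s
        children = [list(P[0]), list(P[1])]        # elitism
        while len(children) < pop:
            a, b = rng.sample(P[:pop // 2], 2)     # truncation selection
            child = []
            for d in range(dim):
                g = a[d] + rng.random() * (b[d] - a[d])  # blend crossover
                if rng.random() < 0.1:                   # mutate inside the
                    g = rng.uniform(lo[d], hi[d])        # updated space
                child.append(min(max(g, lo[d]), hi[d]))
            children.append(child)
        P = children
    return min(P, key=f)

best = ga_with_space_update(sphere)
```

Because the bounds track the elite rather than shrinking at a preset rate, the search space contracts quickly on this well-posed problem, illustrating the adaptive behaviour the abstract contrasts with a fixed a priori reduction.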
Abstract:
A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a hard mixed-integer problem. The proposed algorithm performs these two tasks within an integrated analytic framework, and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Secondly, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm.
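The forward-construction idea can be sketched as greedy stepwise selection: at each step every candidate (centre, width) regressor is generated on the fly, scored by its least-squares residual, and the best one is appended to the model, so no bank of candidate regressors is ever stored. This is a rough sketch only; a small width grid stands in for the paper's continuous parameter optimisation, and the selection rule is the generic forward-selection heuristic rather than the CFA itself.

```python
import numpy as np

def gauss(x, c, w):
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def forward_rbf(x, y, n_centres=6, widths=(0.3, 0.6, 1.2, 2.4)):
    """Greedy forward construction of an RBF model: candidate regressors
    are built on demand and scored, never precomputed and stored."""
    cols, centres = [], []
    for _ in range(n_centres):
        best = None
        for c in x:                  # candidate centres = training inputs
            for w in widths:         # width grid in place of continuous search
                A = np.column_stack(cols + [gauss(x, c, w)])
                theta = np.linalg.lstsq(A, y, rcond=None)[0]
                sse = float(np.sum((y - A @ theta) ** 2))
                if best is None or sse < best[0]:
                    best = (sse, c, w)
        cols.append(gauss(x, best[1], best[2]))
        centres.append((best[1], best[2]))
    theta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]

    def predict(xq):
        return np.column_stack([gauss(xq, c, w) for c, w in centres]) @ theta

    return predict

x = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.sin(x)
model = forward_rbf(x, y)
```

Six greedily chosen Gaussians reproduce the sine training data closely, showing how network construction and parameter fitting interleave in a forward scheme.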
Abstract:
Treasure et al. (2004) recently proposed a new subspace-monitoring technique, based on the N4SID algorithm, within the multivariate statistical process control framework. This dynamic monitoring method requires considerably fewer variables to be analysed when compared with dynamic principal component analysis (PCA). The contribution charts and variable reconstruction traditionally employed for static PCA are analysed in a dynamic context. Both may be affected by the ratio of the number of retained components to the total number of analysed variables. Particular problems arise if this ratio is large, and a new reconstruction chart is introduced to overcome them. The utility of such a dynamic contribution chart and variable reconstruction is shown in a simulation and by application to industrial data from a distillation unit.
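The contribution-chart idea can be sketched in the static PCA setting: fit a PCA model to normal operating data, project a new sample onto the residual subspace, and read off each variable's squared contribution to the squared prediction error (SPE). This is a static-PCA sketch on synthetic data; the paper's subject is the harder dynamic (N4SID-based) case, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "normal operating" data: 5 variables driven by 2 latent factors
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                  # loadings of the 2 retained components
proj = P @ P.T                # projection onto the PCA model plane

def spe_contributions(x):
    """Each variable's squared contribution to the squared prediction
    error (SPE) of one sample -- the basis of a contribution chart."""
    residual = (np.eye(5) - proj) @ ((x - mu) / sd)
    return residual ** 2

# a fresh sample with a sensor fault injected into variable 3
sample = X[0].copy()
sample[3] += 8.0 * sd[3]
contrib = spe_contributions(sample)
```

With 2 components retained out of 5 variables the residual subspace is large and contributions are informative; the abstract's point is that when the retained-to-total ratio grows, this simple decomposition degrades, motivating the new reconstruction chart.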
Abstract:
A conventional local model (LM) network consists of a set of affine local models blended together using appropriate weighting functions. Such networks have poor interpretability since the dynamics of the blended network are only weakly related to the underlying local models. In contrast, velocity-based LM networks employ strictly linear local models to provide a transparent framework for nonlinear modelling in which the global dynamics are a simple linear combination of the local model dynamics. A novel approach for constructing continuous-time velocity-based networks from plant data is presented. Key issues including continuous-time parameter estimation, correct realisation of the velocity-based local models and avoidance of the input derivative are all addressed. Application results are reported for the highly nonlinear simulated continuous stirred tank reactor process.
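The transparency property, that the global dynamics are a weighted linear combination of the local model dynamics, can be sketched for a scalar system with two strictly linear local models. The coefficients and Gaussian validity functions below are invented for illustration; the paper's velocity-based realisation and input-derivative-avoidance machinery are not reproduced.

```python
import numpy as np

# Two strictly linear local models dx/dt = a_i*x + b_i*u, blended by
# normalised Gaussian validity functions of the scheduling variable x.
# All numbers are illustrative choices, not taken from the paper.
A = [-1.0, -3.0]
B = [1.0, 2.0]
CENTRES = [0.0, 1.0]

def weights(x):
    w = np.exp(-0.5 * (x - np.array(CENTRES)) ** 2)
    return w / w.sum()

def blended_derivative(x, u):
    """Global dynamics = weighted linear combination of the local
    linear dynamics -- the transparency property of the network."""
    w = weights(x)
    return float(sum(wi * (ai * x + bi * u) for wi, ai, bi in zip(w, A, B)))

# Euler simulation of the unit-step response (u = 1)
dt, x = 0.01, 0.0
traj = [x]
for _ in range(2000):
    x += dt * blended_derivative(x, 1.0)
    traj.append(x)
```

Because both local models are stable and the blended derivative is a convex combination of their derivatives, the trajectory settles between the two local equilibria (2/3 and 1), and the global behaviour can be read directly off the local models.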