65 results for Uncertain nonlinear systems
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear
principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
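As an illustration of the general idea only (the authors' modified autoassociative network and T2T architecture are not specified here), the following Python sketch compresses the recorded variables with linear PCA and then trains a bottleneck autoencoder on the reduced, better-conditioned scores; the component counts, network sizes and the squared-prediction-error monitoring statistic are illustrative assumptions.

```python
# Minimal sketch of the general PCA -> autoassociative-network idea described
# above (not the authors' T2T architecture): compress the recorded variables
# with linear PCA, then train a bottleneck autoencoder on the reduced scores.
# The choices n_components=5 and the (8, 2, 8) hidden layers are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # stand-in for recorded process data

pca = PCA(n_components=5)                 # reduced set of linear PCs
T = pca.fit_transform(X)                  # decorrelated, better-conditioned scores

# Autoassociative net: inputs reproduce themselves through a narrow bottleneck.
aan = MLPRegressor(hidden_layer_sizes=(8, 2, 8),
                   activation="tanh", max_iter=2000, random_state=0)
aan.fit(T, T)

residual = T - aan.predict(T)             # reconstruction error per sample
spe = (residual ** 2).sum(axis=1)         # a large SPE would flag a possible fault
```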
Abstract:
According to Michael's selection theorem any surjective continuous linear operator from one Fr\'echet space onto another has a continuous (not necessarily linear) right inverse. Using this theorem Herzog and Lemmert proved that if $E$ is a Fr\'echet space and $T:E\to E$ is a continuous linear operator such that the Cauchy problem $\dot x=Tx$, $x(0)=x_0$ is solvable in $[0,1]$ for any $x_0\in E$, then for any $f\in C([0,1],E)$, there exists a continuous map $S:[0,1]\times E\to E$, $(t,x)\mapsto S_tx$ such that for any $x_0\in E$, the function $x(t)=S_tx_0$ is a solution of the Cauchy problem $\dot x(t)=Tx(t)+f(t)$, $x(0)=x_0$ (they call $S$ a fundamental system of solutions of the equation $\dot x=Tx+f$). We prove the same theorem, replacing "continuous" by "sequentially continuous", for locally convex spaces from a class which contains strict inductive limits of Fr\'echet spaces and strong duals of Fr\'echet--Schwartz spaces and is closed with respect to finite products and sequentially closed subspaces. The key point of the proof is an extension of the theorem on the existence of a sequentially continuous right inverse of any surjective sequentially continuous linear operator to some class of non-metrizable locally convex spaces.
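For readability, the objects described in this abstract can be written in display form; the LaTeX below simply restates the inhomogeneous Cauchy problem and the defining property of a fundamental system of solutions $S$, using only the notation already introduced above.

```latex
% Restatement of the abstract's objects in display form.
\[
  \dot x(t) = Tx(t) + f(t), \qquad x(0) = x_0 \in E, \qquad f \in C([0,1],E),
\]
\[
  S : [0,1] \times E \to E, \quad (t,x) \mapsto S_t x,
  \qquad x(t) = S_t x_0 \ \text{solves the problem for every } x_0 \in E.
\]
```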
Abstract:
We introduce and characterise time operators for unilateral shifts and exact endomorphisms. The associated shift representation of evolution is related to the spectral representation by a generalized Fourier transform. We illustrate the results for a simple exact system, namely the Renyi map.
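As a concrete anchor for the example cited above, the sketch below iterates the Renyi map in its standard doubling-map form x -> 2x (mod 1); this only exhibits the exact system used for illustration and does not reproduce the time-operator construction itself.

```python
# The Renyi map in its standard doubling-map form x -> 2x (mod 1),
# iterated numerically for a few steps.  Illustration only; the
# time-operator and spectral constructions are not reproduced here.
import numpy as np

def renyi_map(x):
    return (2.0 * x) % 1.0

x = 0.1234567
orbit = []
for _ in range(20):
    orbit.append(x)
    x = renyi_map(x)

print(np.round(orbit, 6))
```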
Abstract:
This article presents a novel classification of wavelet neural networks based on the orthogonality/non-orthogonality of neurons and the type of nonlinearity employed. On the basis of this classification, different network types are studied and their characteristics illustrated by means of simple one-dimensional nonlinear examples. For multidimensional problems, which are affected by the curse of dimensionality, the idea of spherical wavelet functions is considered. The behaviour of these networks is also studied for the modelling of a low-dimensional map.
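A minimal, hypothetical wavelet-network sketch in the spirit of the one-dimensional examples mentioned above: fixed Mexican-hat wavelets on a grid of translations and dilations, with output weights fitted by least squares. The grid sizes and the target function are illustrative and not taken from the article.

```python
# Simple (non-orthogonal) wavelet network: Mexican-hat wavelet neurons on a
# fixed grid of translations/dilations, output weights by least squares.
import numpy as np

def mexican_hat(u):
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-0.1 * x**2)          # illustrative 1-D nonlinear target

translations = np.linspace(-3, 3, 15)
dilations = [0.5, 1.0, 2.0]

# Design matrix: one column per (translation, dilation) wavelet neuron.
Phi = np.column_stack([mexican_hat((x - t) / s)
                       for s in dilations for t in translations])

weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ weights
print("RMS error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```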
Abstract:
This paper introduces two new techniques for determining nonlinear canonical correlation coefficients between two variable sets. A genetic strategy is incorporated to determine these coefficients. Compared to existing methods for nonlinear canonical correlation analysis (NLCCA), the benefits here are that the nonlinear mapping requires fewer parameters to be determined; consequently, a more parsimonious NLCCA model can be established which is therefore simpler to interpret. A further contribution of the paper is the investigation of a variety of nonlinear deflation procedures for determining the subsequent nonlinear canonical coefficients. The benefits of the new approaches presented are demonstrated by application to an example from the literature and to recorded data from an industrial melter process. These studies show the advantages of the new NLCCA techniques presented and suggest that a nonlinear deflation procedure should be considered.
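To make the genetic idea concrete, here is a toy sketch (not the authors' algorithm): each candidate encodes the parameters of two small tanh mappings, and fitness is the correlation between the resulting one-dimensional variates. The population size, mutation scale and data are all illustrative assumptions.

```python
# Toy genetic-style search for a pair of nonlinear canonical variates:
# maximise |corr(tanh(X w_x + b_x), tanh(Y w_y + b_y))| over the parameters.
import numpy as np

rng = np.random.default_rng(1)
n = 300
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([np.tanh(z) + 0.1 * rng.normal(size=n), rng.normal(size=n)])

def variate(data, params):
    w, b = params[:data.shape[1]], params[data.shape[1]]
    return np.tanh(data @ w + b)

def fitness(params):
    px, py = params[:3], params[3:]
    return abs(np.corrcoef(variate(X, px), variate(Y, py))[0, 1])

pop = rng.normal(size=(50, 6))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]                     # keep the fittest half
    children = parents + 0.1 * rng.normal(size=parents.shape)   # mutate
    pop = np.vstack([parents, children])

print("best canonical correlation:", max(fitness(p) for p in pop))
```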
Abstract:
Coffee model systems prepared from combinations of chlorogenic acid (CGA), N-alpha-acetyl-L-arginine (A), sucrose (S), and cellulose (C) were roasted at 240 °C for 4 min prior to analysis by UV-visible spectrophotometry, capillary zone electrophoresis (CZE), and the ABTS radical cation decolorization assay. The A/CGA/S/C and A/S/C systems were also fractionated by gel filtration chromatography. Antioxidant activity of the systems showed a positive, nonlinear relationship with the amount of CGA remaining after roasting. Sucrose degradation was a major source of color in the heated systems. There was no relationship between antioxidant activity and color generation.
Abstract:
The need to merge multiple sources of uncertain information is an important issue in many application areas, especially when there is potential for contradictions between sources. Possibility theory offers a flexible framework to represent, and reason with, uncertain information, and there is a range of merging operators, such as the conjunctive and disjunctive operators, for combining information. However, with the proposals to date, the context of the information to be merged is largely ignored during the process of selecting which merging operators to use. To address this shortcoming, in this paper, we propose an adaptive merging algorithm which selects largely partially maximal consistent subsets (LPMCSs) of sources, that can be merged through relaxation of the conjunctive operator, by assessing the coherence of the information in each subset. In this way, a fusion process can integrate both conjunctive and disjunctive operators in a more flexible manner and thereby be more context dependent. A comparison with related merging methods shows how our algorithm can produce a more consensual result.
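For background, the standard conjunctive (minimum) and disjunctive (maximum) operators of possibility theory referred to in the abstract can be applied to two possibility distributions over the same discrete domain as in the sketch below; the adaptive LPMCS-based selection itself is not reproduced, and the distributions are illustrative.

```python
# Standard possibilistic merging operators on two possibility distributions
# over the same discrete domain: conjunctive = min (with renormalisation by
# the degree of agreement), disjunctive = max (cautious merge under conflict).
import numpy as np

pi1 = np.array([1.0, 0.7, 0.2, 0.0])   # source 1
pi2 = np.array([0.3, 1.0, 0.6, 0.1])   # source 2 (partially conflicting)

conjunctive = np.minimum(pi1, pi2)
consistency = conjunctive.max()          # degree of agreement between the sources
conjunctive_norm = conjunctive / consistency if consistency > 0 else conjunctive

disjunctive = np.maximum(pi1, pi2)
print(conjunctive_norm, disjunctive)
```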
Abstract:
Use of the Dempster-Shafer (D-S) theory of evidence to deal with uncertainty in knowledge-based systems has been widely addressed. Several AI implementations have been undertaken based on the D-S theory of evidence or the extended theory. But the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) is still a major problem. This paper presents an approach to representing such knowledge, in which Yen’s probabilistic multi-set mappings have been extended to evidential mappings, and Shafer’s partition technique is used to get the mass function in a complex evidence space. Then, a new graphic method for describing the knowledge is introduced which is an extension of the graphic model by Lowrance et al. Finally, an extended framework for evidential reasoning systems is specified.
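As background for the evidential setting, the sketch below applies the standard Dempster rule of combination to two mass functions over a small frame of discernment; the evidential-mapping extension described in the abstract is not reproduced, and the focal elements and masses are illustrative.

```python
# Dempster's rule of combination for two mass functions over a small frame,
# with focal elements represented as frozensets and conflict renormalised away.
from itertools import product

frame = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, frame: 0.4}
m2 = {frozenset({"a", "b"}): 0.7, frame: 0.3}

combined, conflict = {}, 0.0
for (A, w1), (B, w2) in product(m1.items(), m2.items()):
    inter = A & B
    if inter:
        combined[inter] = combined.get(inter, 0.0) + w1 * w2
    else:
        conflict += w1 * w2

combined = {A: w / (1.0 - conflict) for A, w in combined.items()}
print(combined)   # mass committed to each focal element after combination
```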
Abstract:
In this paper, we propose a novel linear transmit precoding strategy for multiple-input, multiple-output (MIMO) systems employing improper signal constellations. In particular, improved zero-forcing (ZF) and minimum mean square error (MMSE) precoders are derived based on modified cost functions, and are shown to achieve a superior performance without loss of spectrum efficiency compared to the conventional linear and nonlinear precoders. The superiority of the proposed precoders over the conventional solutions is verified by both simulation and analytical results. The novel approach to precoding design is also applied to the case of an imperfect channel estimate with a known error covariance, as well as to the multi-user scenario where precoding based on the null space of the channel transmission matrix is employed to decouple multi-user channels. In both cases, the improved precoding schemes yield significant performance gains compared to their conventional counterparts.
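For reference, the conventional linear ZF and MMSE precoders that serve as the baseline can be sketched as follows; the improved precoders for improper constellations rely on modified cost functions that are not reproduced here, and the dimensions, noise level and symbol alphabet are illustrative.

```python
# Baseline linear precoders for a square MIMO channel H (nr x nt):
#   ZF:   F = H^H (H H^H)^{-1}
#   MMSE: F = H^H (H H^H + sigma^2 I)^{-1}   (regularised ZF)
import numpy as np

rng = np.random.default_rng(2)
nt, nr = 4, 4
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
sigma2 = 0.1                                   # noise variance

F_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
F_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(nr))

s = rng.choice([-1.0, 1.0], size=(nt, 1))      # e.g. a real-valued (improper) BPSK symbol vector
x = F_zf @ s                                   # transmitted signal (before power normalisation)
print(np.round(H @ x - s, 6))                  # ZF removes inter-stream interference
```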
Abstract:
A theory of strongly interacting Fermi systems of a few particles is developed. At high excitation energies (a few times the single-particle level spacing) these systems are characterized by an extreme degree of complexity due to strong mixing of the shell-model-based many-particle basis states by the residual two-body interaction. This regime can be described as many-body quantum chaos. Practically, it occurs when the excitation energy of the system is greater than a few single-particle level spacings near the Fermi energy. Physical examples of such systems are compound nuclei, heavy open-shell atoms (e.g. rare earths) and multicharged ions, molecules, clusters and quantum dots in solids. The main quantity of the theory is the strength function which describes spreading of the eigenstates over many-particle basis states (determinants) constructed using the shell-model orbital basis. A nonlinear equation for the strength function is derived, which enables one to describe the eigenstates without diagonalization of the Hamiltonian matrix. We show how to use this approach to calculate mean orbital occupation numbers and matrix elements between chaotic eigenstates and introduce typically statistical variables such as temperature in an isolated microscopic Fermi system of a few particles.
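For orientation, the strength function of a basis state spread over exact eigenstates is commonly defined as below and, in the strongly mixed regime, is often close to a Breit-Wigner profile with a spreading width; the nonlinear equation derived in the paper itself is not reproduced, and the notation is a generic assumption rather than the paper's.

```latex
% Generic definition of the strength function of a basis state |k> with
% components C_k^{(i)} over exact eigenstates of energy E^{(i)}, and its
% commonly used Breit--Wigner approximation with spreading width Gamma_k.
\[
  F_k(E) \;=\; \sum_i \bigl|C_k^{(i)}\bigr|^2 \,\delta\!\bigl(E - E^{(i)}\bigr)
  \;\approx\; \frac{1}{2\pi}\,
  \frac{\Gamma_k}{\bigl(E - E_k\bigr)^2 + \Gamma_k^2/4}.
\]
```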
Abstract:
Patterns forming spontaneously in extended, three-dimensional, dissipative systems are likely to excite several homogeneous soft modes (≈ hydrodynamic modes) of the underlying physical system, much more than quasi-one- (1D) and two-dimensional (2D) patterns are. The reason is the lack of damping boundaries. This paper compares two analytic techniques to derive the pattern dynamics from hydrodynamics, which are usually equivalent but lead to different results when applied to multiple homogeneous soft modes. Dielectric electroconvection in nematic liquid crystals is introduced as a model for 3D pattern formation. The 3D pattern dynamics including soft modes are derived. For slabs of large but finite thickness the description is reduced further to a 2D one. It is argued that the range of validity of 2D descriptions is limited to a very small region above threshold. The transition from 2D to 3D pattern dynamics is discussed. Experimentally testable predictions for the stable range of ideal patterns and the electric Nusselt numbers are made. For most results analytic approximations in terms of material parameters are given.