62 results for Adaptive neuro-fuzzy inference system


Relevance:

30.00%

Publisher:

Abstract:

In the identification of complex dynamic systems using fuzzy neural networks, one of the main issues is the curse of dimensionality, which makes it difficult to retain a large number of system inputs or to consider a large number of fuzzy sets. Moreover, due to correlations, not all possible network inputs or regression vectors are necessary; adding them simply increases the model complexity and degrades the network's generalisation performance. In this paper, the problem is solved by first proposing a fast algorithm for the selection of network terms, and then introducing a refinement procedure to tackle the correlation issue. Simulation results show the efficacy of the method.
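
The fast term-selection step described here can be illustrated with a generic greedy forward-selection routine. The sketch below is not the paper's algorithm; it simply ranks candidate regressors by how much of the remaining output variance each one explains and stops when the gain becomes negligible, which is the usual baseline before any correlation-aware refinement (the function name, tolerance and data layout are illustrative assumptions).

```python
import numpy as np

def forward_term_selection(candidates, y, max_terms, tol=1e-6):
    """Greedy forward selection of network terms by error-reduction gain.

    candidates : (N, M) matrix, one column per candidate term/regressor
    y          : (N,) target output
    Returns the indices of the selected columns.
    """
    residual = y.astype(float).copy()
    remaining = list(range(candidates.shape[1]))
    selected = []
    for _ in range(max_terms):
        best_idx, best_gain = None, 0.0
        for j in remaining:
            phi = candidates[:, j]
            denom = phi @ phi
            if denom < tol:
                continue
            # Variance of the residual explained by projecting onto this term.
            gain = (phi @ residual) ** 2 / denom
            if gain > best_gain:
                best_idx, best_gain = j, gain
        if best_idx is None or best_gain / (y @ y) < tol:
            break
        phi = candidates[:, best_idx]
        # Deflate the residual by the contribution of the chosen term.
        residual = residual - phi * (phi @ residual) / (phi @ phi)
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected
```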

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the design of a novel single-chip adaptive beamformer capable of performing 50 Gflops (giga floating-point operations per second). The core processor is a QR array implemented on a fully efficient linear systolic architecture, derived using a mapping that allows individual processors for boundary and internal cell operations. In addition, the paper highlights a number of rapid design techniques that have been used to realise this system. These include an architecture synthesis tool for quickly developing the circuit architecture and the use of a library of parameterisable silicon intellectual property (IP) cores to rapidly develop detailed silicon designs.
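
As a rough illustration of what a QR-array beamformer computes, the sketch below solves the block least-squares weight problem directly with a QR decomposition; the systolic array described in the paper performs the same triangularisation recursively, one snapshot at a time, in dedicated boundary and internal cells. The array size, reference signal and function name are illustrative assumptions.

```python
import numpy as np

def qr_ls_beamformer(X, d):
    """Block least-squares beamforming weights via QR decomposition.

    X : (N, M) matrix of array snapshots (N samples, M antenna elements)
    d : (N,)   desired/reference signal
    Solves min_w ||X w - d||_2 by the QR route that systolic arrays
    implement recursively.
    """
    Q, R = np.linalg.qr(X)                      # economy-size QR of the data matrix
    return np.linalg.solve(R, Q.conj().T @ d)   # back-substitution on the triangular factor

# Toy usage: 8-element array, 200 complex snapshots.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8)) + 1j * rng.standard_normal((200, 8))
d = X[:, 0] + 0.1 * rng.standard_normal(200)    # reference correlated with element 0
w = qr_ls_beamformer(X, d)
print(np.round(np.abs(w), 2))
```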

Relevance:

30.00%

Publisher:

Abstract:

1. The adaptive radiation of fishes into benthic (littoral) and pelagic (lentic) morphs in post-glacial lakes has become an important model system for speciation. Although these systems are well studied, there is little evidence of the existence of morphs that have diverged to utilize resources in the remaining principal lake habitat, the profundal zone.
2. Here, we tested phenotype-environment correlations of three whitefish (Coregonus lavaretus) morphs that have radiated into littoral, pelagic and profundal niches in northern Scandinavian lakes. We hypothesized that morphs in such trimorphic systems would have a morphology adapted to one of the principal lake habitats (littoral, pelagic or profundal niches). Most whitefish populations in the study area are formed by a single (monomorphic) whitefish morph, and we further hypothesized that these populations should display intermediate morphotypes and niche utilization. We used a combination of traditional (stomach content, habitat use, gill raker counts) and more recently developed (stable isotopes, geometric morphometrics) techniques to evaluate phenotype-environment correlations in two lakes with trimorphic and two lakes with monomorphic whitefish.
3. Distinct phenotype-environment correlations were evident for each principal niche in whitefish morphs inhabiting trimorphic lakes. Monomorphic whitefish exploited multiple habitats, had intermediate morphology, displayed increased variance in gill raker counts, and relied significantly on zooplankton, most likely due to relaxed resource competition.
4. We suggest that the ecological processes acting in the trimorphic lakes are similar to each other, and are driving the adaptive evolution of whitefish morphs, possibly leading to the formation of new species.

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses the monitoring of complex nonlinear and time-varying processes. Kernel principal component analysis (KPCA) has gained significant attention as a monitoring tool for nonlinear systems in recent years, but it relies on a fixed model that cannot be employed for time-varying systems. The contribution of this article is the development of a numerically efficient and memory-saving moving-window KPCA (MWKPCA) monitoring approach. The proposed technique incorporates an up- and downdating procedure to adapt (i) the data mean and covariance matrix in the feature space and (ii) the approximate eigenvalues and eigenvectors of the Gram matrix. The article shows that the proposed MWKPCA algorithm has a computational complexity of O(N^2), whilst batch techniques, e.g. the Lanczos method, are of O(N^3). Including the adaptation of the number of retained components and an l-step-ahead application of the MWKPCA monitoring model, the paper finally demonstrates the utility of the proposed technique using a simulated nonlinear time-varying system and recorded data from an industrial distillation column.
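
For orientation, the sketch below shows the naive moving-window alternative that MWKPCA is designed to avoid: for every new window the Gram matrix is rebuilt, centred and eigendecomposed from scratch at O(N^3) cost. The paper's contribution is to replace this recomputation with an up-/downdating step; the kernel choice, window length and function names below are illustrative assumptions.

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF kernel Gram matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def window_kpca(X_win, gamma, n_comp):
    """KPCA on one data window: centre the Gram matrix and eigendecompose it."""
    N = X_win.shape[0]
    K = rbf_gram(X_win, gamma)
    J = np.eye(N) - np.ones((N, N)) / N          # centring matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)              # batch eigendecomposition, O(N^3)
    order = np.argsort(vals)[::-1][:n_comp]
    return vals[order], vecs[:, order]

# Naive moving-window loop: slide the window and recompute from scratch.
# MWKPCA replaces this recomputation with an up-/downdate of the model.
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 4))
window = 100
for t in range(window, 120):
    vals, vecs = window_kpca(data[t - window:t], gamma=0.5, n_comp=3)
print(np.round(vals, 3))
```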

Relevance:

30.00%

Publisher:

Abstract:

A new technique based on adaptive code-to-user allocation for interference management on the downlink of BPSK-based TDD DS-CDMA systems is presented. The principle of the proposed technique is to exploit the dependency of multiple-access interference on the instantaneous symbol values of the active users. The objective is to adaptively allocate the available spreading sequences to users on a symbol-by-symbol basis to optimize the decision variables at the downlink receivers. The presented simulations show an overall system BER performance improvement of more than an order of magnitude with the proposed technique, while the adaptation overhead is kept below 10% of the available bandwidth.
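
A toy way to see the idea of symbol-by-symbol code-to-user allocation is to brute-force the assignment for a handful of users, picking the permutation of spreading codes that maximises the worst-case decision variable for the current symbol values. The sketch below only illustrates the objective; the paper's adaptive algorithm avoids this exhaustive search, and the code set, user count and function name are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def best_code_assignment(codes, symbols):
    """Brute-force code-to-user allocation for one symbol interval (toy scale).

    codes   : (K, L) matrix of +/-1 spreading sequences (K codes, length L)
    symbols : (K,)   current BPSK symbols (+/-1) of the K active users
    Picks the permutation of codes that maximises the worst-case despread
    decision variable across users.
    """
    K, L = codes.shape
    R = codes @ codes.T / L                 # normalised cross-correlations
    best_perm, best_margin = None, -np.inf
    for perm in permutations(range(K)):
        Rp = R[np.ix_(perm, perm)]
        # Decision variable of user k: own symbol plus multiple-access interference,
        # multiplied by the own symbol so that larger values mean a safer decision.
        margin = (symbols * (Rp @ symbols)).min()
        if margin > best_margin:
            best_perm, best_margin = perm, margin
    return best_perm, best_margin

codes = np.sign(np.random.default_rng(2).standard_normal((4, 16)))
symbols = np.array([1, -1, 1, 1])
print(best_code_assignment(codes, symbols))
```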

Relevance:

30.00%

Publisher:

Abstract:

We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a bidimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal for a bipartite entangled state composed of a single photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty in observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.
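
For reference, the standard CHSH combination of two-setting correlators that such an inequality builds on is shown below; the paper's hybrid witness adapts this to dichotomic measurements on the microscopic part and phase-space (Wigner-function) inference on the macroscopic part, which is not reproduced here.

```latex
% Standard CHSH combination of correlators E(a_i, b_j) for measurement
% settings a_1, a_2 (first party) and b_1, b_2 (second party):
S = E(a_1,b_1) + E(a_1,b_2) + E(a_2,b_1) - E(a_2,b_2),
\qquad |S| \le 2 \ \text{(local realism)},
\qquad |S| \le 2\sqrt{2} \ \text{(quantum bound)}.
```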

Relevance:

30.00%

Publisher:

Abstract:

Fuzzy-neural-network-based inference systems are well-known universal approximators which can produce linguistically interpretable results. Unfortunately, their dimensionality can be extremely high due to an excessive number of inputs and rules, which raises the need for overall structure optimization. In the literature, various input selection methods are available, but they are applied separately from rule selection, often without considering the fuzzy structure. This paper proposes an integrated framework to optimize the number of inputs and the number of rules simultaneously. First, a method is developed to select the most significant rules, along with a refinement stage to remove unnecessary correlations. An improved information criterion is then proposed to find an appropriate number of inputs and rules to include in the model, leading to a balanced tradeoff between interpretability and accuracy. Simulation results confirm the efficacy of the proposed method.
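
As a hedged illustration of trading accuracy against structure size, the sketch below uses a generic AIC-style criterion in which the penalty grows with the number of rules and inputs; it is not the improved information criterion proposed in the paper, and the parameter count and penalty weight are illustrative assumptions.

```python
import numpy as np

def structure_criterion(residuals, n_rules, n_inputs, n_samples, penalty=2.0):
    """AIC-style score for a candidate fuzzy model structure.

    Trades goodness of fit against the number of free parameters implied by
    the rule base: first-order consequents contribute n_rules * (n_inputs + 1)
    parameters, Gaussian premises roughly 2 parameters per input per rule.
    """
    mse = float(np.mean(np.asarray(residuals) ** 2))
    k = n_rules * (n_inputs + 1) + 2 * n_rules * n_inputs
    return n_samples * np.log(mse) + penalty * k

# Compare two candidate structures on validation residuals of the same length.
rng = np.random.default_rng(3)
res_small = 0.30 * rng.standard_normal(200)   # e.g. 3 rules, 2 inputs
res_big = 0.28 * rng.standard_normal(200)     # e.g. 9 rules, 5 inputs
print(structure_criterion(res_small, 3, 2, 200))
print(structure_criterion(res_big, 9, 5, 200))
```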

Relevance:

30.00%

Publisher:

Abstract:

The majority of learning methods reported to date for Takagi-Sugeno-Kang fuzzy neural models focus mainly on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the local behaviour of the system when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguish such models from black-box models such as neural networks. Therefore, finding a desirable set of fuzzy partitions and, hence, identifying the corresponding consequent models that can be directly explained in terms of system behaviour is a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient-descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function when using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues of the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those obtained by some well-known methods.
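
The key idea of treating the consequent parameters as a set dependent on the premise parameters can be sketched with a variable-projection formulation: for any premise parameters, the optimal linear consequents follow from a least-squares solve, so the residual is a function of the premise parameters alone and can be handed to a Levenberg-Marquardt solver. The toy model below relies on a numerical Jacobian rather than the analytic one derived in the paper, and the rule count, membership functions and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def tsk_residuals(premise, x, y, n_rules):
    """Residuals of a 1-input TSK model whose consequents are eliminated.

    premise = [c_1..c_R, s_1..s_R]: Gaussian centres and widths.
    For the given premise parameters, the linear consequent parameters are the
    least-squares optimum, so the residual depends on the premise only.
    """
    c = premise[:n_rules]
    s = np.abs(premise[n_rules:]) + 1e-6
    mu = np.exp(-0.5 * ((x[:, None] - c[None, :]) / s[None, :]) ** 2)
    w = mu / mu.sum(axis=1, keepdims=True)            # normalised firing strengths
    Phi = np.hstack([w, w * x[:, None]])              # regressors for a + b*x per rule
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # optimal consequents
    return Phi @ theta - y

rng = np.random.default_rng(4)
x = np.linspace(-2, 2, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(200)
n_rules = 4
p0 = np.concatenate([np.linspace(-2, 2, n_rules), np.full(n_rules, 0.8)])
sol = least_squares(tsk_residuals, p0, args=(x, y, n_rules), method='lm')
print(np.sqrt(np.mean(sol.fun ** 2)))                 # RMS error of the fitted model
```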

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the trajectory control of a 2-DOF mini electro-hydraulic excavator using fuzzy self-tuning with a neural network algorithm. First, the mathematical model of the 2-DOF mini electro-hydraulic excavator is derived. A fuzzy PID controller and a fuzzy self-tuning controller with a neural network are then designed for circular trajectory following. The excavator's two links are driven by an electric-motor-controlled pump system. The experimental results demonstrate that the proposed controllers achieve better control performance than the conventional controller.
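
The sketch below is only a crude stand-in for the controllers in the paper: a PID loop whose gains are rescaled on-line by a coarse fuzzy-like rule on the error magnitude, applied to a first-order toy plant rather than the electro-hydraulic excavator model. The gain values, scaling rule and plant are illustrative assumptions, and the neural-network tuning stage is omitted.

```python
def fuzzy_selftuning_pid(setpoint, plant_step, kp0=2.0, ki0=0.5, kd0=0.1,
                         dt=0.01, steps=2000):
    """Toy self-tuning PID loop: gains are scaled by a coarse fuzzy-like rule
    on the error magnitude (large error -> stronger proportional action,
    small error -> stronger integral action)."""
    y, integ, prev_err = 0.0, 0.0, 0.0
    out = []
    for _ in range(steps):
        err = setpoint - y
        big = min(abs(err) / max(abs(setpoint), 1e-9), 1.0)  # membership of "error is big"
        kp = kp0 * (1.0 + big)
        ki = ki0 * (2.0 - big)
        kd = kd0
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y = plant_step(y, u, dt)
        prev_err = err
        out.append(y)
    return out

# First-order plant used purely for illustration.
plant = lambda y, u, dt: y + dt * (-y + u)
trace = fuzzy_selftuning_pid(1.0, plant)
print(round(trace[-1], 3))
```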

Relevance:

30.00%

Publisher:

Abstract:

Shape memory alloy (SMA) actuators, which have the ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. The nonlinear hysteresis effects present in SMA actuators pose a problem for the motion control of these smart actuators. This paper investigates the control of SMA actuators in both simulation and experiment. In the simulation, the numerical Preisach model with geometrical interpretation is used for hysteresis modelling of the SMA actuators. This model is then incorporated in a closed-loop PID control strategy. The optimal values of the PID parameters are determined using a genetic algorithm to minimize the mean squared error between the desired output displacement and the simulated output. However, when these parameters are applied to the real SMA system, the control performance degrades relative to simulation, since the system is disturbed by unknown factors and changes in its surrounding environment. A further automated readjustment of the PID parameters using fuzzy logic is therefore proposed to compensate for this limitation. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
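
A minimal sketch of the GA-based PID tuning stage is given below, assuming a first-order toy actuator in place of the Preisach hysteresis simulation and a simple real-coded GA; the population size, mutation width and gain bounds are illustrative assumptions, and the fuzzy-logic readjustment stage used in the experiments is not reproduced.

```python
import numpy as np

def simulate(gains, ref, dt=0.01):
    """Track a reference with PID gains on a toy first-order actuator model
    (a stand-in for the Preisach SMA simulation) and return the MSE."""
    kp, ki, kd = gains
    y, integ, prev_err, err_sum = 0.0, 0.0, 0.0, 0.0
    for r in ref:
        err = r - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)
        prev_err = err
        err_sum += err ** 2
        if not np.isfinite(y) or not np.isfinite(err_sum):
            return np.inf                       # unstable gain set
    return err_sum / len(ref)

def ga_tune_pid(ref, pop_size=30, gens=40, sigma=0.3, seed=5):
    """Minimal real-coded GA over (Kp, Ki, Kd): truncation selection,
    arithmetic crossover and Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 5.0, size=(pop_size, 3))
    for _ in range(gens):
        fitness = np.array([simulate(ind, ref) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b + rng.normal(0, sigma, 3)
            children.append(np.clip(child, 0.0, 10.0))
        pop = np.vstack([parents, children])
    fitness = np.array([simulate(ind, ref) for ind in pop])
    return pop[np.argmin(fitness)]

ref = np.ones(500)                              # step reference, for illustration
print(np.round(ga_tune_pid(ref), 2))
```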

Relevance:

30.00%

Publisher:

Abstract:

In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each specific dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on a user's response and the system's observation of the user. This allows a dialogue system to understand a user better and provide a more suitable mode of communication. Since measures of the quality of the user's interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when choosing a dialogue strategy. In this paper, we investigate how to select a dialogue strategy by aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors affect the degree of freedom of a dialogue. When the dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.
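
A very rough flavour of querying probabilistic knowledge can be given by enumerating the possible worlds of a few independent factors and summing the probability of the worlds in which a rule for choosing a flexible strategy holds, which is the intuition behind the distribution semantics used in PLP systems. The factors, probabilities and decision rule below are invented purely for illustration, and the paper's degree-of-satisfaction machinery is not reproduced.

```python
from itertools import product

# Hypothetical factors the system might detect during the dialogue, with
# made-up probabilities that each one actually holds.
factors = {
    "user_is_experienced": 0.7,
    "previous_turn_successful": 0.8,
    "speech_recognition_confident": 0.6,
}

def p_flexible(factors):
    """Crude stand-in for a PLP query: sum the probability of all possible
    worlds in which at least two of the independent factors hold, i.e. worlds
    where a simple rule would recommend a flexible dialogue strategy."""
    names = list(factors)
    total = 0.0
    for world in product([True, False], repeat=len(names)):
        p = 1.0
        for name, val in zip(names, world):
            p *= factors[name] if val else 1.0 - factors[name]
        if sum(world) >= 2:
            total += p
    return total

print(round(p_flexible(factors), 3))
```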

Relevance:

30.00%

Publisher:

Abstract:

The inherent difficulty of thread-based shared-memory programming has recently motivated research in high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overhead, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
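
A toy model of the dynamic dependence checks that SCOOP's static analysis tries to eliminate is sketched below: each task declares read and write footprints, the runtime makes a task wait on every earlier task whose footprint conflicts with its own, and tasks proven independent statically skip the check entirely. The task structure, flag name and scheduling policy are illustrative assumptions, not SCOOP's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)
    statically_independent: bool = False   # would be set by a SCOOP-style static analysis

def conflicts(a: Task, b: Task) -> bool:
    """Two tasks conflict if one writes a memory region the other touches."""
    return bool(a.writes & (b.reads | b.writes) or b.writes & a.reads)

def schedule(tasks):
    """Toy dynamic dependence check: a task waits on every earlier task it
    conflicts with; tasks proven independent statically skip the check."""
    order = []
    for i, t in enumerate(tasks):
        deps = [] if t.statically_independent else \
               [u.name for u in tasks[:i] if conflicts(t, u)]
        order.append((t.name, deps))
    return order

tasks = [
    Task("t0", reads={"a"}, writes={"b"}),
    Task("t1", reads={"b"}, writes={"c"}),                        # waits on t0
    Task("t2", reads={"x"}, writes={"y"}, statically_independent=True),
]
print(schedule(tasks))
```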