176 results for Control-Display Systems.
Abstract:
As a result of resource limitations, state in branch predictors is frequently shared between uncorrelated branches. This interference can significantly limit prediction accuracy. In current predictor designs, the branches sharing prediction information are determined by their branch addresses and thus branch groups are arbitrarily chosen during compilation. This feasibility study explores a more analytic and systematic approach to classify branches into clusters with similar behavioral characteristics. We present several ways to incorporate this cluster information as an additional information source in branch predictors.
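As an informal illustration of how such cluster information might be used (the abstract does not specify a particular predictor organisation), the sketch below indexes a gshare-style table with an offline-assigned cluster ID instead of the branch address; the cluster map, table size and fallback behaviour are assumptions, not the paper's design.

```python
# Illustrative sketch: a gshare-style predictor whose table index mixes an
# offline-assigned cluster ID with global branch history, so branches in the
# same behavioural cluster share 2-bit counters on purpose rather than by
# arbitrary address collision.

TABLE_BITS = 12
TABLE_SIZE = 1 << TABLE_BITS

class ClusterIndexedPredictor:
    def __init__(self, cluster_of_branch):
        self.counters = [2] * TABLE_SIZE            # 2-bit saturating counters, weakly taken
        self.history = 0                            # global history register
        self.cluster_of_branch = cluster_of_branch  # assumed: PC -> cluster ID (e.g. from profiling)

    def _index(self, pc):
        cluster = self.cluster_of_branch.get(pc, pc)   # fall back to the address if unclustered
        return (cluster ^ self.history) & (TABLE_SIZE - 1)

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2     # True = predict taken

    def update(self, pc, taken):
        i = self._index(pc)
        self.counters[i] = min(3, self.counters[i] + 1) if taken else max(0, self.counters[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & (TABLE_SIZE - 1)
```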
Abstract:
To date, the majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models focus mainly on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match the local behaviour of the system well when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguish such models from black-box models such as neural networks. Therefore, finding a desirable set of fuzzy partitions and, hence, identifying the corresponding consequent models that can be directly explained in terms of system behaviour is a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, in which the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient-descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function when using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those from some well-known methods.
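A minimal sketch of the central idea that the consequents can be treated as a dependent set of the premise parameters: for fixed Gaussian premise centres and widths, the linear consequents follow from a least-squares solve, so only the premise parameters remain as free variables for the Levenberg-Marquardt update. The Gaussian membership form, function names and shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def premise_regressors(X, centres, widths):
    """X: (N, d) inputs; centres/widths: (R, d) Gaussian premise parameters."""
    N, d = X.shape
    R = centres.shape[0]
    # Rule firing strengths (product of per-dimension Gaussian memberships)
    diff = (X[:, None, :] - centres[None, :, :]) / widths[None, :, :]
    w = np.exp(-0.5 * np.sum(diff ** 2, axis=2))            # (N, R)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-12)          # normalised firing strengths
    # Regressor for linear consequents y_r = a_r^T x + b_r, weighted by firing strength
    Phi = np.concatenate([w[:, :, None] * X[:, None, :],    # (N, R, d)
                          w[:, :, None]], axis=2)           # bias term -> (N, R, d+1)
    return w, Phi.reshape(N, R * (d + 1))

def fit_consequents(X, y, centres, widths):
    """Consequents as a dependent set: least-squares solve given the premises."""
    _, Phi = premise_regressors(X, centres, widths)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residual = y - Phi @ theta        # this residual drives the premise update (e.g. LM)
    return theta, residual
```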
Abstract:
In this paper, a novel framework for visual tracking of human body parts is introduced. The approach presented demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera by using a limb-tracking system based on a 2-D articulated model and a double-tracking strategy. Its key contribution is that the 2-D model is constrained only by biomechanical knowledge about human bipedal motion, instead of relying on constraints that are linked to a specific activity or camera view. These characteristics make our approach suitable for real visual surveillance applications. Experiments on a set of indoor and outdoor sequences demonstrate the effectiveness of our method in tracking human lower body parts. Moreover, a detailed comparison with current tracking methods is presented.
Abstract:
In many situations, the number of data points is fixed, and the asymptotic convergence results of popular model selection tools may not be useful. A new algorithm for model selection, RIVAL (removing irrelevant variables amidst Lasso iterations), is presented and shown to be particularly effective for a large but fixed number of data points. The algorithm is motivated by an application in nuclear material detection where all unknown parameters must be non-negative. Thus, the positive Lasso and its variants are analyzed. Then, RIVAL is proposed and shown to have some desirable properties, namely that the number of data points needed for convergence is smaller than for existing methods.
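The positive-Lasso building block referred to above can be sketched with an off-the-shelf solver; the data, penalty weight and support threshold below are made up for illustration, and the full RIVAL iteration is not reproduced.

```python
# Non-negative (positive) Lasso step: least squares with an L1 penalty and the
# constraint that all coefficients are >= 0, as required when the unknown
# parameters are physical quantities that cannot be negative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(50)

model = Lasso(alpha=0.05, positive=True)        # positive=True enforces beta >= 0
model.fit(X, y)
support = np.flatnonzero(model.coef_ > 1e-8)    # variables kept as relevant
print(support, model.coef_[support])
```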
Abstract:
In this paper, a data-driven orthogonal basis function approach is proposed for non-parametric FIR nonlinear system identification. The basis functions are not fixed a priori and match the structure of the unknown system automatically. This eliminates the problem of blindly choosing the basis functions without a priori structural information. Further, based on the proposed basis functions, approaches are proposed for model order determination and regressor selection, along with their theoretical justifications. © 2008 IEEE.
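The paper's own basis construction is not detailed in the abstract; as a generic illustration of extracting an orthogonal regression basis from data, the sketch below builds candidate regressors from lagged inputs (plus crude squared terms) and orthogonalises them with a QR decomposition so each direction's contribution can be ranked for regressor selection. The toy system and all names are assumptions.

```python
import numpy as np

def lagged_regressors(u, n_lags):
    """Candidate FIR regressors: u(k-1), ..., u(k-n_lags) plus their squares."""
    N = len(u)
    cols = [u[n_lags - i - 1:N - i - 1] for i in range(n_lags)]
    X = np.column_stack(cols)
    return np.hstack([X, X ** 2])            # crude nonlinear extension for illustration

rng = np.random.default_rng(1)
u = rng.standard_normal(300)
y = 0.8 * u[:-1] - 0.3 * u[:-1] ** 2 + 0.05 * rng.standard_normal(299)

X = lagged_regressors(u, n_lags=4)
y = y[3:]                                    # align targets with the lagged regressor rows
Q, R = np.linalg.qr(X)                       # data-driven orthogonal basis Q
contrib = (Q.T @ y) ** 2                     # energy explained by each orthogonal direction
order = np.argsort(contrib)[::-1]            # rank directions for order/regressor selection
print(order[:3], contrib[order[:3]])
```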
Abstract:
This paper proposes max separation clustering (MSC), a new non-hierarchical clustering method used for feature extraction from optical emission spectroscopy (OES) data for plasma etch process control applications. OES data is high-dimensional and inherently highly redundant, with the result that it is difficult, if not impossible, to recognize useful features and key variables by direct visualization. MSC is developed for clustering variables with distinctive patterns and providing an effective pattern representation by a small number of representative variables. The relationship between signal-to-noise ratio (SNR) and clustering performance is highlighted, leading to a requirement that low-SNR signals be removed before applying MSC. Experimental results on industrial OES data show that MSC with low-SNR signal removal produces an effective summarization of the dominant patterns in the data.
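A hedged sketch of the low-SNR screening step described above (the MSC algorithm itself is not reproduced): each OES wavelength channel gets a crude SNR estimate, and channels below a threshold are dropped before clustering. The moving-average trend estimate and the 10 dB threshold are illustrative assumptions.

```python
import numpy as np

def snr_db(channel):
    """Crude SNR estimate: trend power over the power of the detrended residual."""
    trend = np.convolve(channel, np.ones(9) / 9, mode="same")   # moving-average trend
    noise = channel - trend
    return 10 * np.log10((np.var(trend) + 1e-12) / (np.var(noise) + 1e-12))

def remove_low_snr(oes, threshold_db=10.0):
    """oes: (n_samples, n_wavelengths) matrix; returns indices of channels kept."""
    return [j for j in range(oes.shape[1]) if snr_db(oes[:, j]) >= threshold_db]
```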
Abstract:
In recent years, unmanned vehicles have grown in popularity, with an ever-increasing number of applications in industry, the military and research within the air, ground and marine domains. In particular, the challenges posed by increasing the level of autonomy of unmanned marine vehicles include automatic obstacle avoidance and conformance with the Rules of the Road when navigating in the presence of other maritime traffic. The USV Master Plan established for the US Navy outlines a list of objectives for improving autonomy in order to increase mission diversity and reduce the amount of supervisory intervention. This paper addresses the specific development needs based on notable research carried out to date, primarily with regard to navigation, guidance, control and motion planning. The integration of the International Regulations for Preventing Collisions at Sea within the obstacle avoidance protocols seeks to prevent maritime accidents attributed to human error. The addition of these critical safety measures may be key to a future growth in demand for USVs, as they serve to pave the way for establishing legal policies for unmanned vessels.
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations in which the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
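A minimal sketch of the hybrid propagation idea, assuming a toy model that is monotone in its epistemic input: the aleatory input is sampled by Monte Carlo while the epistemic input is kept as an interval, so each sample yields an output interval and the probability of exceeding a threshold is bounded rather than pinned to a single value. The model, distribution, interval and threshold are invented for illustration.

```python
import numpy as np

def model(a, e):
    return a * e + 2.0 * a          # toy risk model, monotone in the epistemic input e

rng = np.random.default_rng(42)
a_samples = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # aleatory: sampled
e_interval = (0.8, 1.4)                                     # epistemic: set of possible values

lower = model(a_samples, e_interval[0])    # monotone model: endpoints bound the output
upper = model(a_samples, e_interval[1])

threshold = 5.0
p_low = np.mean(lower > threshold)         # lower bound on P(output > threshold)
p_high = np.mean(upper > threshold)        # upper bound on P(output > threshold)
print(f"P(output > {threshold}) lies in [{p_low:.3f}, {p_high:.3f}]")
```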
Abstract:
When multiple sources provide information about the same unknown quantity, their fusion into a synthetic interpretable message is often a tedious problem, especially when sources are conflicting. In this paper, we propose to use possibility theory and the notion of maximal coherent subsets, often used in logic-based representations, to build a fuzzy belief structure that will be instrumental both for extracting useful information about various features of the information conveyed by the sources and for compressing this information into a unique possibility distribution. Extensions and properties of the basic fusion rule are also studied.
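A simplified, crisp-interval sketch of fusion by maximal coherent subsets (the possibilistic, fuzzy-belief-structure machinery of the paper is not reproduced): mutually intersecting source intervals are grouped into maximal subsets, each subset is intersected, and the resulting intervals are kept side by side.

```python
from itertools import chain

def maximal_coherent_fusion(intervals):
    """intervals: list of (low, high) tuples, one per source."""
    points = sorted(set(chain.from_iterable(intervals)))
    # Covering set of sources on each elementary region between consecutive endpoints
    regions = []
    for lo, hi in zip(points[:-1], points[1:]):
        mid = (lo + hi) / 2.0
        cover = frozenset(i for i, (a, b) in enumerate(intervals) if a <= mid <= b)
        if cover:
            regions.append(cover)
    # Keep only maximal covering sets; each one is a maximal coherent subset
    maximal = {c for c in regions if not any(c < d for d in regions)}
    # Fuse each maximal coherent subset by intersecting its members
    return sorted((max(intervals[i][0] for i in s), min(intervals[i][1] for i in s))
                  for s in maximal)

# Three partially conflicting sources:
print(maximal_coherent_fusion([(1, 3), (2, 5), (4, 6)]))   # -> [(2, 3), (4, 5)]
```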
Abstract:
Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identify damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy, and at sufficiently high scanning frequencies, in the field. Secondly, the bridge will have a road profile and will be loaded by a sprung vehicle applying time-varying forces rather than a constant load. Therefore, an algorithm based on a plot of wavelet coefficients versus time to detect damage (a singularity in the plot) appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal instead of the deflection signal; (b) employing a vehicle-bridge finite element interaction model; and (c) developing a novel wavelet-based approach using the wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.
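A minimal sketch of the sectional wavelet-energy idea, under the assumptions that the acceleration record spans one full crossing at constant vehicle speed and that a Mexican-hat continuous wavelet transform (via the PyWavelets package) is an acceptable stand-in for the paper's wavelet choice; the section count and scale range are illustrative.

```python
import numpy as np
import pywt

def sectional_wavelet_energy(accel, n_sections=20, scales=np.arange(1, 33)):
    """accel: acceleration samples spanning one full bridge crossing."""
    coefs, _ = pywt.cwt(accel, scales, "mexh")      # (n_scales, n_samples)
    energy = np.sum(coefs ** 2, axis=0)             # wavelet energy per time sample
    # Constant speed assumed: equal numbers of samples fall in each bridge section
    sections = np.array_split(energy, n_sections)
    return np.array([s.sum() for s in sections])    # energy content per section

# A damaged section would show up as a localised rise in this energy profile
# relative to a healthy baseline crossing.
```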
Abstract:
The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
Abstract:
DDR-SDRAM based data lookup techniques are evolving into a core technology for packet lookup applications in data networks, benefiting from the high density, high bandwidth and low price of the DDR memory products on the market. Our proposed DDR-SDRAM based lookup circuit is capable of achieving IP header lookup at network line-rates of up to 10 Gbps, providing a solution for high-performance and economical packet header inspection. © 2008 IEEE.
Abstract:
The power back-off performance of a new variant of the power-combining Class-E amplifier under different amplitude-modulation schemes, namely continuous wave (CW), envelope elimination and restoration (EER), envelope tracking (ET) and outphasing, is investigated for the first time in this study. Finite DC-feed inductances, rather than the massive RF chokes used in the classic single-ended Class-E power amplifier (PA), result from the approximate yet effective frequency-domain circuit analysis and provide the means to increase the modulation bandwidth by up to 80% over the classic single-ended Class-E PA. This increased modulation bandwidth is required for the linearity improvement in EER/ET transmitters. The modified output load network of the power-combining Class-E amplifier, adopting a three-harmonic termination technique, relaxes the design specifications for the additional filtering block typically required at the output stage of the transmitter chain. Qualitative agreement between simulation and measurement results was achieved for all four schemes, and the ET technique was shown to be superior to the other schemes. When the PA is used within the ET scheme, an increase in average drain efficiency of as much as 40% with respect to CW excitation was obtained for a multi-carrier input signal with a 12 dB peak-to-average power ratio. © 2011 The Institution of Engineering and Technology.