969 results for Systems engineering


Relevance: 60.00%

Abstract:

When multiple sources provide information about the same unknown quantity, their fusion into a synthetic, interpretable message is often a tedious problem, especially when the sources are conflicting. In this paper, we propose to use possibility theory and the notion of maximal coherent subsets, often used in logic-based representations, to build a fuzzy belief structure that will be instrumental both for extracting useful information about various features of the information conveyed by the sources and for compressing this information into a unique possibility distribution. Extensions and properties of the basic fusion rule are also studied.
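
For illustration, a minimal Python sketch of fusion by maximal coherent subsets, assuming the sources are given as possibility distributions on a shared discrete domain; the disjunction-of-conjunctions rule below is the classical MCS combination, and the paper's fuzzy belief structure and extensions are not reproduced.

```python
from itertools import combinations
import numpy as np

def coherent(dists, idx):
    """A group of sources is coherent if their conjunctive (min) combination is normalised."""
    return np.min([dists[i] for i in idx], axis=0).max() == 1.0

def maximal_coherent_subsets(dists):
    n = len(dists)
    subsets = [set(c) for r in range(n, 0, -1) for c in combinations(range(n), r)
               if coherent(dists, c)]
    # keep only subsets not strictly contained in another coherent subset
    return [s for s in subsets if not any(s < t for t in subsets)]

def fuse(dists):
    """Disjunction (max) over the MCS of the conjunction (min) inside each MCS."""
    mcs = maximal_coherent_subsets(dists)
    return np.max([np.min([dists[i] for i in s], axis=0) for s in mcs], axis=0), mcs

# Three sources over the domain {0,...,4}; sources 0 and 1 agree, source 2 conflicts.
pi = [np.array([1.0, 0.8, 0.3, 0.0, 0.0]),
      np.array([1.0, 0.7, 0.5, 0.0, 0.0]),
      np.array([0.0, 0.0, 0.2, 1.0, 0.6])]
fused, groups = fuse(pi)
print(groups)   # [{0, 1}, {2}]
print(fused)    # a single synthetic possibility distribution
```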

Relevance: 60.00%

Abstract:

Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identifying damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy and at the high scanning frequencies needed in the field. Secondly, a real bridge has a road profile and is loaded by a sprung vehicle applying time-varying forces rather than a constant load; as a result, an algorithm that detects damage as a singularity in a plot of wavelet coefficients versus time appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal instead of the deflection signal, (b) employing a vehicle-bridge finite element interaction model, and (c) developing a novel wavelet-based approach using the wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.
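
A minimal Python sketch of the wavelet-energy indicator described above, assuming one acceleration record per bridge section is available; toy signals are used here, the vehicle-bridge finite element model is not reproduced, and PyWavelets supplies the continuous wavelet transform.

```python
import numpy as np
import pywt

def section_wavelet_energy(acc, scales, wavelet="mexh"):
    """Total wavelet energy of one acceleration record over a band of scales."""
    coeffs, _ = pywt.cwt(acc, scales, wavelet)
    return np.sum(coeffs ** 2)

# Toy stand-in records: one per section; the "damaged" section carries a small
# localised discontinuity superimposed on the smooth response.
fs = 1000                                   # sampling frequency, Hz
t = np.arange(0, 2, 1 / fs)
smooth = np.sin(2 * np.pi * 3 * t)
records = [smooth + 0.01 * np.random.randn(t.size) for _ in range(5)]
records[2] = records[2] + 0.2 * np.exp(-((t - 1.0) ** 2) / 1e-4)   # localised singularity

scales = np.arange(1, 33)
energies = [section_wavelet_energy(r, scales) for r in records]
print(np.argmax(energies))   # the section whose energy stands out (expected: 2)
```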

Relevance: 60.00%

Abstract:

The initial part of this paper reviews the early challenges (circa 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations, as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high-performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
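
As a small illustration of the redundant arithmetic mentioned above, the sketch below shows carry-save accumulation in Python: each step is a fixed-latency bitwise operation with no carry propagation, which is the property that enables fine-grained pipelining; the msb-first pipelined IIR structures themselves are not reproduced.

```python
def csa(a, b, c):
    """3:2 carry-save adder: compresses three operands into a (sum, carry) pair."""
    s = a ^ b ^ c                            # bitwise sum, no carries propagated
    carry = ((a & b) | (a & c) | (b & c)) << 1
    return s, carry

def carry_save_sum(values):
    s, c = 0, 0
    for v in values:
        s, c = csa(s, c, v)                  # constant-time step per operand
    return s + c                             # single carry-propagating add at the end

print(carry_save_sum([13, 7, 255, 1024, 3]))   # 1302, same as sum(...)
```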

Relevance: 60.00%

Abstract:

DDR-SDRAM-based data lookup techniques are evolving into a core technology for packet lookup applications in data networks, benefiting from the high density, high bandwidth and low price of the DDR memory products on the market. Our proposed DDR-SDRAM-based lookup circuit is capable of performing IP header lookup at network line-rates of up to 10 Gbps, providing a high-performance and economical solution for packet header inspection. ©2008 IEEE.
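
A minimal software sketch of the lookup idea, assuming the packet header (5-tuple) is hashed into a wide, flat memory; a plain Python list stands in for the DDR-SDRAM banks, and burst access, bank interleaving and the hardware pipeline are not modelled.

```python
import zlib

MEM_SLOTS = 1 << 16                       # pretend memory: 65536 buckets
memory = [[] for _ in range(MEM_SLOTS)]   # each bucket holds (5-tuple, action) pairs

def slot(five_tuple):
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % MEM_SLOTS    # CRC32 as a cheap, hardware-friendly hash

def insert(five_tuple, action):
    memory[slot(five_tuple)].append((five_tuple, action))

def lookup(five_tuple):
    for stored, action in memory[slot(five_tuple)]:
        if stored == five_tuple:          # one (burst) read, then compare
            return action
    return "default-forward"

insert(("10.0.0.1", "192.168.1.9", 6, 1234, 80), "queue-3")
print(lookup(("10.0.0.1", "192.168.1.9", 6, 1234, 80)))   # queue-3
print(lookup(("10.0.0.2", "192.168.1.9", 6, 1234, 80)))   # default-forward
```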

Relevance: 60.00%

Abstract:

The power back-off performance of a new variant of the power-combining Class-E amplifier is investigated in this study for the first time under different amplitude-modulation schemes: continuous wave (CW), envelope elimination and restoration (EER), envelope tracking (ET) and outphasing. Finite DC-feed inductances, rather than the massive RF chokes used in the classic single-ended Class-E power amplifier (PA), are obtained from an approximate yet effective frequency-domain circuit analysis and provide the means to increase the modulation bandwidth by up to 80% over the classic single-ended Class-E PA. This increased modulation bandwidth is required for the linearity improvement in EER/ET transmitters. The modified output load network of the power-combining Class-E amplifier, adopting a three-harmonic termination technique, relaxes the design specifications for the additional filtering block typically required at the output stage of the transmitter chain. Qualitative agreement between simulation and measurement results was achieved for all four schemes, with the ET technique proving superior to the other schemes. When the PA is used within the ET scheme, an increase in average drain efficiency of as much as 40% with respect to CW excitation was obtained for a multi-carrier input signal with a 12 dB peak-to-average power ratio. © 2011 The Institution of Engineering and Technology.
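
For reference, a small numerical sketch of the peak-to-average power ratio figure quoted above, assuming a simple multi-carrier test signal of equal-amplitude tones with random phases; the amplifier circuit itself is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_carriers = 100_000, 16
t = np.arange(0, 0.02, 1 / fs)
phases = rng.uniform(0, 2 * np.pi, n_carriers)
x = sum(np.cos(2 * np.pi * (1_000 + 200 * k) * t + phases[k]) for k in range(n_carriers))

p_inst = x ** 2                               # instantaneous power into a unit load
papr_db = 10 * np.log10(p_inst.max() / p_inst.mean())
print(f"PAPR = {papr_db:.1f} dB")             # on the order of 10 dB for 16 carriers
```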

Relevance: 60.00%

Abstract:

This paper presents a thorough investigation of combined allocator design for Networks-on-Chip (NoC). In particular, we discuss the interlock in combined NoC allocators, which is caused by the priority-updating lock mechanism between the local and global arbiters. Architectures and implementations of three interlock-free combined allocators are presented in detail. Their cost, critical path and network-level performance are demonstrated based on a 65-nm standard cell technology.
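
A minimal behavioural sketch of a separable, combined allocator with round-robin arbiters, of the kind discussed above; priorities are updated only when a grant is accepted, which is the local/global coupling that gives rise to the interlock, and the three interlock-free architectures of the paper are not reproduced.

```python
class RoundRobinArbiter:
    def __init__(self, n):
        self.n, self.prio = n, 0

    def arbitrate(self, requests):
        """Grant the first asserted request at or after the priority pointer."""
        for offset in range(self.n):
            i = (self.prio + offset) % self.n
            if requests[i]:
                return i
        return None

    def update(self, granted):
        self.prio = (granted + 1) % self.n   # move priority past the winner

def allocate(req, in_arb, out_arb):
    """req[i][j] is True if input i requests output j.  Returns {input: output}."""
    n_in, n_out = len(req), len(req[0])
    # stage 1: each input's local arbiter picks one of its output requests
    picks = {i: in_arb[i].arbitrate(req[i]) for i in range(n_in)}
    # stage 2: each output's global arbiter picks among the competing inputs
    grants = {}
    for j in range(n_out):
        contenders = [picks.get(i) == j for i in range(n_in)]
        winner = out_arb[j].arbitrate(contenders)
        if winner is not None:
            grants[winner] = j
            in_arb[winner].update(j)         # priorities updated only on a full grant
            out_arb[j].update(winner)
    return grants

in_arb = [RoundRobinArbiter(3) for _ in range(3)]
out_arb = [RoundRobinArbiter(3) for _ in range(3)]
req = [[True, True, False], [True, False, False], [False, True, True]]
print(allocate(req, in_arb, out_arb))        # {0: 0, 2: 1} on the first cycle
```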

Relevance: 60.00%

Abstract:

This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and the AR face recognition database, with variable noise corruption of the speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database, and facial identification performance on the AR database, are comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with the bimodal systems based on multicondition model training or missing-feature decoding alone.
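
A minimal Python sketch of feature-level bimodal fusion with a cosine-style score, assuming fixed-length audio and face feature vectors have already been extracted; each modality is length-normalised before concatenation so that differing feature sizes and dynamic ranges do not dominate, and the paper's modified cosine similarity, feature selection and multicondition training are not reproduced.

```python
import numpy as np

def fuse(audio_vec, face_vec):
    a = audio_vec / (np.linalg.norm(audio_vec) + 1e-12)
    f = face_vec / (np.linalg.norm(face_vec) + 1e-12)
    return np.concatenate([a, f])

def cosine_score(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def identify(probe, gallery):
    """Return the enrolled identity whose fused template best matches the probe."""
    return max(gallery, key=lambda name: cosine_score(probe, gallery[name]))

rng = np.random.default_rng(1)
gallery = {name: fuse(rng.normal(size=128), rng.normal(size=512)) for name in ("alice", "bob")}
probe = gallery["bob"] + 0.1 * rng.normal(size=640)    # noisy observation of "bob"
print(identify(probe, gallery))                        # bob
```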

Relevance: 60.00%

Abstract:

The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications. © 2010 Elsevier Inc. All rights reserved.
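
A minimal sketch of the canonical particle swarm update at the core of the schedulers above, shown on a toy continuous objective; the workflow encoding, the security-constraint model and the variable neighborhood extension of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, objective(g)

sphere = lambda z: float(np.sum(z ** 2))             # stand-in for a makespan/cost model
best, cost = pso(sphere, dim=5)
print(cost)                                          # close to 0
```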

Relevance: 60.00%

Abstract:

This paper contributes to and expands on the Nakagami-m phase model. It derives exact, closed-form expressions for both the phase cumulative distribution function and its inverse. In addition, empirical first- and second-order statistics obtained from measurements conducted in a body-area network scenario were used to fit the phase probability density function, the phase cumulative distribution function, and the phase crossing rate expressions. Remarkably, the seemingly unlikely shapes of the phase statistics predicted by the theoretical formulations are actually encountered in practice.
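
For reference, the phase probability density function of the balanced Nakagami-m phase model on which such results build can be written in the standard form below; the paper's exact closed-form CDF and its inverse are not reproduced here.

```latex
% Phase density of the balanced Nakagami-m phase model (standard form):
\[
  f_{\Theta}(\theta)
    = \frac{\Gamma(m)\,\lvert \sin(2\theta) \rvert^{\,m-1}}
           {2^{m}\,\Gamma^{2}\!\left(\tfrac{m}{2}\right)},
  \qquad -\pi \le \theta < \pi .
\]
```

For m = 1 this collapses to the uniform density 1/(2π), the Rayleigh case, and the m-dependent |sin 2θ| term is what produces the unusual phase shapes referred to above.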

Relevance: 60.00%

Abstract:

This paper emerged from work supported by EPSRC grant GR/S84354/01 and proposes a method of determining principal curves, using spline functions, in principal component analysis (PCA) for the representation of non-linear behaviour in process monitoring. Although principal curves are well established, they are difficult to implement in practice if a large number of variables are analysed. The significant contribution of this paper is that the proposed method has minimal complexity, assuming simple spline geometry, thus enabling efficient computation. The paper provides a foundation for further work where multiple curves may be required to represent underlying non-linear information in complex data.
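
A minimal Python sketch of the simple-spline-geometry idea: order the samples by their first principal-component score and fit a smoothing spline of each measured variable against that score, giving a one-dimensional curve through the data. This is a deliberately crude one-pass approximation; the paper's algorithm and its process-monitoring statistics are not reproduced.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
t = rng.uniform(-2, 2, 300)
X = np.column_stack([t, 0.3 * t ** 2]) + 0.1 * rng.normal(size=(300, 2))   # noisy curved data

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]                                   # first principal-component score

order = np.argsort(score)
splines = [UnivariateSpline(score[order], X[order, j], s=3.0)   # smoothing ~ noise level
           for j in range(X.shape[1])]               # one low-complexity spline per variable

grid = np.linspace(score.min(), score.max(), 50)
curve = np.column_stack([spl(grid) for spl in splines])         # points on the principal curve
print(curve[:3])
```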

Relevance: 60.00%

Abstract:

In this paper, a newly proposed machining method named “surface defect machining” (SDM) [Wear, 302, 2013 (1124-1135)] was explored for machining of nanocrystalline beta silicon carbide (3C-SiC) at 300 K using MD simulation. The results were compared with isothermal high-temperature machining at 1200 K under the same machining parameters, emulating ductile-mode micro laser-assisted machining (µ-LAM), and with conventional cutting at 300 K. In the MD simulation, surface defects were generated on top of the (010) surface of the 3C-SiC workpiece prior to cutting, and the workpiece was then cut along the <100> direction using a single point diamond tool at a cutting speed of 10 m/s. Cutting forces, sub-surface deformation layer depth, temperature in the shear zone, shear plane angle and friction coefficient were used to characterize the response of the workpiece. Simulation results showed that SDM provides the unique advantage of a decreased shear plane angle, which eases the shearing action. This in turn causes an increased average coefficient of friction compared with isothermal cutting (carried out at 1200 K) and normal cutting (carried out at 300 K). The increased friction coefficient was, however, found to aid the cutting action of the tool owing to an intermittent drop in the cutting forces, lowering the stresses on the cutting tool and reducing the operational temperature. Analysis shows that the introduction of surface defects prior to conventional machining can be a viable choice for machining a wide range of ceramics, hard steels and composites compared to hot machining.
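
A small sketch of how response quantities such as the friction coefficient and shear plane angle are commonly derived from the force and chip-geometry outputs of a cutting run, using the standard Merchant relations for orthogonal cutting; the MD model itself is not reproduced and the numbers below are placeholders, not values from the paper.

```python
import math

def friction_coefficient(f_cut, f_thrust, rake_deg):
    """Apparent tool-chip friction coefficient from cutting and thrust forces."""
    a = math.radians(rake_deg)
    return (f_thrust + f_cut * math.tan(a)) / (f_cut - f_thrust * math.tan(a))

def shear_plane_angle(uncut_thickness, chip_thickness, rake_deg):
    """Shear plane angle (degrees) from the chip-thickness ratio."""
    r = uncut_thickness / chip_thickness
    a = math.radians(rake_deg)
    return math.degrees(math.atan(r * math.cos(a) / (1 - r * math.sin(a))))

# Placeholder inputs (hypothetical force and thickness values, negative-rake tool).
print(friction_coefficient(f_cut=180.0, f_thrust=120.0, rake_deg=-25.0))
print(shear_plane_angle(uncut_thickness=1.3, chip_thickness=2.6, rake_deg=-25.0))
```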

Relevance: 60.00%

Abstract:

A new domain-specific reconfigurable sub-pixel interpolation architecture for multi-standard video Motion Estimation (ME) is presented. The mixed use of parallel and serial-input FIR filters achieves a high throughput rate and efficient silicon utilisation. Flexibility is achieved by using a multiplexed reconfigurable data-path controlled by a selection signal. Silicon design studies show that this can be implemented using 34.8K gates, with area and performance that compare very favourably with existing fixed solutions based solely on the H.264 standard. ©2008 IEEE.
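
A minimal sketch of the kind of sub-pixel filtering such an architecture implements: the H.264 six-tap half-pel luma interpolation filter (coefficients 1, -5, 20, 20, -5, 1 with rounding and clipping), applied to one row of samples in plain Python; the reconfigurable multi-standard data-path of the paper is not reproduced.

```python
def h264_half_pel_row(pixels):
    """Half-sample values between pixels[i+2] and pixels[i+3] for each valid window."""
    taps = (1, -5, 20, 20, -5, 1)
    out = []
    for i in range(len(pixels) - 5):
        acc = sum(t * p for t, p in zip(taps, pixels[i:i + 6]))
        out.append(min(255, max(0, (acc + 16) >> 5)))   # round, divide by 32, clip to 8 bits
    return out

row = [10, 12, 40, 200, 220, 90, 35, 30]
print(h264_half_pel_row(row))   # interpolated half-pel samples between the central pixels
```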