976 results for iterative algorithm


Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a new algorithm for reconstructing phase-encoded velocity images of catalytic reactors from undersampled NMR acquisitions. Previous work on this application has employed total variation and nonlinear conjugate gradients which, although promising, yield unsatisfactory, unphysical visual results. Our approach leverages prior knowledge about the piecewise smoothness of the phase map and the physical constraints imposed by the system under study. We show how iteratively regularizing the real and imaginary parts of the acquired complex image separately in a shift-invariant wavelet domain generally produces a piecewise-smooth velocity map. Using appropriately defined metrics, we demonstrate higher fidelity to the ground truth and to the physical system constraints than previous methods for this specific application. © 2013 IEEE.
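
As a rough illustration of the idea described above, the sketch below runs an iterative reconstruction that alternates wavelet-domain soft-thresholding of the real and imaginary parts with a generic data-consistency step. It is a minimal sketch only, assuming PyWavelets is available and using a hypothetical random sampling mask and threshold; the paper's data model, shift-invariant regularizer and physical constraints are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_shrink(img, wavelet="db4", level=2, thresh=0.05):
    """Soft-threshold detail coefficients in a stationary (shift-invariant) wavelet domain."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    shrunk = [(cA, tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
              for cA, details in coeffs]
    return pywt.iswt2(shrunk, wavelet)

def reconstruct(kspace, mask, n_iter=50, thresh=0.05):
    """Iterate: regularize real and imaginary parts separately, then
    re-insert the acquired k-space samples (data consistency)."""
    img = np.fft.ifft2(kspace)                      # zero-filled starting estimate
    for _ in range(n_iter):
        img = (wavelet_shrink(img.real, thresh=thresh)
               + 1j * wavelet_shrink(img.imag, thresh=thresh))
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]                      # keep measured samples
        img = np.fft.ifft2(k)
    return img

# Toy usage with a hypothetical 25% random sampling mask.
rng = np.random.default_rng(0)
true_img = np.outer(np.hanning(64), np.hanning(64)) * np.exp(1j * 0.5)
mask = rng.random((64, 64)) < 0.25
kspace = np.fft.fft2(true_img) * mask
recon = reconstruct(kspace, mask)
velocity_map = np.angle(recon)                      # the phase encodes velocity
```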

Relevance:

30.00%

Publisher:

Abstract:

The photon iterative numerical technique, which takes the outputs of the amplified spontaneous emission spectrum and the lasing mode as iteration variables when solving the rate equations, is proposed and applied to analyse the steady-state behaviour of conventional semiconductor optical amplifiers (SOAs) and gain-clamped semiconductor optical amplifiers (GCSOAs). Numerical results show that the photon iterative method is a much faster and more efficient algorithm than the conventional approach, which takes the carrier density distribution of the SOA as the iteration variable. It is also found that the photon iterative method has almost the same computing efficiency for conventional SOAs and GCSOAs.
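
As a loose, much-simplified illustration of iterating on a photon quantity rather than on the carrier density, the sketch below solves the classic saturated-gain relation G = G0 * exp(-(G - 1) * P_in / P_sat) by damped fixed-point iteration on the gain. The parameter values are made up, and the paper's full treatment of the ASE spectrum, lasing mode and GCSOA structure is not modelled.

```python
import numpy as np

def saturated_gain(p_in_mw, g0_db=25.0, p_sat_mw=10.0, tol=1e-9, max_iter=500):
    """Fixed-point iteration on the amplifier gain G for the standard
    saturation relation G = G0 * exp(-(G - 1) * P_in / P_sat).
    Iterating on the optical output (via G) rather than on the carrier
    density is the spirit of the 'photon iterative' approach."""
    g0 = 10 ** (g0_db / 10.0)          # linear unsaturated gain
    g = g0                              # start from the small-signal gain
    for _ in range(max_iter):
        g_new = g0 * np.exp(-(g - 1.0) * p_in_mw / p_sat_mw)
        if abs(g_new - g) < tol:
            break
        # Under-relaxed update keeps the iteration stable in deep saturation.
        g = 0.8 * g + 0.2 * g_new
    return g

for p_in in (0.001, 0.1, 1.0, 5.0):
    g = saturated_gain(p_in)
    print(f"P_in = {p_in:6.3f} mW  ->  gain = {10 * np.log10(g):5.2f} dB, "
          f"P_out = {g * p_in:7.3f} mW")
```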

Relevance:

30.00%

Publisher:

Abstract:

An optimization method based on uniform design in conjunction with a genetic algorithm is described. In the proposed method, the uniform design technique was applied to the design of the starting experiments, which reduces the number of experiments compared with traditional simultaneous methods such as simplex, and a genetic algorithm was used in the optimization procedure, which speeds up the search for the optimum. The hierarchical chromatographic response function was modified to evaluate the separation quality of a chromatogram. An iterative procedure was adopted to search for the optimal conditions, improving the accuracy of the predicted retention and the quality of the chromatogram. The procedure was tested on the optimization of the chromatographic separation of 11 alkaloids in reversed-phase ion-pair chromatography, and a satisfactory optimum was obtained. (C) 2003 Elsevier B.V. All rights reserved.
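
A toy sketch of the overall strategy (uniform-design-style starting points refined by a simple genetic algorithm) is given below. The objective function is a made-up stand-in for the modified hierarchical chromatographic response function, and the "uniform design" here is only a crude evenly spread grid, not a proper U-table.

```python
import random

random.seed(1)

def response(x):
    """Hypothetical stand-in for the chromatographic response function:
    higher is better, optimum near factor settings (0.3, 0.7)."""
    a, b = x
    return -((a - 0.3) ** 2 + (b - 0.7) ** 2)

def uniform_design(n_points, n_factors=2):
    """Very rough uniform-design-style starting experiments: levels spread
    evenly over [0, 1] in each factor."""
    levels = [(i + 0.5) / n_points for i in range(n_points)]
    return [tuple(levels[(i * (f + 1)) % n_points] for f in range(n_factors))
            for i in range(n_points)]

def genetic_search(pop, n_gen=40, mut=0.1):
    """Simple GA: keep the better half, breed children by averaging parents
    plus Gaussian mutation, clipped to the design region [0, 1]."""
    for _ in range(n_gen):
        pop.sort(key=response, reverse=True)
        parents = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = tuple(min(1.0, max(0.0, (a + b) / 2 + random.gauss(0, mut)))
                          for a, b in zip(p1, p2))
            children.append(child)
        pop = parents + children
    return max(pop, key=response)

start = uniform_design(10)            # designed starting experiments
best = genetic_search(list(start))    # GA refines toward the optimum
print("best conditions:", [round(v, 3) for v in best],
      "response:", round(response(best), 4))
```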

Relevance:

30.00%

Publisher:

Abstract:

M. Galea and Q. Shen. Iterative vs Simultaneous Fuzzy Rule Induction. Proceedings of the 14th International Conference on Fuzzy Systems, pages 767-772.

Relevance:

30.00%

Publisher:

Abstract:

Although many feature selection methods for classification have been developed, there is a need to identify genes in high-dimensional data with censored survival outcomes. Traditional methods for gene selection in classification problems have several drawbacks. First, the majority of gene selection approaches for classification are single-gene based. Second, many of the gene selection procedures are not embedded within the algorithm itself. The technique of random forests has been found to perform well in high-dimensional data settings with survival outcomes, and it has an embedded feature to identify variables of importance. It is therefore an ideal candidate for gene selection in high-dimensional data with survival outcomes. In this paper, we develop a novel method based on random forests to identify a set of prognostic genes. We compare our method with several machine learning methods and various node-split criteria using several real data sets. Our method performed well in both simulations and real data analysis. Additionally, we have shown the advantages of our approach over single-gene-based approaches. Our method incorporates multivariate correlations in microarray data for survival outcomes, allowing us to better utilize the information available from microarray data with survival outcomes.
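
A minimal sketch of the general approach follows, assuming the scikit-survival package (`RandomSurvivalForest`, `Surv`) and scikit-learn's permutation importance as stand-ins for the paper's forest and node-split criteria; the data are simulated and the paper's actual gene-selection procedure is not reproduced.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest   # assumes scikit-survival is installed
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Simulated "microarray" data: 200 samples, 50 genes, only the first 3 informative.
n, p = 200, 50
X = rng.standard_normal((n, p))
risk = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2]
time = rng.exponential(scale=np.exp(-risk))               # higher risk -> shorter survival
censor = rng.exponential(scale=np.exp(-risk).mean(), size=n)
event = time <= censor
y = Surv.from_arrays(event=event, time=np.minimum(time, censor))

# Random survival forest: an embedded, multivariate alternative to
# single-gene filtering, as discussed in the abstract.
rsf = RandomSurvivalForest(n_estimators=100, min_samples_leaf=10, random_state=0)
rsf.fit(X, y)

# Rank genes by permutation importance of the concordance index.
imp = permutation_importance(rsf, X, y, n_repeats=3, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top-ranked gene indices:", top)
```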

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
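
The sketch below shows a generic POCS loop of the kind the abstract describes: it alternates a coil-sensitivity consistency step with a k-space data-consistency step on a toy Cartesian, two-coil example. It is not the POCSMUSE implementation; the coil geometry, sampling mask and iteration count are all made up.

```python
import numpy as np

def kspace_projection(img, sens, kspace, mask):
    """Data-consistency projection: keep acquired k-space samples, fill the
    rest from the current image estimate."""
    k_model = np.fft.fft2(sens * img[None], axes=(-2, -1))
    return np.where(mask[None], kspace, k_model)

def pocs_recon(kspace, mask, sens, n_iter=30):
    """Minimal POCS loop.
    kspace: (n_coils, ny, nx) acquired data (zeros where not sampled)
    mask:   (ny, nx) boolean sampling mask
    sens:   (n_coils, ny, nx) coil sensitivity profiles
    """
    img = np.zeros(kspace.shape[1:], dtype=complex)
    for _ in range(n_iter):
        coil_imgs = np.fft.ifft2(kspace_projection(img, sens, kspace, mask), axes=(-2, -1))
        # Sensitivity-consistency step: coil images must equal sens * img,
        # so combine the coil estimates with a sensitivity-weighted average.
        img = (np.sum(np.conj(sens) * coil_imgs, axis=0)
               / (np.sum(np.abs(sens) ** 2, axis=0) + 1e-8))
    return img

# Toy usage with hypothetical 2 coils and a half-sampled Cartesian grid.
ny = nx = 64
phantom = np.outer(np.hanning(ny), np.hanning(nx)).astype(complex)
sens = np.stack([np.ones((ny, nx)),
                 np.linspace(0.5, 1.5, nx)[None] * np.ones((ny, 1))])
mask = np.zeros((ny, nx), dtype=bool)
mask[:, ::2] = True                      # every other k-space line acquired
kspace = np.fft.fft2(sens * phantom[None], axes=(-2, -1)) * mask[None]
recon = pocs_recon(kspace, mask, sens)
```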

Relevance:

30.00%

Publisher:

Abstract:

A regularized algorithm for the recovery of band-limited signals from noisy data is described. The regularization is characterized by a single parameter. Iterative and non-iterative implementations of the algorithm are shown to have useful properties, the former offering the advantage of flexibility and the latter a potential for rapid data processing. Comparative results, using experimental data obtained in laser anemometry studies with a photon correlator, are presented both with and without regularization. © 1983 Taylor & Francis Ltd.
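
A minimal sketch of the iterative flavour of such a scheme follows, in the spirit of Papoulis-Gerchberg band-limited extrapolation with a single relaxation parameter `mu` standing in for the paper's regularization parameter; the paper's exact operator and its non-iterative implementation are not reproduced.

```python
import numpy as np

def bandlimited_recover(measured, known, band, n_iter=200, mu=0.9):
    """Iterative recovery of a band-limited signal from noisy, partial samples.
    measured: observed samples (meaningful only where `known` is True)
    known:    boolean mask of observed samples
    band:     boolean mask of retained frequency bins (the band limit)
    mu:       single relaxation/regularization parameter (0 < mu <= 1)
    """
    x = np.where(known, measured, 0.0)
    for _ in range(n_iter):
        # Project onto the set of band-limited signals.
        x_bl = np.fft.ifft(np.fft.fft(x) * band).real
        # Regularized re-insertion of the measured samples.
        x = np.where(known, (1 - mu) * x_bl + mu * measured, x_bl)
    return x

# Toy usage: a low-pass signal observed on ~40% of samples with noise.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
clean = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)
known = rng.random(n) < 0.4
measured = clean + 0.05 * rng.standard_normal(n)
band = np.zeros(n, dtype=bool)
band[:12] = band[-11:] = True            # keep low frequencies only
recovered = bandlimited_recover(measured, known, band)
print("rms error:", np.sqrt(np.mean((recovered - clean) ** 2)))
```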

Relevance:

30.00%

Publisher:

Abstract:

Image inpainting refers to restoring a damaged image with missing information. The total variation (TV) inpainting model is one such method: it simultaneously fills in the missing regions with available information from their surroundings and eliminates noise. The method works well with small, narrow inpainting domains. However, there remains an urgent need to develop fast iterative solvers, as the underlying problem sizes are large. In addition, one needs to tackle the imbalance between inpainting and denoising: when the inpainting regions are thick and large, the inpainting procedure works quite slowly, usually requires a significant number of iterations, and inevitably leads to oversmoothing outside the inpainting domain. To overcome these difficulties, we propose a solution for the TV inpainting model based on a nonlinear multigrid algorithm.
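
For orientation, the sketch below runs the slow baseline: explicit gradient-descent iterations on the TV inpainting model with a made-up mask, fidelity weight and step size. This is exactly the kind of solver whose cost motivates a faster nonlinear multigrid method; the multigrid scheme itself is not reproduced here.

```python
import numpy as np

def tv_inpaint(image, known, n_iter=500, lam=5.0, dt=0.1, eps=1e-2):
    """Explicit gradient-descent iterations for the TV inpainting model:
    minimize TV(u) + (lam/2) * ||u - image||^2 on the known pixels."""
    u = np.where(known, image, image[known].mean())
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        # Divergence of the normalized gradient: the TV curvature term.
        curv = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        fidelity = lam * known * (image - u)       # data term only on known pixels
        u = u + dt * (curv + fidelity)
    return u

# Toy usage: damage a smooth ramp image inside a hypothetical square hole.
ny = nx = 64
image = np.outer(np.linspace(0, 1, ny), np.ones(nx))
known = np.ones((ny, nx), dtype=bool)
known[24:40, 24:40] = False                        # inpainting domain
noise = 0.02 * np.random.default_rng(0).standard_normal((ny, nx))
damaged = np.where(known, image + noise, 0.0)
restored = tv_inpaint(damaged, known)
```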

Relevance:

30.00%

Publisher:

Abstract:

To achieve higher flexibility and to better satisfy actual customer requirements, there is an increasing tendency to develop and deliver software in an incremental fashion. In adopting this process, requirements are delivered in releases, so a decision has to be made on which requirements should be delivered in which release. Three main considerations that need to be taken into account are the technical precedences inherent in the requirements, the typically conflicting priorities as determined by the representative stakeholders, as well as the balance between required and available effort. The technical precedence constraints relate to situations where one requirement cannot be implemented until another is completed, or where one requirement has to be implemented in the same increment as another. Stakeholder preferences may be based on the perceived value or urgency of delivered requirements to the different stakeholders involved. The technical priorities and individual stakeholder priorities may be in conflict and difficult to reconcile. This paper provides (i) a method for optimally allocating requirements to increments; (ii) a means of assessing and optimizing the degree to which the ordering conflicts with stakeholder priorities within technical precedence constraints; (iii) a means of balancing required and available resources for all increments; and (iv) an overall method called EVOLVE aimed at the continuous planning of incremental software development. The optimization method used is iterative and essentially based on a genetic algorithm. A set of the most promising candidate solutions is generated to support the final decision. The paper evaluates the proposed approach using a sample project.
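
A toy sketch of the underlying idea (a genetic algorithm searching over assignments of requirements to increments, penalizing precedence violations and effort overruns while rewarding early delivery of high-value requirements) is given below. The problem instance and fitness weights are invented; this is not the EVOLVE formulation itself.

```python
import random

random.seed(0)

# Hypothetical problem instance.
N_REQ, N_INC = 8, 3
effort = [4, 3, 5, 2, 6, 3, 4, 2]
value = [9, 7, 8, 3, 6, 5, 4, 2]            # stakeholder-weighted value per requirement
capacity = [10, 10, 9]                       # available effort per increment
precedes = [(0, 2), (1, 4), (3, 5)]          # (a, b): a must come no later than b

def fitness(assign):
    """Reward early delivery of valuable requirements; penalize violations."""
    score = sum(value[r] * (N_INC - assign[r]) for r in range(N_REQ))
    for a, b in precedes:
        if assign[a] > assign[b]:
            score -= 50                                              # precedence violation
    for inc in range(N_INC):
        used = sum(effort[r] for r in range(N_REQ) if assign[r] == inc)
        score -= 20 * max(0, used - capacity[inc])                   # effort overrun
    return score

def evolve(pop_size=40, n_gen=100, mut=0.2):
    """Simple GA over requirement-to-increment assignments."""
    pop = [[random.randrange(N_INC) for _ in range(N_REQ)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_REQ)
            child = p1[:cut] + p2[cut:]                              # one-point crossover
            if random.random() < mut:
                child[random.randrange(N_REQ)] = random.randrange(N_INC)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("increment per requirement:", best, "fitness:", fitness(best))
```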

Relevance:

30.00%

Publisher:

Abstract:

This paper considers a Q-ary orthogonal direct-sequence code-division multiple-access (DS-CDMA) system with high-rate space-time linear dispersion codes (LDCs) in time-varying Rayleigh fading multiple-input-multiple-output (MIMO) channels. We propose a joint multiuser detection, LDC decoding, Q-ary demodulation, and channel-decoding algorithm and apply the turbo processing principle to improve system performance in an iterative fashion. The proposed iterative scheme demonstrates faster convergence and superior performance compared with the V-BLAST-based DS-CDMA system and is shown to approach the single-user performance bound. We also show that the CDMA system is able to exploit the time diversity offered by the LDCs in rapid-fading channels.
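
The turbo-processing loop can be illustrated in miniature: two soft modules exchange only extrinsic information over a few iterations. The sketch below uses a hypothetical two-user channel with a soft interference-cancelling detector and a length-3 repetition "code" as stand-ins for the paper's multiuser detector, LDC decoder and channel decoder; it is a structural sketch, not the proposed receiver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-user system: matched-filter outputs with cross-correlation rho,
# each user protecting one info bit with a length-3 repetition code.
rho, sigma = 0.7, 0.6
n_rep, n_users = 3, 2
info_bits = rng.integers(0, 2, size=n_users)
symbols = 1.0 - 2.0 * np.repeat(info_bits[:, None], n_rep, axis=1)   # BPSK, (users, reps)

# Received statistics: each user's output sees the other user scaled by rho.
y = symbols + rho * symbols[::-1] + sigma * rng.standard_normal(symbols.shape)

L_dec_ext = np.zeros_like(y)            # extrinsic LLRs fed back from the decoders
for _ in range(5):                      # turbo iterations
    # Soft-interference-cancelling detector, using decoder extrinsics as a priori.
    soft_other = np.tanh(L_dec_ext[::-1] / 2.0)            # E[s_other]
    L_det = 2.0 * (y - rho * soft_other) / sigma ** 2      # detector output LLRs
    # Repetition decoder: extrinsic LLR for each copy = sum of the other copies.
    L_dec_ext = L_det.sum(axis=1, keepdims=True) - L_det   # exchange extrinsic info only

decisions = (L_det.sum(axis=1) < 0).astype(int)            # LLR < 0 -> bit 1
print("info bits:", info_bits, "decisions:", decisions)
```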

Relevance:

30.00%

Publisher:

Abstract:

A pipelined array multiplier which has been derived by applying 'systolic array' principles at the bit level is described. Attention is focused on a circuit which is used to multiply streams of parallel unsigned data. An algorithm is then given which demonstrates that, with only a simple modification to the basic cell, the same array can cope with two's complement numbers. The resulting structure has a number of features which make it attractive for LSI and VLSI implementation. These include regularity and modularity.
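
A behavioural software model of the bit-level array is sketched below: AND gates form the partial-product bits and rows of full-adder cells accumulate them, mirroring the array's dataflow. The pipelining, the systolic timing and the two's-complement cell modification are not modelled; the check values are arbitrary.

```python
import random

def full_adder(a, b, c):
    """One array cell: a 1-bit full adder returning (sum, carry)."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def array_multiply(x, y, n_bits=8):
    """Bit-level model of an n x n unsigned array multiplier: partial-product
    bits are summed row by row with full-adder cells."""
    xb = [(x >> i) & 1 for i in range(n_bits)]
    yb = [(y >> i) & 1 for i in range(n_bits)]
    acc = [0] * (2 * n_bits)                    # running sum, one bit per weight
    for j, yj in enumerate(yb):                 # one row of cells per multiplier bit
        carry = 0
        for i, xi in enumerate(xb):
            pp = xi & yj                        # AND gate forms the partial-product bit
            acc[i + j], carry = full_adder(acc[i + j], pp, carry)
        k = j + n_bits                          # ripple the row's final carry upward
        while carry:
            acc[k], carry = full_adder(acc[k], 0, carry)
            k += 1
    return sum(bit << i for i, bit in enumerate(acc))

random.seed(0)
checks = [(0, 0), (1, 255), (255, 255)] + [(random.randrange(256), random.randrange(256))
                                           for _ in range(500)]
assert all(array_multiply(a, b) == a * b for a, b in checks)
print("8-bit array multiplier model verified on", len(checks), "input pairs")
```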

Relevance:

30.00%

Publisher:

Abstract:

The iterative nature of turbo-decoding algorithms increases their complexity compared to conventional FEC decoding algorithms. The two iterative decoding algorithms, the Soft-Output Viterbi Algorithm (SOVA) and the Maximum A Posteriori Probability (MAP) algorithm, require complex decoding operations over several iteration cycles. So, for real-time implementation of turbo codes, reducing the decoder complexity while preserving bit-error-rate (BER) performance is an important design consideration. In this chapter, a modification to the Max-Log-MAP algorithm is presented: the extrinsic information exchanged between the constituent decoders is scaled. The remainder of this chapter is organized as follows. An overview of the turbo encoding and decoding processes, the MAP algorithm, and its simplified versions, the Log-MAP and Max-Log-MAP algorithms, is presented in Section 1. The extrinsic information scaling is introduced, simulation results are presented, and the performance of different methods of choosing the best scaling factor is discussed in Section 2. Section 3 discusses trends and applications of turbo coding from the perspective of wireless applications.
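
The two ingredients discussed here, the max-log approximation and extrinsic-information scaling, can be shown numerically. The sketch below compares the exact Jacobian logarithm with its max-log approximation and applies a scaling factor (0.7 is a commonly quoted value; the chapter examines how to choose it) to hypothetical extrinsic LLRs. The full SOVA/MAP turbo decoder is not reproduced.

```python
import numpy as np

def max_star(a, b):
    """Exact Jacobian logarithm used by the Log-MAP algorithm."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drop the correction term."""
    return max(a, b)

# The dropped correction term makes Max-Log-MAP extrinsic LLRs over-confident;
# the error is largest when the two path metrics are close.
for a, b in [(2.0, -1.0), (1.2, 1.0), (0.1, 0.0)]:
    print(f"max*({a}, {b}) = {max_star(a, b):.3f}   max-log -> {max_log(a, b):.3f}")

def exchange_extrinsic(L_extrinsic, scale=0.7):
    """Scale the extrinsic information before handing it to the other
    constituent decoder, compensating for the max-log over-confidence."""
    return scale * np.asarray(L_extrinsic)

L_ext_dec1 = np.array([3.1, -4.2, 0.8, -0.3])      # hypothetical extrinsic LLRs
print("a priori passed to decoder 2:", exchange_extrinsic(L_ext_dec1))
```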

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2015

Relevance:

30.00%

Publisher:

Abstract:

Decimal multiplication is an integral part of financial, commercial, and internet-based computations. A novel design for single-digit decimal multiplication that reduces the critical path delay and area of an iterative multiplier is proposed in this research. The partial products are generated using single-digit multipliers and are accumulated based on a novel RPS algorithm. This design uses n single-digit multipliers for an n × n digit multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial-product generation for the next set of inputs are performed simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended for decimal floating-point multiplication.
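
A behavioural sketch of iterative digit-serial decimal multiplication is given below: each "cycle" multiplies all digits of one operand by a single digit of the other and accumulates the shifted partial product. The RPS accumulation scheme, BCD encoding, pipelining and overlap of cycles in the actual design are not modelled.

```python
def decimal_multiply(a_digits, b_digits):
    """Behavioural model of an iterative n-digit decimal multiplier.
    a_digits, b_digits: least-significant-first lists of decimal digits.
    Each cycle uses n single-digit multiplies (one per digit of A) against
    one digit of B and accumulates the shifted partial product."""
    n = len(a_digits)
    acc = [0] * (2 * n)                      # accumulated product, one decimal digit per slot
    for cycle, b in enumerate(b_digits):     # one iteration per multiplier digit
        carry = 0
        for i, a in enumerate(a_digits):
            total = acc[cycle + i] + a * b + carry    # single-digit multiply + accumulate
            acc[cycle + i] = total % 10
            carry = total // 10
        k = cycle + n                        # ripple the remaining carry upward
        while carry:
            total = acc[k] + carry
            acc[k] = total % 10
            carry = total // 10
            k += 1
    return acc

def to_digits(x, n):
    return [(x // 10 ** i) % 10 for i in range(n)]

a, b = 9876, 5432
digits = decimal_multiply(to_digits(a, 4), to_digits(b, 4))
value = sum(d * 10 ** i for i, d in enumerate(digits))
assert value == a * b
print(a, "*", b, "=", value)
```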

Relevance:

30.00%

Publisher:

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
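
The rigid-motion estimation at the heart of the ICP component can be sketched compactly: nearest-neighbour matching alternated with the closed-form SVD (Kabsch) alignment. The sketch below assumes NumPy and SciPy, uses a synthetic point cloud in place of stereo depth data, and omits the normal flow constraint and the hybrid weighting described in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation/translation (Kabsch) aligning src to dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, n_iter=30):
    """Basic point-to-point ICP: alternate nearest-neighbour matching with
    the closed-form rigid alignment, accumulating the total motion."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                      # nearest neighbours in the target
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy usage: recover a known small "head rotation" applied to a random point cloud.
rng = np.random.default_rng(0)
model = rng.standard_normal((500, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
observed = model @ R_true.T + np.array([0.05, -0.02, 0.1])
R_est, t_est = icp(model, observed)
cos_err = np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1.0, 1.0)
print("rotation error (deg):", np.rad2deg(np.arccos(cos_err)))
```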