966 results for iterative algorithm
Abstract:
Although many feature selection methods for classification have been developed, there is a need to identify genes in high-dimensional data with censored survival outcomes. Traditional methods for gene selection in classification problems have several drawbacks. First, the majority of the gene selection approaches for classification are single-gene based. Second, many of the gene selection procedures are not embedded within the algorithm itself. The technique of random forests has been found to perform well in high-dimensional data settings with survival outcomes. It also has an embedded feature to identify variables of importance. Therefore, it is an ideal candidate for gene selection in high-dimensional data with survival outcomes. In this paper, we develop a novel method based on random forests to identify a set of prognostic genes. We compare our method with several machine learning methods and various node split criteria using several real data sets. Our method performed well in both simulations and real data analysis. Additionally, we have shown the advantages of our approach over single-gene-based approaches. Our method incorporates multivariate correlations in microarray data for survival outcomes. The described method allows us to better utilize the information available from microarray data with survival outcomes.
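For concreteness, a minimal sketch of this kind of embedded, multivariate gene selection is given below, assuming the scikit-survival package; the synthetic data, estimator settings, and permutation-based importance measure are generic stand-ins rather than the authors' exact procedure.

```python
# Minimal sketch: random survival forest on (synthetic) high-dimensional
# expression data with censored outcomes, ranking genes by permutation
# importance. Assumes scikit-survival; not the authors' exact method.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))                  # expression matrix
time = rng.exponential(scale=10.0, size=n_samples)         # survival times
event = rng.integers(0, 2, size=n_samples).astype(bool)    # event indicator
y = Surv.from_arrays(event=event, time=time)               # censored outcome

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X, y)

# Embedded, multivariate importance: permute one gene at a time and record
# the drop in the forest's concordance index.
imp = permutation_importance(rsf, X, y, n_repeats=5, random_state=0)
top_genes = np.argsort(imp.importances_mean)[::-1][:10]
print("candidate prognostic genes (column indices):", top_genes)
```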
Abstract:
PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
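The projection framework itself is simple to state in code. The toy sketch below alternates between two generic convex constraints (acquired k-space samples and a known image support); it illustrates the POCS iteration only and does not include the coil-sensitivity constraint that defines POCSMUSE.

```python
# Generic POCS iteration: alternately enforce data consistency in k-space
# and a support constraint in image space. Illustration of the projection
# framework only, not POCSMUSE itself.
import numpy as np

def pocs_reconstruct(kspace_meas, sample_mask, support_mask, n_iter=50):
    """kspace_meas: measured k-space (zeros where not sampled)
       sample_mask: True where k-space was actually acquired
       support_mask: True inside the known object support"""
    img = np.zeros_like(kspace_meas)
    for _ in range(n_iter):
        # Projection 1: replace acquired k-space samples with the measurements.
        k = np.fft.fft2(img)
        k[sample_mask] = kspace_meas[sample_mask]
        img = np.fft.ifft2(k)
        # Projection 2: enforce the image-support constraint.
        img = img * support_mask
    return img
```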
Abstract:
A regularized algorithm for the recovery of band-limited signals from noisy data is described. The regularization is characterized by a single parameter. Iterative and non-iterative implementations of the algorithm are shown to have useful properties, the former offering the advantage of flexibility and the latter a potential for rapid data processing. Comparative results, using experimental data obtained in laser anemometry studies with a photon correlator, are presented both with and without regularization. © 1983 Taylor & Francis Ltd.
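One common way to realize such an iterative implementation is a Gerchberg-Papoulis-type scheme, sketched below with a single relaxation parameter mu standing in for the regularization parameter; this is a generic illustration rather than the exact algorithm of the paper.

```python
# Gerchberg-Papoulis-style iterative recovery of a band-limited signal from
# noisy samples, controlled by a single relaxation/regularization parameter mu.
import numpy as np

def bandlimited_recover(y, known, band_mask, mu=0.8, n_iter=200):
    """y: noisy data (values outside `known` are ignored)
       known: boolean mask of observed samples
       band_mask: boolean mask of retained frequency bins (the band limit)
       mu: relaxation/regularization parameter in (0, 1]"""
    x = np.where(known, y, 0.0)
    for _ in range(n_iter):
        # Project onto the set of band-limited signals.
        X = np.fft.fft(x)
        x_bl = np.fft.ifft(X * band_mask).real
        # Relaxed data-consistency step on the observed samples.
        x = x_bl + mu * known * (y - x_bl)
    return x
```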
Abstract:
Image inpainting refers to restoring a damaged image with missing information. The total variation (TV) inpainting model is one such method: it simultaneously fills in the missing regions with available information from their surroundings and removes noise. The method works well with small, narrow inpainting domains. However, there remains an urgent need to develop fast iterative solvers, as the underlying problem sizes are large. In addition, one needs to tackle the imbalance of results between inpainting and denoising. When the inpainting regions are thick and large, the inpainting procedure converges slowly, usually requires a significant number of iterations, and inevitably leads to oversmoothing outside the inpainting domain. To overcome these difficulties, we propose a solution for the TV inpainting model based on a nonlinear multigrid algorithm.
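For reference, the sketch below is a plain single-level gradient-descent solver for a smoothed TV inpainting energy; it is precisely the kind of slow fixed-point iteration that a nonlinear multigrid method is meant to accelerate, not the multigrid algorithm itself.

```python
# Single-level gradient descent for smoothed TV inpainting: known pixels are
# kept (softly) faithful to the data, missing pixels are filled by minimizing
# total variation. Baseline only; not the proposed multigrid solver.
import numpy as np

def tv_inpaint(u0, mask, lam=10.0, eps=1e-3, tau=0.1, n_iter=2000):
    """u0: damaged image (float array), mask: True where data is available,
       lam: fidelity weight, eps: TV smoothing, tau: step size."""
    u = u0.copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        # Divergence of the normalized gradient = TV curvature term.
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        # Fidelity term acts only where data is available.
        u += tau * (div - lam * mask * (u - u0))
    return u
```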
Abstract:
To achieve higher flexibility and to better satisfy actual customer requirements, there is an increasing tendency to develop and deliver software in an incremental fashion. In adopting this process, requirements are delivered in releases and so a decision has to be made on which requirements should be delivered in which release. Three main considerations that need to be taken into account are the technical precedences inherent in the requirements, the typically conflicting priorities as determined by the representative stakeholders, and the balance between required and available effort. The technical precedence constraints relate to situations where one requirement cannot be implemented until another is completed or where one requirement is implemented in the same increment as another one. Stakeholder preferences may be based on the perceived value or urgency of delivered requirements to the different stakeholders involved. The technical priorities and individual stakeholder priorities may be in conflict and difficult to reconcile. This paper provides (i) a method for optimally allocating requirements to increments; (ii) a means of assessing and optimizing the degree to which the ordering conflicts with stakeholder priorities within technical precedence constraints; (iii) a means of balancing required and available resources for all increments; and (iv) an overall method called EVOLVE aimed at the continuous planning of incremental software development. The optimization method used is iterative and essentially based on a genetic algorithm. A set of the most promising candidate solutions is generated to support the final decision. The paper evaluates the proposed approach using a sample project.
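A toy genetic-algorithm formulation of the increment-assignment problem is sketched below; the requirement data, penalty weights, and operators are hypothetical simplifications and omit most of what EVOLVE does (for example, it returns a single solution rather than a set of candidates).

```python
# Toy GA for assigning requirements to increments: maximize stakeholder value
# of early delivery subject to per-increment effort caps and precedence
# constraints (handled here by penalties). All data and weights are hypothetical.
import random

EFFORT   = [4, 3, 5, 2, 6, 1]           # effort per requirement
VALUE    = [9, 7, 8, 3, 6, 2]           # aggregated stakeholder priority
PRECEDES = [(0, 2), (1, 4)]             # (a, b): a must come no later than b
N_INC, CAP = 3, 8                       # number of increments, effort cap each

def fitness(assign):
    score = sum(VALUE[r] * (N_INC - assign[r]) for r in range(len(assign)))
    for inc in range(N_INC):                       # effort-balance penalty
        load = sum(EFFORT[r] for r in range(len(assign)) if assign[r] == inc)
        if load > CAP:
            score -= 50 * (load - CAP)
    for a, b in PRECEDES:                          # precedence penalty
        if assign[a] > assign[b]:
            score -= 100
    return score

def evolve(pop_size=40, gens=200, pmut=0.2):
    pop = [[random.randrange(N_INC) for _ in EFFORT] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(EFFORT))
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if random.random() < pmut:             # mutation: reassign one requirement
                child[random.randrange(len(EFFORT))] = random.randrange(N_INC)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print("best assignment (requirement -> increment):", evolve())
```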
Abstract:
This paper considers a Q-ary orthogonal direct-sequence code-division multiple-access (DS-CDMA) system with high-rate space-time linear dispersion codes (LDCs) in time-varying Rayleigh fading multiple-input-multiple-output (MIMO) channels. We propose a joint multiuser detection, LDC decoding, Q-ary demodulation, and channel-decoding algorithm and apply the turbo processing principle to improve system performance in an iterative fashion. The proposed iterative scheme demonstrates faster convergence and superior performance compared with the V-BLAST-based DS-CDMA system and is shown to approach the single-user performance bound. We also show that the CDMA system is able to exploit the time diversity offered by the LDCs in rapid-fading channels.
Abstract:
A pipelined array multiplier, derived by applying 'systolic array' principles at the bit level, is described. Attention is focused on a circuit used to multiply streams of parallel unsigned data. An algorithm is then given which demonstrates that, with only a simple modification to the basic cell, the same array can cope with two's complement numbers. The resulting structure has a number of features which make it attractive for LSI and VLSI implementation, including regularity and modularity.
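A purely behavioral Python model of the underlying bit-level array (partial-product bits generated by AND gates and reduced by full adders) is given below for the unsigned case; the pipeline registers and systolic timing of the actual circuit, and the two's-complement modification, are not modeled.

```python
# Behavioral model of an unsigned bit-level array multiplier: each cell ANDs
# one multiplicand bit with one multiplier bit and adds it into the running
# accumulator through a full adder. Pipelining/systolic timing omitted.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def array_multiply_unsigned(a_bits, b_bits):
    """a_bits, b_bits: LSB-first lists of 0/1. Returns LSB-first product bits."""
    n, m = len(a_bits), len(b_bits)
    acc = [0] * (n + m)
    for j, bj in enumerate(b_bits):           # one row of the array per multiplier bit
        carry = 0
        for i, ai in enumerate(a_bits):
            acc[i + j], carry = full_adder(acc[i + j], ai & bj, carry)
        k = j + n
        while carry:                          # ripple the final carry of the row
            acc[k], carry = full_adder(acc[k], carry, 0)
            k += 1
    return acc

def to_bits(x, n):                            # LSB-first helper
    return [(x >> k) & 1 for k in range(n)]

def from_bits(bits):
    return sum(b << k for k, b in enumerate(bits))

assert from_bits(array_multiply_unsigned(to_bits(13, 4), to_bits(11, 4))) == 143
```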
Abstract:
The iterative nature of turbo-decoding algorithms increases their complexity compared to conventional FEC decoding algorithms. The two iterative decoding algorithms, the Soft-Output Viterbi Algorithm (SOVA) and the Maximum A Posteriori Probability (MAP) algorithm, require complex decoding operations over several iteration cycles. So, for real-time implementation of turbo codes, reducing the decoder complexity while preserving bit-error-rate (BER) performance is an important design consideration. In this chapter, a modification to the Max-Log-MAP algorithm is presented. The modification scales the extrinsic information exchanged between the constituent decoders. The remainder of this chapter is organized as follows: an overview of the turbo encoding and decoding processes, the MAP algorithm, and its simplified versions, the Log-MAP and Max-Log-MAP algorithms, is presented in Section 1. The extrinsic information scaling is introduced, simulation results are presented, and the performance of different methods to choose the best scaling factor is discussed in Section 2. Section 3 discusses trends and applications of turbo coding from the perspective of wireless applications.
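In code, the scaling amounts to a single multiplication in the extrinsic-exchange step between the constituent decoders, as in the fragment below; the variable names and the example scaling factor are illustrative placeholders, not a complete decoder.

```python
# Extrinsic-information exchange in a turbo decoder with a scaled Max-Log-MAP
# constituent decoder. Names and the example factor are placeholders; choosing
# the factor is the topic of Section 2.
SCALE = 0.7   # example scaling factor applied to the extrinsic LLRs

def scaled_extrinsic(llr_aposteriori, llr_apriori, llr_channel, scale=SCALE):
    """Extrinsic LLR handed to the other constituent decoder."""
    extrinsic = llr_aposteriori - llr_apriori - llr_channel
    return scale * extrinsic
```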
Abstract:
Thesis (Master's)--University of Washington, 2015
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. A novel design for single-digit decimal multiplication that reduces the critical path delay and area of an iterative multiplier is proposed in this research. The partial products are generated using single-digit multipliers and are accumulated based on a novel RPS algorithm. This design uses n single-digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are done simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended to decimal floating-point multiplication.
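The digit-serial dataflow (one multiplier digit per cycle, with n single-digit products formed in parallel and accumulated with a decimal shift) can be modeled behaviourally as below; the model deliberately does not reproduce the RPS accumulation scheme or the overlap between final accumulation and the next operand's first iteration.

```python
# Behavioral model of iterative decimal multiplication: one multiplier digit
# is consumed per "cycle", its n single-digit products are formed, and the
# shifted partial product is accumulated. Dataflow illustration only; this is
# not an implementation of the RPS accumulation.
def bcd_digits(x, n):              # least-significant digit first
    return [(x // 10**k) % 10 for k in range(n)]

def iterative_decimal_multiply(a, b, n):
    a_d = bcd_digits(a, n)
    acc = 0
    for cycle, b_digit in enumerate(bcd_digits(b, n)):
        # n single-digit multipliers work in parallel on a_d and b_digit.
        partial = sum(a_d[i] * b_digit * 10**i for i in range(n))
        acc += partial * 10**cycle          # decimal shift-and-accumulate
    return acc

assert iterative_decimal_multiply(1234, 5678, 4) == 1234 * 5678
```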
Abstract:
In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
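For reference, a minimal point-to-point ICP step (nearest-neighbour correspondences followed by a closed-form SVD rigid fit) is sketched below; the hybrid tracker described here additionally fuses the normal flow constraint, which is not shown.

```python
# Minimal point-to-point ICP: iterate nearest-neighbour matching and a
# closed-form (SVD/Kabsch) rigid alignment. Reference sketch only; the hybrid
# tracker in the text also incorporates the normal flow constraint.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=20):
    """src, dst: (N,3) and (M,3) point clouds. Returns R, t aligning src to dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # closest-point correspondences
        matched = dst[idx]
        # Closed-form rigid fit between the moved source and its matches.
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        # Compose the incremental transform with the running estimate.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```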
Abstract:
Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
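Both algorithms are stochastic-approximation iterations of the form covered by the theorem; for instance, tabular Q-learning reduces to the one-line update in the sketch below (the environment interface and hyperparameters are placeholders).

```python
# Tabular Q-learning: a stochastic-approximation iteration of the kind covered
# by the convergence theorem. The environment interface (reset/step/sample_action)
# and the hyperparameters are placeholders.
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = env.sample_action() if np.random.rand() < eps else int(Q[s].argmax())
            s2, r, done = env.step(a)
            # Watkins' update: move Q(s,a) toward the bootstrapped target.
            target = r + gamma * (0.0 if done else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```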
Abstract:
We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on some LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
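The general setup can be sketched with SciPy as follows: a symmetric indefinite augmented (KKT) system solved by MINRES with a block-diagonal preconditioner applied through a LinearOperator. The matrix sizes and the particular block-diagonal choice below are illustrative assumptions, not the preconditioner studied in the paper.

```python
# Symmetric indefinite augmented system  [D  A^T; A  0]  solved with MINRES
# and an illustrative block-diagonal preconditioner diag(D, A D^{-1} A^T).
# Sizes and preconditioner choice are assumptions, not the paper's method.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m, n = 30, 80
rng = np.random.default_rng(0)
A = sp.random(m, n, density=0.3, random_state=0, format="csr")
d = rng.uniform(0.5, 2.0, n)                         # e.g. the IPM scaling matrix
K = sp.bmat([[sp.diags(d), A.T], [A, None]], format="csr")
rhs = rng.normal(size=n + m)

S = (A @ sp.diags(1.0 / d) @ A.T).toarray()          # small Schur complement

def apply_prec(r):
    # Apply diag(D, S)^{-1} blockwise to the residual.
    return np.concatenate([r[:n] / d, np.linalg.solve(S, r[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, rhs, M=M)
print("MINRES exit flag:", info)                     # 0 means converged
```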
Abstract:
An algorithm is presented for the generation of molecular models of defective graphene fragments, containing a majority of 6-membered rings with a small number of 5- and 7-membered rings as defects. The structures are generated from an initial random array of points in 2D space, which are then subjected to Delaunay triangulation. The dual of the triangulation forms a Voronoi tessellation of polygons with a range of ring sizes. An iterative cycle of refinement, involving deletion and addition of points followed by further triangulation, is performed until the user-defined criteria for the number of defects are met. The array of points and connectivities is then converted to a molecular structure and subjected to geometry optimization using a standard molecular modeling package to generate final atomic coordinates. On the basis of molecular mechanics with minimization, this automated method can generate structures that conform to user-supplied criteria and avoid the potential bias associated with the manual building of structures. One application of the algorithm is the generation of structures for evaluating the reactivity of different defect sites. Ab initio electronic structure calculations on a representative structure indicate preferential fluorination close to 5-ring defects.
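The first stages of the construction (random points, Delaunay triangulation, and its Voronoi dual with the resulting ring-size statistics) can be reproduced with SciPy as sketched below; the iterative point insertion/deletion and the molecular-mechanics geometry optimization are not shown.

```python
# First stages of the construction: random 2D points, their Voronoi tessellation
# (the dual of the Delaunay triangulation), and a count of ring sizes among the
# bounded cells. The iterative refinement and geometry optimization are omitted.
import numpy as np
from scipy.spatial import Voronoi
from collections import Counter

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(200, 2))     # initial random array of points
vor = Voronoi(points)

ring_sizes = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if region and -1 not in region:                # keep only bounded polygons
        ring_sizes.append(len(region))

print("ring-size distribution:", Counter(ring_sizes))
# For a graphene-like fragment one would iterate (add/remove points and
# re-triangulate) until mostly 6-rings with a few 5- and 7-ring defects remain.
```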
Abstract:
We have favoured the variational (secular equation) method for the determination of the (ro-)vibrational energy levels of polyatomic molecules. We use predominantly the Watson Hamiltonian in normal coordinates and an associated given potential in the variational code 'Multimode'. The dominant cost is the construction and diagonalization of matrices of ever-increasing size. Here we address this problem, using perturbation theory to select dominant expansion terms within the Davidson-Liu iterative diagonalization method. Our chosen example is the twelve-mode molecule methanol, for which we have an ab initio representation of the potential which includes the internal rotational motion of the OH group relative to CH3. Our new algorithm allows us to obtain converged energy levels for matrices of dimensions in excess of 100 000.
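To fix ideas, a bare-bones Davidson iteration for the lowest eigenvalue of a symmetric matrix is sketched below; the perturbation-theory screening of expansion terms and the block Davidson-Liu variant used in Multimode are not included, and the test matrix is an arbitrary stand-in.

```python
# Bare-bones Davidson iteration for the lowest eigenvalue of a symmetric matrix.
# The perturbation-theory term selection and the block Davidson-Liu variant
# are not shown; the test matrix below is an arbitrary stand-in.
import numpy as np

def davidson_lowest(A, n_iter=50, tol=1e-8):
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 0))
    t = np.eye(n, 1)                       # start from the first unit vector
    for _ in range(n_iter):
        # Orthogonalize the new direction against the current subspace.
        t -= V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.hstack([V, t / norm])
        # Rayleigh-Ritz step in the growing subspace.
        H = V.T @ A @ V
        theta, s = np.linalg.eigh(H)
        theta0, s0 = theta[0], s[:, 0]
        x = V @ s0                         # Ritz vector
        r = A @ x - theta0 * x             # residual
        if np.linalg.norm(r) < tol:
            break
        # Diagonal (Davidson) preconditioner for the correction vector.
        t = (r / (theta0 - diag + 1e-12)).reshape(n, 1)
    return theta0, x

# Example: lowest eigenvalue of a diagonally dominant random symmetric matrix.
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 201.0)) + 1e-2 * rng.standard_normal((200, 200))
A = (A + A.T) / 2
val, vec = davidson_lowest(A)
print("Davidson lowest eigenvalue:", val, " reference:", np.linalg.eigvalsh(A)[0])
```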