980 results for Newton iteration
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested. Two approaches are based on a static partitioning of the data set and a third solution incorporates a dynamic load balancing policy.
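A minimal sketch of the KD-Tree idea, not the parallel formulation developed in this work: one k-Means update in which nearest-centroid assignment goes through a KD-Tree instead of an explicit point-by-centroid distance matrix. For brevity the tree is built over the centroids, whereas the techniques discussed above organise the data points themselves in a KD-Tree.

```python
# Sketch only: one Lloyd iteration with KD-Tree based nearest-centroid queries,
# avoiding an explicit n_points x k distance matrix.
import numpy as np
from scipy.spatial import cKDTree

def kmeans_step(points, centroids):
    """One k-Means update; assignment uses a KD-Tree built on the centroids."""
    tree = cKDTree(centroids)               # KD-Tree over the k centroids
    _, labels = tree.query(points, k=1)     # nearest centroid index per point
    new_centroids = np.array([
        points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(len(centroids))
    ])
    return new_centroids, labels

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))                       # synthetic data set
C = X[rng.choice(len(X), size=8, replace=False)]       # initial centroids
for _ in range(10):
    C, labels = kmeans_step(X, C)
```

A distributed-memory version would run this assignment step on each node's partition of X and combine the per-node sums before updating the centroids.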
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca 1 in 5000 and the capsaicin global minimum with a frequency of ca 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
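A minimal sketch of the multi-start quasi-Newton strategy described above, assuming a toy stand-in for the χ² agreement surface; the real objective compares simulated and observed powder diffraction profiles and is not reproduced here.

```python
# Hedged sketch: multi-start quasi-Newton (BFGS) minimisation of a stand-in
# chi-squared surface with many local minima; only the search strategy is shown.
import numpy as np
from scipy.optimize import minimize

def chi2(x):
    # Toy multi-minimum surface; global minimum is 0 at x = 0.
    return np.sum(x**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

rng = np.random.default_rng(1)
hits = 0
n_starts = 200
for _ in range(n_starts):
    x0 = rng.uniform(-4, 4, size=6)            # random trial parameter set
    res = minimize(chi2, x0, method="BFGS")    # quasi-Newton minimisation
    if res.fun < 1e-6:                         # reached the global minimum
        hits += 1
print(f"global minimum found in {hits}/{n_starts} starts")
```

Counting how many random starts reach the global minimum mirrors the "1 in 5000" and "1 in 10 000" frequencies reported in the abstract.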
Abstract:
When formulating least-cost poultry diets, ME concentration should be optimised by an iterative procedure, not entered as a fixed value. This iteration must calculate profit margins by taking into account the way in which feed intake and saleable outputs vary with ME concentration. In the case of broilers, adjustment of critical amino acid contents in direct proportion to ME concentration does not result in birds of equal fatness. To avoid an increase in fat deposition at higher energy levels, it is proposed that amino acid specifications should be adjusted in proportion to changes in the net energy supplied by the feed. A model is available which will both interpret responses to amino acids in laying trials and give economically optimal estimates of amino acid inputs for practical feed formulation. Flocks coming into lay and flocks nearing the end of the pullet year have bimodal distributions of rates of lay, with the result that calculations of requirement based on mean output will underestimate the optimal amino acid input for the flock. Chick diets containing surplus protein can lead to impaired utilisation of the first-limiting amino acid. This difficulty can be avoided by stating amino acid requirements as a proportion of the protein.
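An illustrative sketch of the kind of iteration proposed above, in which ME concentration is optimised against profit margin rather than entered as a fixed value; the intake, output, cost and price relationships below are hypothetical placeholders, not the model referred to in the abstract.

```python
# Hypothetical response functions for illustration only.
import numpy as np

def feed_intake(me):            # kg feed per bird, assumed to fall as ME rises
    return 4.0 * (12.5 / me)

def saleable_output(me):        # kg saleable product per bird, assumed mild ME response
    return 2.0 + 0.05 * (me - 11.0)

def feed_cost_per_kg(me):       # currency per kg feed, assumed to rise with ME
    return 0.20 + 0.03 * (me - 11.0)

PRICE_PER_KG = 1.10             # assumed product price

me_grid = np.linspace(11.0, 13.5, 26)   # candidate ME concentrations (MJ/kg)
margins = [saleable_output(me) * PRICE_PER_KG
           - feed_intake(me) * feed_cost_per_kg(me)
           for me in me_grid]
best = me_grid[int(np.argmax(margins))]
print(f"margin-optimal ME concentration: {best:.1f} MJ/kg")
```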
Abstract:
A chemically coated piezoelectric sensor has been developed for the determination of PAHs in the liquid phase. An organic monolayer, attached to the surface of a gold electrode of a quartz crystal microbalance (QCM) via a covalent thiol-gold link and complete with an ionically bound recognition element, has been produced. This study employed the PAH derivative 9-anthracene carboxylic acid which, once bound to the alkane thiol, functions as the recognition element. Binding of anthracene via π-π interaction has been observed as a frequency shift in the QCM, with a detectability of the target analyte of 2 ppb and a response range of 0-50 ppb. The relative response of the sensor altered for different PAHs despite π-π interaction being the sole communication between recognition element and analyte. It is envisaged that such a sensor could be employed in the identification of key marker compounds and, as such, give an indication of total PAH flux in the environment.
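To make concrete how a measured QCM frequency shift maps to an analyte concentration, here is a minimal sketch assuming a purely hypothetical linear calibration over the reported 0-50 ppb response range; the slope and intercept are placeholders, not values from this study.

```python
# Hypothetical linear QCM calibration; slope and intercept are illustrative only.
import numpy as np

SLOPE_HZ_PER_PPB = -1.8    # assumed frequency change per ppb of bound analyte
INTERCEPT_HZ = 0.0         # assumed shift at zero concentration

def concentration_from_shift(delta_f_hz):
    """Invert the assumed linear calibration: ppb from observed shift (Hz)."""
    return (delta_f_hz - INTERCEPT_HZ) / SLOPE_HZ_PER_PPB

shifts = np.array([-3.6, -18.0, -90.0])      # example observed shifts (Hz)
print(concentration_from_shift(shifts))      # -> [ 2. 10. 50.] ppb
```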
Abstract:
An acoustic wave sensor coated with an artificial biomimetic recognition element has been developed to selectively detect the amino acid L-serine. A highly specific non-covalently imprinted polymer was cast on one electrode of a quartz crystal microbalance (QCM) as a thin permeable film. Selective rebinding of the L-serine was observed as a frequency shift in the QCM with a detection limit of 2 ppb and for concentrations up to 0.4 ppm. The sensor binding is shown to be capable of discrimination between L- and D-stereoisomers of serine as a result of the enantioselectivity of the imprinted binding sites.
The management of site management training: The national construction college site management course
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses LU-decomposition and forward and backward solving of the factored system at every iteration step. An alternative to this method is the Resolvent Monte Carlo method, which represents the resolvent matrix [I - qA]^(-m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computation, but the method has many restrictions and very slow convergence. In this paper we propose a method that includes a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
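A deterministic sketch of the same three-stage pipeline (approximate inverse, refinement, power iterations on the inverse); the Monte Carlo sampling used in the paper is replaced here by exact dense linear algebra, so this only illustrates the structure of the method.

```python
# Sketch under simplifying assumptions: exact arithmetic instead of Monte Carlo.
import numpy as np

def smallest_eigenvalue(A, refine_steps=10, power_steps=200):
    n = A.shape[0]
    # Rough starting approximation to A^{-1}; stands in for the fast inverse procedure.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    # Refinement: Newton-Schulz iteration X <- X(2I - AX) improves the inverse quadratically.
    for _ in range(refine_steps):
        X = X @ (2.0 * np.eye(n) - A @ X)
    # Power iterations on the approximate inverse: its dominant eigenvalue is
    # approximately 1 / (smallest eigenvalue of A).
    v = np.ones(n) / np.sqrt(n)
    for _ in range(power_steps):
        v = X @ v
        v /= np.linalg.norm(v)
    return 1.0 / (v @ X @ v)

A = np.diag([1.0, 2.0, 5.0, 9.0]) + 0.01 * np.ones((4, 4))
print(smallest_eigenvalue(A))   # close to the smallest eigenvalue of A (about 1.0)
```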
Abstract:
A Fractal Quantizer is proposed that replaces the expensive division operation required for scalar quantization with cheaper and more widely available multiplication, addition and shift operations. Although the proposed method is iterative in nature, simulations show virtually undetectable distortion to the naked eye for JPEG compressed images using a single iteration. The method requires a change to the usual tables used in the JPEG algorithm, but the new tables are of similar size. For practical purposes, performing quantization is reduced to a multiplication plus addition operation easily programmed on low-end embedded processors and suitable for efficient and very high speed implementation in ASIC or FPGA hardware. An FPGA hardware implementation shows up to ×15 area-time savings compared to standard solutions for devices with dedicated multipliers. The method can also be immediately extended to perform adaptive quantization.
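A minimal sketch of division-free scalar quantization in general, not the paper's exact Fractal Quantizer: the JPEG-style step round(coeff / Q) is replaced by a multiply with a precomputed fixed-point reciprocal followed by an add and a shift, with the reciprocal table standing in for the modified tables mentioned above.

```python
# Division appears only when the table is built, never at coding time.
import numpy as np

SHIFT = 16  # fixed-point precision of the stored reciprocals

def build_reciprocal_table(qtable):
    # One-off table of rounded values 2^SHIFT / Q; this replaces the usual Q table.
    return ((1 << SHIFT) + qtable // 2) // qtable

def quantize(coeffs, recip_table):
    # round(|c| / Q) ~= (|c| * recip + half) >> SHIFT using only mul/add/shift;
    # the sign is restored afterwards.
    half = 1 << (SHIFT - 1)
    mags = (np.abs(coeffs) * recip_table + half) >> SHIFT
    return np.sign(coeffs) * mags

qtable = np.array([16, 11, 10, 16, 24, 40, 51, 61], dtype=np.int64)
coeffs = np.array([-415, -30, -61, 27, 56, -20, -2, 0], dtype=np.int64)
recip = build_reciprocal_table(qtable)
print(quantize(coeffs, recip))                       # division-free result
print(np.round(coeffs / qtable).astype(np.int64))    # reference using division
```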
Abstract:
This paper presents a new face verification algorithm based on Gabor wavelets and AdaBoost. In the algorithm, faces are represented by Gabor wavelet features generated by the Gabor wavelet transform. Gabor wavelets with 5 scales and 8 orientations are chosen to form a family of Gabor wavelets. By convolving face images with these 40 Gabor wavelets, the original images are transformed into magnitude response images of Gabor wavelet features. The AdaBoost algorithm selects a small set of significant features from the pool of Gabor wavelet features. Each feature is the basis for a weak classifier which is trained with face images taken from the XM2VTS database. The feature with the lowest classification error is selected in each iteration of the AdaBoost operation. We also address issues regarding computational costs in feature selection with AdaBoost. A support vector machine (SVM) is trained with examples of 20 features, and the results have shown a low false positive rate and a low classification error rate in face verification.
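A hedged sketch of the feature-extraction stage described above: a 5-scale by 8-orientation Gabor kernel bank is convolved with an image and the magnitude responses form the feature pool. The kernel parameters are generic choices, not those used in the paper, and the AdaBoost selection step is omitted.

```python
# Generic Gabor filter bank; parameters are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(scale, theta, size=21):
    """Complex Gabor kernel at one scale and orientation."""
    k = (np.pi / 2.0) / (np.sqrt(2.0) ** scale)        # carrier frequency per scale
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma = 2.0 * np.pi / k                             # envelope width tied to frequency
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(1j * k * xr)

def gabor_magnitude_features(image):
    feats = []
    for scale in range(5):                              # 5 scales
        for j in range(8):                              # 8 orientations
            g = gabor_kernel(scale, j * np.pi / 8.0)
            real = convolve(image, g.real, mode="nearest")
            imag = convolve(image, g.imag, mode="nearest")
            feats.append(np.hypot(real, imag))          # magnitude response image
    return np.stack(feats)                              # shape: (40, H, W)

face = np.random.default_rng(0).random((64, 64))        # stand-in for a face image
pool = gabor_magnitude_features(face)
print(pool.shape)                                        # (40, 64, 64)
```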
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model. The illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solving of the rendering equation. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we solve multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. The high performance and Grid computing of the corresponding Monte Carlo scheme are discussed.
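A minimal sketch of stratified Monte Carlo integration over the hemisphere, assuming a simple m by m grid of strata in (cos θ, φ) rather than the orthogonal spherical triangles used in the paper; it estimates the integral of cos θ over the hemisphere, whose exact value is π.

```python
# Stratified hemisphere integration sketch; strata are a grid, not spherical triangles.
import numpy as np

def stratified_hemisphere_estimate(integrand, m, rng):
    total = 0.0
    for i in range(m):                     # strata in cos(theta)
        for j in range(m):                 # strata in phi
            u1 = (i + rng.random()) / m    # jittered sample inside the stratum
            u2 = (j + rng.random()) / m
            cos_t = u1                     # uniform hemisphere mapping
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * u2
            w = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            total += integrand(w) * 2.0 * np.pi   # divide by the pdf 1 / (2*pi)
    return total / (m * m)

rng = np.random.default_rng(0)
est = stratified_hemisphere_estimate(lambda w: w[2], m=16, rng=rng)
print(est, np.pi)                          # estimate should be close to pi
```

Each stratum can be assigned to a different processor, which is the parallelisation opportunity the abstract exploits with its equal-area partitioning.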