16 results for Images - Computational methods


Relevance: 80.00%

Publisher:

Abstract:

To evaluate the checkerboard DNA-DNA hybridization method for detection and quantitation of bacteria from the internal parts of dental implants and to compare bacterial leakage from implants connected either to cast or to pre-machined abutments. Nine plastic abutments cast in a Ni-Cr alloy and nine pre-machined Co-Cr alloy abutments with plastic sleeves cast in Ni-Cr were connected to Branemark-compatible implants. A group of nine implants was used as control. The implants were inoculated with 3 µl of a solution containing 10^8 cells/ml of Streptococcus sobrinus. Bacterial samples were immediately collected from the control implants, while the assemblies were completely immersed in 5 ml of sterile Tryptic Soy Broth (TSB) medium. After 14 days of anaerobic incubation, occurrence of leakage at the implant-abutment interface was evaluated by assessing contamination of the TSB medium. Internal contamination of the implants was evaluated with the checkerboard DNA-DNA hybridization method. DNA-DNA hybridization was sensitive enough to detect and quantify the microorganism from the internal parts of the implants. No differences in leakage and in internal contamination were found between cast and pre-machined abutments. Bacterial scores in the control group were significantly higher than in the other groups (P < 0.05). Bacterial leakage through the implant-abutment interface does not significantly differ when cast or pre-machined abutments are used. The checkerboard DNA-DNA hybridization technique is suitable for the evaluation of the internal contamination of dental implants, although further studies are necessary to validate the use of computational methods for the improvement of the test accuracy. To cite this article: do Nascimento C, Barbosa RES, Issa JPM, Watanabe E, Ito IY, Albuquerque Junior RF. Use of checkerboard DNA-DNA hybridization to evaluate the internal contamination of dental implants and comparison of bacterial leakage with cast or pre-machined abutments. Clin. Oral Impl. Res. 20, 2009; 571-577. doi: 10.1111/j.1600-0501.2008.01663.x.

Relevance: 80.00%

Publisher:

Abstract:

We investigate the possibility of interpreting the degeneracy of the genetic code, i.e., the feature that different codons (base triplets) of DNA are transcribed into the same amino acid, as the result of a symmetry breaking process, in the context of finite groups. In the first part of this paper, we give the complete list of all codon representations (64-dimensional irreducible representations) of simple finite groups and their satellites (central extensions and extensions by outer automorphisms). In the second part, we analyze the branching rules for the codon representations found in the first part by computational methods, using a software package for computational group theory. The final result is a complete classification of the possible schemes, based on finite simple groups, that reproduce the multiplet structure of the genetic code. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Publisher:

Abstract:

We simplify the results of Bremner and Hentzel [J. Algebra 231 (2000) 387-405] on polynomial identities of degree 9 in two variables satisfied by the ternary cyclic sum [a, b, c] abc + bca + cab in every totally associative ternary algebra. We also obtain new identities of degree 9 in three variables which do not follow from the identities in two variables. Our results depend on (i) the LLL algorithm for lattice basis reduction, and (ii) linearization operators in the group algebra of the symmetric group which permit efficient computation of the representation matrices for a non-linear identity. Our computational methods can be applied to polynomial identities for other algebraic structures.

Relevance: 40.00%

Publisher:

Abstract:

This article is dedicated to harmonic wavelet Galerkin methods for the solution of partial differential equations. Several variants of the method are proposed and analyzed, using the Burgers equation as a test model. The computational complexity can be reduced when the localization properties of the wavelets and restricted interactions between different scales are exploited. The resulting variants of the method have computational complexities ranging from O(N^3) to O(N) (N being the space dimension) per time step. A pseudo-spectral wavelet scheme is also described and compared to the methods based on connection coefficients. The harmonic wavelet Galerkin scheme is applied to a nonlinear model for the propagation of precipitation fronts, with the front locations being exposed in the sizes of the localized wavelet coefficients. (C) 2011 Elsevier Ltd. All rights reserved.
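
As a rough illustration of the pseudo-spectral flavour of such schemes (not the harmonic wavelet Galerkin formulation itself), the sketch below advances the viscous Burgers equation u_t + u u_x = nu u_xx on a periodic domain using FFT-based differentiation and explicit Euler time stepping; the grid size, viscosity and time step are illustrative choices, not values from the paper.

```python
import numpy as np

# Periodic grid on [0, 2*pi) and integer wavenumbers (illustrative sizes)
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # i*k for spectral differentiation

nu = 0.1        # viscosity (illustrative)
dt = 5.0e-4     # explicit Euler time step (illustrative)
u = np.sin(x)   # initial condition

def burgers_rhs(u):
    """Pseudo-spectral right-hand side of u_t = -u*u_x + nu*u_xx."""
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(ik * u_hat))          # first derivative
    u_xx = np.real(np.fft.ifft(ik ** 2 * u_hat))    # second derivative
    return -u * u_x + nu * u_xx

for step in range(1000):
    u = u + dt * burgers_rhs(u)   # explicit Euler update up to t = 0.5

print("max |u| after 1000 steps:", np.max(np.abs(u)))
```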

Relevance: 30.00%

Publisher:

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
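
A minimal sketch of the kind of objective such a fit minimizes, assuming each elliptical Gaussian is parametrized by peak position, peak intensity, widths along the major/minor axes and orientation angle (a slightly different parametrization than the eccentricity-based one above, and the cross-entropy optimizer itself is not reproduced):

```python
import numpy as np

def elliptical_gaussian(X, Y, x0, y0, amp, sigma_major, sigma_minor, theta):
    """Single elliptical Gaussian component evaluated on a pixel grid."""
    ct, st = np.cos(theta), np.sin(theta)
    xr = (X - x0) * ct + (Y - y0) * st          # rotate into the major-axis frame
    yr = -(X - x0) * st + (Y - y0) * ct
    return amp * np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))

def model_image(params, shape):
    """Sum of N_s components; params is a list of 6-tuples."""
    Y, X = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for p in params:
        img += elliptical_gaussian(X, Y, *p)
    return img

def performance(params, observed):
    """Squared-difference cost between the model and the observed image."""
    return np.sum((model_image(params, observed.shape) - observed) ** 2)

# Toy usage: score a guess against a synthetic two-component "jet"
truth = [(20, 30, 1.0, 4.0, 2.0, 0.3), (45, 50, 0.6, 3.0, 1.5, 1.1)]
obs = model_image(truth, (64, 64)) + 0.01 * np.random.randn(64, 64)
guess = [(22, 28, 0.9, 4.0, 2.0, 0.2), (44, 52, 0.5, 3.0, 1.5, 1.0)]
print("cost of guess:", performance(guess, obs))
```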

Relevance: 30.00%

Publisher:

Abstract:

Techniques devoted to generating triangular meshes from intensity images either take as input a segmented image or generate a mesh without distinguishing individual structures contained in the image. These facts may cause difficulties in using such techniques in some applications, such as numerical simulations. In this work we reformulate a previously developed technique for mesh generation from intensity images called Imesh. This reformulation makes Imesh more versatile, thanks to a unified framework that allows an easy change of refinement metric, rendering it effective for constructing meshes for applications with varied requirements, such as numerical simulation and image modeling. Furthermore, a deeper study of the point insertion problem and the development of a geometrical criterion for segmentation are also reported in this paper. Meshes with a theoretical guarantee of quality can also be obtained for each individual image structure as a post-processing step, a characteristic not usually found in other methods. The tests demonstrate the flexibility and the effectiveness of the approach.
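
A very rough sketch of the error-driven point-insertion idea behind such techniques (not the Imesh algorithm itself): triangulate a sparse set of pixels, score each triangle by how much the piecewise-linear interpolation of its vertex intensities deviates from the image at the centroid, and insert the centroid of the worst triangle. scipy's Delaunay triangulation is used here as a stand-in, and the test image and seed points are made up.

```python
import numpy as np
from scipy.spatial import Delaunay

def refine_once(points, image):
    """Insert the centroid of the triangle whose linear interpolation
    deviates most from the image value at that centroid."""
    tri = Delaunay(points)
    worst_err, worst_centroid = -1.0, None
    for simplex in tri.simplices:
        pts = points[simplex]                 # 3 x 2 vertex coordinates (x, y)
        centroid = pts.mean(axis=0)
        # Linear interpolation of the 3 vertex values at the centroid = their mean
        vert_vals = [image[int(p[1]), int(p[0])] for p in pts]
        interp = np.mean(vert_vals)
        actual = image[int(centroid[1]), int(centroid[0])]
        err = abs(actual - interp)
        if err > worst_err:
            worst_err, worst_centroid = err, centroid
    return np.vstack([points, worst_centroid])

# Toy usage on a synthetic intensity image, seeding with the four image corners
img = np.fromfunction(lambda y, x: np.sin(x / 10.0) * np.cos(y / 15.0), (64, 64))
pts = np.array([[0, 0], [63, 0], [0, 63], [63, 63]], dtype=float)
for _ in range(20):
    pts = refine_once(pts, img)
print("mesh vertices after refinement:", len(pts))
```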

Relevance: 30.00%

Publisher:

Abstract:

The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower- and upper-bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper, we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
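
A compact sketch of an integer linear formulation in the same spirit (not the master-problem reformulation of the paper), using the PuLP modelling library and made-up data: a binary variable opens a compartment for each class, paying a fixed cost and a capacity loss, and integer variables load items subject to per-compartment lower/upper bounds and the overall capacity.

```python
from pulp import (LpProblem, LpMaximize, LpVariable, lpSum,
                  LpBinary, LpInteger, PULP_CBC_CMD)

# Made-up instance: two item classes; each item is a (value, weight) pair.
items = {0: [(10, 4), (7, 3)], 1: [(6, 2), (9, 5)]}
fixed_cost = {0: 3, 1: 2}       # cost of building a compartment for each class
capacity_loss = {0: 1, 1: 1}    # knapsack capacity consumed by the compartment itself
comp_lb, comp_ub = 2, 8         # lower/upper bound on each compartment's load
knapsack_capacity = 15

prob = LpProblem("compartmentalized_knapsack", LpMaximize)
y = {k: LpVariable(f"open_{k}", cat=LpBinary) for k in items}
x = {(k, i): LpVariable(f"x_{k}_{i}", lowBound=0, upBound=1, cat=LpInteger)
     for k in items for i in range(len(items[k]))}

# Objective: total item value minus the fixed cost of the compartments built
prob += (lpSum(items[k][i][0] * x[k, i] for (k, i) in x)
         - lpSum(fixed_cost[k] * y[k] for k in items))

for k in items:
    load = lpSum(items[k][i][1] * x[k, i] for i in range(len(items[k])))
    prob += load <= comp_ub * y[k]      # items only fit in an open compartment
    prob += load >= comp_lb * y[k]      # minimum load whenever the compartment is open

# Overall capacity, including the capacity lost to the compartments themselves
prob += (lpSum(items[k][i][1] * x[k, i] for (k, i) in x)
         + lpSum(capacity_loss[k] * y[k] for k in items)) <= knapsack_capacity

prob.solve(PULP_CBC_CMD(msg=0))
print("objective value:", prob.objective.value())
print("compartments opened:", {k: int(y[k].value()) for k in items})
```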

Relevance: 30.00%

Publisher:

Abstract:

Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
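
A toy illustration of the black-box coupling idea for the case M = 1: two "sub-networks", each returning one residual equation in the interface flow rate Q and pressure P, are coupled by solving the resulting 2 x 2 non-linear system with a matrix-free Broyden method. scipy's broyden1 stands in for the variants compared in the paper, and the residual functions are arbitrary stand-ins, not a haemodynamic model.

```python
import numpy as np
from scipy.optimize import broyden1

# Each sub-network is a black box: given interface flow Q and pressure P,
# it returns one (non-linear) residual equation. These are arbitrary toy models.
def subnetwork_A(Q, P):
    return P - (2.0 * Q + 0.1 * Q ** 3)     # "outlet" pressure-flow relation

def subnetwork_B(Q, P):
    return P - (10.0 - 1.5 * Q)             # "inlet" pressure-flow relation

def coupling_residual(u):
    Q, P = u
    return np.array([subnetwork_A(Q, P), subnetwork_B(Q, P)])

# Matrix-free quasi-Newton solve of the 2M x 2M interface system (here 2 x 2)
u0 = np.array([1.0, 5.0])                   # initial guess for (Q, P)
u = broyden1(coupling_residual, u0, f_tol=1e-10)
print("interface flow and pressure:", u)
print("residual at the solution:", coupling_residual(u))
```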

Relevance: 30.00%

Publisher:

Abstract:

A novel mathematical framework inspired by Morse Theory for topological triangle characterization in 2D meshes is introduced that is useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees a precise control of topological changes introduced as a result of triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described in the implementation of an innovative approach for the detection of 2D objects from images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.

Relevance: 30.00%

Publisher:

Abstract:

The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by all the points takes the center of the field as reference. In the digitized mammogram, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect causes a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. Those characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with the proposed method differ from those of the commercial equipment on average by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to this axis. The method eliminates around 94% of the Heel effect in the radiological image, so that objects will reflect their x-ray absorption. To evaluate this method, experimental data were taken from known objects, but the evaluation could also be performed with clinical and digital images.
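
A schematic sketch of the correction step, assuming (as described above) a Gaussian intensity fall-off perpendicular to the anode-cathode axis and a logarithmic variation along it; the functional parameters, the axis orientation and the test image are made up, and estimating them automatically from the mammogram is the part the paper actually addresses.

```python
import numpy as np

def heel_field(shape, center, sigma_perp, a_par, b_par):
    """Simulated relative exposure: Gaussian fall-off across the anode-cathode
    axis (taken horizontal here) and a logarithmic variation along it."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    perp = np.exp(-0.5 * ((rows - center[0]) / sigma_perp) ** 2)
    par = a_par + b_par * np.log1p(np.abs(cols - center[1]))
    field = perp * par
    return field / field[center[0], center[1]]   # normalize to 1 at the field center

def correct_heel(image, field):
    """Divide out the nonuniform exposure so pixels reflect x-ray absorption only."""
    return image / np.clip(field, 1e-6, None)

# Toy usage with made-up parameters
img = np.random.uniform(0.4, 0.6, (256, 256))          # stand-in for a mammogram
field = heel_field(img.shape, center=(128, 128),
                   sigma_perp=200.0, a_par=1.0, b_par=-0.05)
corrected = correct_heel(img * field, field)           # recovers the original image
print("max abs error after correction:", np.max(np.abs(corrected - img)))
```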

Relevance: 30.00%

Publisher:

Abstract:

This article discusses methods to identify plants by analyzing leaf complexity based on estimates of their fractal dimension. Leaves were analyzed according to the complexity of their internal and external shapes. A computational program was developed to process, analyze and extract the features of leaf images, thereby allowing for automatic plant identification. Results are presented from two experiments, the first to identify plant species from the Brazilian Atlantic forest and Brazilian Cerrado scrublands, using fifty leaf samples from ten different species, and the second to identify four different species from the genus Passiflora, using twenty leaf samples for each class. A comparison is made of two methods to estimate fractal dimension (box-counting and multiscale Minkowski). The results are discussed to determine the best approach to analyze shape complexity based on the performance of the technique when estimating fractal dimension and identifying plants. (C) 2008 Elsevier Inc. All rights reserved.
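
A minimal box-counting sketch for a binary contour image; the multiscale Minkowski estimator compared in the paper is not shown, and the box sizes and synthetic test shape are illustrative only.

```python
import numpy as np

def box_counting_dimension(binary, box_sizes):
    """Estimate fractal dimension as the slope of log(count) vs log(1/size),
    where count is the number of boxes of each size containing foreground pixels."""
    counts = []
    for s in box_sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy usage: the boundary of a disk should give a dimension close to 1
yy, xx = np.mgrid[0:256, 0:256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
contour = (disk ^ np.roll(disk, 1, axis=0)) | (disk ^ np.roll(disk, 1, axis=1))
print("estimated dimension:", box_counting_dimension(contour, [2, 4, 8, 16, 32]))
```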

Relevance: 30.00%

Publisher:

Abstract:

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion, and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double regularized proximal kernels, a fairly recent theoretical development that leads to fully smooth subproblems. We first enhance the existing theory to show that our approach is globally convergent in both the primal and dual spaces when applied to convex problems. We then present an extensive computational evaluation using the CUTE test set, showing that some aspects of our approach are promising, but some are not. These conclusions in turn lead to additional computational experiments suggesting where to focus our theoretical and computational efforts next.
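
A toy sketch of the outer/inner loop structure, using a classical first-order multiplier update and scipy's conjugate gradient minimizer as the inner solver; the relative error criterion and the double regularized kernels discussed above are not reproduced, and the test problem is made up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2  subject to  c(x) = x0 + x1 - 1 = 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
c = lambda x: x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, rho):
    return f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2

x = np.zeros(2)
lam, rho = 0.0, 10.0
for outer in range(20):
    # Inner loop: unconstrained minimization of the augmented Lagrangian by CG
    res = minimize(augmented_lagrangian, x, args=(lam, rho), method="CG")
    x = res.x
    lam += rho * c(x)            # first-order multiplier update
    if abs(c(x)) < 1e-8:
        break

print("solution:", x, "multiplier:", lam)
```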

Relevance: 30.00%

Publisher:

Abstract:

A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
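
A tiny illustration of why second-order (negative-curvature) information matters, not the paper's algorithm: at the origin, f(x, y) = x^2 - y^2 has zero gradient, so a first-order method sees a stationary point, while the eigenvector associated with the negative Hessian eigenvalue still gives a descent direction.

```python
import numpy as np

f = lambda v: v[0] ** 2 - v[1] ** 2
grad = lambda v: np.array([2.0 * v[0], -2.0 * v[1]])
hess = lambda v: np.array([[2.0, 0.0], [0.0, -2.0]])

x = np.zeros(2)                       # first-order stationary point (a saddle)
print("gradient norm at x:", np.linalg.norm(grad(x)))

eigvals, eigvecs = np.linalg.eigh(hess(x))
if eigvals[0] < 0:                    # negative curvature detected
    d = eigvecs[:, 0]                 # direction of most negative curvature
    step = x + 0.5 * d
    print("f(x) =", f(x), " f(x + 0.5*d) =", f(step))   # strictly smaller value
```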

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using as training set a new dataset of 70 facial expressions belonging to ten subjects show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, thus corroborating the efficiency and applicability of the proposed system.
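
A schematic sketch of the PCA mapping step, assuming paired training vectors of 2D landmark/texture features and 3D geometry (the dimensions, the synthetic data and the least-squares coupling between the two coefficient spaces are made up; the ASM landmark extraction is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up paired training data: 70 samples, 2D feature vectors and 3D geometry vectors
n_samples, d_tex, d_geo = 70, 50, 120
T = rng.normal(size=(n_samples, d_tex))            # texture / landmark features
G = T @ rng.normal(size=(d_tex, d_geo)) * 0.1      # geometry correlated with texture (toy)

def pca_basis(X, n_components):
    """Orthonormal basis (principal directions) and mean of the data."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

tex_mean, tex_basis = pca_basis(T, 10)
geo_mean, geo_basis = pca_basis(G, 10)

# Learn a linear map from texture coefficients to geometry coefficients (least squares)
A_tex = (T - tex_mean) @ tex_basis.T
A_geo = (G - geo_mean) @ geo_basis.T
M, *_ = np.linalg.lstsq(A_tex, A_geo, rcond=None)

def reconstruct_geometry(tex_features):
    """Map a new 2D feature vector to an estimated 3D geometry vector."""
    coeff_tex = (tex_features - tex_mean) @ tex_basis.T
    coeff_geo = coeff_tex @ M
    return geo_mean + coeff_geo @ geo_basis

new_face = rng.normal(size=d_tex)
print("reconstructed geometry vector shape:", reconstruct_geometry(new_face).shape)
```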

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a new framework for generating triangular meshes from textured color images. The proposed framework combines a texture classification technique, called W-operator, with Imesh, a method originally conceived to generate simplicial meshes from gray scale images. An extension of W-operators to handle textured color images is proposed, which employs a combination of RGB and HSV channels and Sequential Floating Forward Search guided by a mean conditional entropy criterion to extract features from the training data. The W-operator is built into the local error estimation used by Imesh to choose the mesh vertices. Furthermore, the W-operator also makes it possible to assign a label to the triangles during mesh construction, thus allowing a segmented mesh to be obtained at the end of the process. The presented results show that the combination of W-operators with Imesh gives rise to a texture classification-based triangle mesh generation framework that outperforms pixel-based methods. Crown Copyright (C) 2009 Published by Elsevier Inc. All rights reserved.
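
A small sketch of the mean conditional entropy criterion used to score a candidate feature subset (the W-operator design and the SFFS wrapper are not shown): estimate H(Y | X) empirically from the joint frequencies of observed feature patterns X and labels Y, with lower values indicating more informative features. The data here is synthetic.

```python
import numpy as np
from collections import Counter

def mean_conditional_entropy(X, y):
    """Empirical H(Y | X) for discrete feature vectors X (rows) and labels y."""
    n = len(y)
    by_pattern = {}
    for row, label in zip(X, y):
        by_pattern.setdefault(tuple(row), []).append(label)
    h = 0.0
    for pattern, labels in by_pattern.items():
        p_x = len(labels) / n
        counts = Counter(labels)
        h_y_given_x = -sum((c / len(labels)) * np.log2(c / len(labels))
                           for c in counts.values())
        h += p_x * h_y_given_x
    return h

# Synthetic example: feature column 0 determines the label, column 1 is noise
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 2))
y = X[:, 0]
print("H(Y | feature 0) =", mean_conditional_entropy(X[:, [0]], y))   # ~0 (informative)
print("H(Y | feature 1) =", mean_conditional_entropy(X[:, [1]], y))   # ~1 (uninformative)
```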