59 results for Images - Computational methods

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 100.00%

Abstract:

Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inferences are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by three methods: the expectation-maximization (EM) and Excoffier-Laval-Balding (ELB) algorithms implemented in the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms performed poorly in haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38%-57% of the samples depending on the algorithm and dataset. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
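As a minimal, hypothetical illustration of the inference step discussed above (not the study's Arlequin/PHASE runs, and far simpler than multi-allelic HLA data), the sketch below estimates two-locus haplotype frequencies from unphased biallelic genotypes with the EM algorithm; only double heterozygotes have ambiguous phase.

```python
import numpy as np

# Haplotype indices over two biallelic loci: 0 = AB, 1 = Ab, 2 = aB, 3 = ab
# (capital letter = one copy of the "1" allele at that locus).

def _alleles(g):
    """Two alleles carried at a locus, given g copies of allele '1'."""
    return [1] * g + [0] * (2 - g)

def em_haplotype_freqs(genotypes, n_iter=200):
    """EM estimate of two-locus haplotype frequencies from unphased genotypes.
    `genotypes` is a list of (g1, g2) allele counts, each in {0, 1, 2}."""
    p = np.full(4, 0.25)                           # uniform starting frequencies
    for _ in range(n_iter):
        counts = np.zeros(4)
        for g1, g2 in genotypes:
            if g1 == 1 and g2 == 1:
                # Double heterozygote: phase is ambiguous (AB/ab vs Ab/aB);
                # the E-step splits it according to the current frequencies.
                w1, w2 = 2 * p[0] * p[3], 2 * p[1] * p[2]
                counts[[0, 3]] += w1 / (w1 + w2)
                counts[[1, 2]] += w2 / (w1 + w2)
            else:
                # Phase is unambiguous: pair the alleles directly.
                for a, b in zip(_alleles(g1), _alleles(g2)):
                    counts[(1 - a) * 2 + (1 - b)] += 1
        p = counts / counts.sum()                  # M-step: renormalize
    return p

# Toy usage: genotypes consistent with mostly AB and ab haplotypes.
print(em_haplotype_freqs([(2, 2), (0, 0), (1, 1), (1, 1), (2, 1)]))
```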

Relevance: 80.00%

Abstract:

Computational methods for calculating dynamical properties of fluids may treat the system either as a continuum or as an assembly of molecules. Molecular dynamics (MD) simulation retains molecular resolution, whereas computational fluid dynamics (CFD) treats the fluid as a continuum. This work reviews hybrid MD/CFD methods recently proposed in the literature. Theoretical foundations, the basic approaches of the two computational methods, and the dynamical properties typically calculated by MD and CFD are first presented in order to appreciate the similarities and differences between them. Methods for coupling MD and CFD, and applications of hybrid MD/CFD simulations, are then presented.
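As a small sketch of the molecular-resolution side of such hybrid schemes (assumed for illustration, not taken from the reviewed methods), the following fragment advances a Lennard-Jones fluid with the velocity-Verlet integrator, the kind of MD kernel that a hybrid MD/CFD code would couple to a continuum solver; all quantities are in reduced units.

```python
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces with minimum-image periodic boundaries."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)            # minimum-image convention
            r2 = np.dot(d, d)
            inv6 = (sigma ** 2 / r2) ** 3
            fij = 24 * eps * (2 * inv6 ** 2 - inv6) / r2 * d   # force on atom i
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, box, dt=0.005, steps=100, mass=1.0):
    """Advance the system with the velocity-Verlet integrator."""
    f = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos = (pos + dt * vel) % box                # periodic wrap
        f = lj_forces(pos, box)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Tiny example: 27 atoms on a cubic lattice in a periodic box.
box = 5.0
pos = np.array([[i, j, k] for i in range(3)
                for j in range(3) for k in range(3)], float) * 1.5 + 0.5
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, box, steps=10)
```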

Relevance: 80.00%

Abstract:

The Generalized Finite Element Method (GFEM) is employed in this paper for the numerical analysis of three-dimensional solids under nonlinear behavior. A brief summary of the GFEM as well as a description of the formulation of the hexahedral element based on the proposed enrichment strategy are initially presented. Next, in order to introduce the nonlinear analysis of solids, two constitutive models are briefly reviewed: Lemaitre's model, in which damage and plasticity are coupled, and Mazars's damage model, suitable for concrete under increasing loading. Both models are employed in the framework of a nonlocal approach to ensure solution objectivity. In the numerical analyses carried out, a selective enrichment of the approximation at regions of concern in the domain (mainly those with high strain and damage gradients) is exploited. This possibility makes the three-dimensional analysis less expensive and more practicable, since re-meshing resources, characteristic of h-adaptivity, can be minimized. Moreover, the combination of three-dimensional analysis and selective enrichment provides a valuable tool for a better description of the spread of both damage and plastic strain.
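The following one-dimensional sketch (an assumption of this text, not the paper's hexahedral formulation) shows the partition-of-unity idea behind GFEM: each nodal hat function multiplies a nodal value plus local enrichment terms, and the enrichment can be switched on selectively at nodes near steep gradients.

```python
import numpy as np

def hat(x, xi, h):
    """Standard linear FE hat function centred at node xi (partition of unity)."""
    return np.maximum(0.0, 1.0 - np.abs(x - xi) / h)

def gfem_eval(x, nodes, h, dofs):
    """Evaluate a 1D GFEM approximation
        u_h(x) = sum_i N_i(x) * (u_i + b_i * (x - x_i) / h),
    i.e. each hat function N_i carries a nodal value u_i plus a local (here
    linear) enrichment with coefficient b_i, active only where N_i != 0."""
    u = np.zeros_like(x, dtype=float)
    for xi, (ui, bi) in zip(nodes, dofs):
        u += hat(x, xi, h) * (ui + bi * (x - xi) / h)
    return u

# Hypothetical usage: enrich a single node, mimicking selective enrichment
# near a region with high gradients.
nodes = np.linspace(0.0, 1.0, 6)
h = nodes[1] - nodes[0]
dofs = [(np.sin(np.pi * xi), 0.0) for xi in nodes]
dofs[3] = (dofs[3][0], 0.2)            # enrichment switched on at one node only
x = np.linspace(0.0, 1.0, 201)
u = gfem_eval(x, nodes, h, dofs)
```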

Relevance: 80.00%

Abstract:

Highly ordered A-B-A block copolymer arrangements at the submicrometric scale, resulting from dewetting and solvent evaporation of thin films, have inspired a variety of new applications in the nanometric world. Despite the progress made in controlling such structures, the intricate scientific phenomena behind the formation of regular patterns are still not completely elucidated. SEBS is a standard example of a triblock copolymer that spontaneously forms impressive pattern arrangements. Starting from macroscopic thin liquid films of SEBS solution, several physical effects and phenomena act synergistically to produce well-arranged patterns of stripes and/or droplets. That is, concomitant with dewetting, solvent evaporation, and the Marangoni effect, Rayleigh instability and phase separation also play an important role in the pattern formation. These last two effects are difficult to follow experimentally at the nanoscale, which hinders comprehension of the whole phenomenon. In this paper, we use computational methods for image analysis, which provide quantitative morphometric data on the patterns, specifically the fragmentation of stripes into droplets. With the help of these computational techniques, we developed an explanation for the final part of the pattern formation, i.e., the structural dynamics of stripe fragmentation.
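A hedged sketch of the kind of morphometric image analysis described above, assuming scikit-image is available and using a synthetic frame rather than real micrographs: bright features are thresholded, labelled, and split into stripes and droplets by their aspect ratio, so that counts and areas can track stripe fragmentation.

```python
import numpy as np
from skimage import filters, measure

def classify_features(image, elongation_cutoff=3.0):
    """Segment bright features and split them into 'stripes' and 'droplets'
    according to the aspect ratio of each connected region."""
    binary = image > filters.threshold_otsu(image)      # global threshold
    labels = measure.label(binary)
    stripes, droplets = [], []
    for region in measure.regionprops(labels):
        if region.minor_axis_length == 0:
            continue                                    # skip 1-pixel artifacts
        elongation = region.major_axis_length / region.minor_axis_length
        (stripes if elongation > elongation_cutoff else droplets).append(region)
    return stripes, droplets

# Synthetic test frame: one long stripe plus two droplets on a dark background.
frame = np.zeros((100, 100))
frame[10:14, 5:95] = 1.0           # stripe
frame[50:58, 20:28] = 1.0          # droplet
frame[70:78, 60:68] = 1.0          # droplet
stripes, droplets = classify_features(frame)
print(len(stripes), len(droplets), np.mean([d.area for d in droplets]))
```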

Relevance: 80.00%

Abstract:

A study on the use of artificial intelligence (AI) techniques for the modelling and subsequent control of an electric resistance spot welding (ERSW) process is presented. The ERSW process is characterized by the coupling of thermal, electrical, mechanical, and metallurgical phenomena. For this reason, early attempts to model it with established computational methods, such as finite differences, finite elements, and finite volumes, require simplifications that either take the resulting model far from reality or make it too computationally costly for use in a real-time control system. The authors have therefore developed an ERSW controller that uses fuzzy logic to adjust the energy transferred to the weld nugget. The proposed control strategies differ in the speed with which they reach convergence. Moreover, their application to the quality control of spot welds through artificial neural networks (ANNs) is discussed.
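As a rough illustration of a fuzzy energy-adjustment rule base (membership functions, ranges and rules are hypothetical, not the authors' controller), the sketch below maps a normalized nugget-energy error to a correction of the delivered energy using Mamdani-style inference with centroid defuzzification.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_energy_adjustment(error):
    """Map the error between target and estimated nugget energy (normalized
    to [-1, 1]) to a relative correction of the energy setpoint via three rules:
      error negative -> increase energy
      error ~ zero   -> hold
      error positive -> decrease energy"""
    # Rule activations from the input membership functions
    neg  = tri(error, -1.5, -1.0, 0.0)
    zero = tri(error, -0.5,  0.0, 0.5)
    pos  = tri(error,  0.0,  1.0, 1.5)
    # Output universe: relative change of the energy setpoint
    u = np.linspace(-0.2, 0.2, 401)
    increase = np.minimum(neg,  tri(u,  0.0,  0.1, 0.2))
    hold     = np.minimum(zero, tri(u, -0.1,  0.0, 0.1))
    decrease = np.minimum(pos,  tri(u, -0.2, -0.1, 0.0))
    agg = np.maximum.reduce([increase, hold, decrease])
    return np.trapz(agg * u, u) / np.trapz(agg, u)     # centroid defuzzification

print(fuzzy_energy_adjustment(-0.4))   # positive correction: add energy
```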

Relevance: 80.00%

Abstract:

Over the last decades, anti-resonant reflecting optical waveguides (ARROW) have been used in different integrated optics applications. In this type of waveguide, light confinement is partially achieved through anti-resonant reflection. In this work, the simulation, fabrication and characterization of ARROW waveguides using dielectric films deposited by plasma-enhanced chemical vapor deposition (PECVD) at low temperatures (~300 °C) are presented. Silicon oxynitride (SiOxNy) films were used as the core and second cladding layers, and amorphous hydrogenated silicon carbide (a-SiC:H) films as the first cladding layer. Furthermore, numerical simulations were performed using in-house routines based on two computational methods: the transfer matrix method (TMM), for determining the optimum thickness of the Fabry-Perot layers, and the non-uniform finite difference method (NU-FDM), for 2D design and determination of the maximum width that yields single-mode operation. The use of a silicon carbide anti-resonant layer resulted in low optical attenuation, owing to the high refractive-index difference between the core and this layer. Finally, for comparison purposes, optical waveguides using titanium oxide (TiO2) as the first ARROW layer were also fabricated and characterized.
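A minimal normal-incidence transfer-matrix sketch of the kind used to scan layer thicknesses (the refractive indices and dimensions below are illustrative assumptions, not the reported ARROW design): each layer contributes a 2x2 characteristic matrix, and the stack reflectance follows from the accumulated matrix.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_reflectance(n_in, layers, n_sub, wavelength):
    """Reflectance of a layer stack via the transfer-matrix method.
    `layers` is a list of (refractive_index, thickness) from the input side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Illustrative numbers only: one high-index anti-resonant layer between a
# lower-index core region and the substrate, scanned over its thickness.
for d in (0.1e-6, 0.2e-6, 0.3e-6):
    print(d, stack_reflectance(1.55, [(2.7, d)], 1.45, 1.55e-6))
```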

Relevance: 80.00%

Abstract:

To evaluate the checkerboard DNA-DNA hybridization method for detection and quantitation of bacteria from the internal parts of dental implants, and to compare bacterial leakage from implants connected either to cast or to pre-machined abutments. Nine plastic abutments cast in a Ni-Cr alloy and nine pre-machined Co-Cr alloy abutments with plastic sleeves cast in Ni-Cr were connected to Branemark-compatible implants. A group of nine implants was used as control. The implants were inoculated with 3 µl of a solution containing 10^8 cells/ml of Streptococcus sobrinus. Bacterial samples were immediately collected from the control implants, while the assemblies were completely immersed in 5 ml of sterile Tryptic Soy Broth (TSB) medium. After 14 days of anaerobic incubation, the occurrence of leakage at the implant-abutment interface was evaluated by assessing contamination of the TSB medium. Internal contamination of the implants was evaluated with the checkerboard DNA-DNA hybridization method. DNA-DNA hybridization was sensitive enough to detect and quantify the microorganism in the internal parts of the implants. No differences in leakage or internal contamination were found between cast and pre-machined abutments. Bacterial scores in the control group were significantly higher than in the other groups (P < 0.05). Bacterial leakage through the implant-abutment interface does not differ significantly when cast or pre-machined abutments are used. The checkerboard DNA-DNA hybridization technique is suitable for evaluating the internal contamination of dental implants, although further studies are necessary to validate the use of computational methods to improve the accuracy of the test. To cite this article: do Nascimento C, Barbosa RES, Issa JPM, Watanabe E, Ito IY, Albuquerque Junior RF. Use of checkerboard DNA-DNA hybridization to evaluate the internal contamination of dental implants and comparison of bacterial leakage with cast or pre-machined abutments. Clin. Oral Impl. Res. 20, 2009; 571-577. doi: 10.1111/j.1600-0501.2008.01663.x.

Relevance: 80.00%

Abstract:

We investigate the possibility of interpreting the degeneracy of the genetic code, i.e., the feature that different codons (base triplets) of DNA are transcribed into the same amino acid, as the result of a symmetry-breaking process, in the context of finite groups. In the first part of this paper, we give the complete list of all codon representations (64-dimensional irreducible representations) of simple finite groups and their satellites (central extensions and extensions by outer automorphisms). In the second part, we analyze the branching rules for the codon representations found in the first part by computational methods, using a software package for computational group theory. The final result is a complete classification of the possible schemes, based on finite simple groups, that reproduce the multiplet structure of the genetic code.
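For reference, the multiplet structure that any such scheme must reproduce is that of the standard genetic code; the small check below (plain Python, unrelated to the group-theory software used in the paper) lists the known multiplet sizes and verifies that they account for all 64 codons.

```python
from collections import Counter

# Multiplet structure of the standard genetic code: how many codons encode
# each amino acid (plus the stop signal).
multiplets = {
    "Met": 1, "Trp": 1,
    "Phe": 2, "Tyr": 2, "His": 2, "Gln": 2, "Asn": 2,
    "Lys": 2, "Asp": 2, "Glu": 2, "Cys": 2,
    "Ile": 3, "Stop": 3,
    "Val": 4, "Pro": 4, "Thr": 4, "Ala": 4, "Gly": 4,
    "Leu": 6, "Ser": 6, "Arg": 6,
}
assert sum(multiplets.values()) == 64        # 4^3 codons in total
print(Counter(multiplets.values()))          # sizes 1,2,3,4,6 and their counts
```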

Relevance: 80.00%

Abstract:

We simplify the results of Bremner and Hentzel [J. Algebra 231 (2000) 387-405] on polynomial identities of degree 9 in two variables satisfied by the ternary cyclic sum [a, b, c] = abc + bca + cab in every totally associative ternary algebra. We also obtain new identities of degree 9 in three variables which do not follow from the identities in two variables. Our results depend on (i) the LLL algorithm for lattice basis reduction, and (ii) linearization operators in the group algebra of the symmetric group, which permit efficient computation of the representation matrices for a non-linear identity. Our computational methods can be applied to polynomial identities for other algebraic structures.
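A small numerical illustration of the operation under study, assuming square matrices as a concrete totally associative ternary algebra (this only demonstrates the definition and its cyclic symmetry, not the degree-9 identities or the LLL-based computations):

```python
import numpy as np

def cyclic_sum(a, b, c):
    """Ternary cyclic sum [a, b, c] = abc + bca + cab on an associative
    algebra (here: square matrices under matrix multiplication)."""
    return a @ b @ c + b @ c @ a + c @ a @ b

# Quick check of one obvious property: the operation is invariant under
# cyclic permutation of its arguments.
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
assert np.allclose(cyclic_sum(a, b, c), cyclic_sum(b, c, a))
```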

Relevance: 40.00%

Abstract:

We evaluated the performance of a novel procedure for segmenting mammograms and detecting clustered microcalcifications in two types of image sets obtained from digitization of mammograms using either a laser scanner or a conventional "optical" scanner. Specific regions of the digital mammograms, in which clustered microcalcifications appeared or not, were identified and selected. A remarkable increase in image intensity was noticed in the images from the optical scanner compared with the original mammograms. A procedure based on a polynomial correction was developed to compensate for the differences between the characteristic curves of the scanners and those of the films. The processing scheme was applied to both sets, before and after the polynomial correction. The results clearly indicated the influence of mammogram digitization on the performance of processing schemes intended to detect microcalcifications. The image processing techniques applied to the mammograms digitized by both scanners without the polynomial intensity correction resulted in a better sensitivity for detecting microcalcifications in the images from the laser scanner. However, when the polynomial correction was applied to the images from the optical scanner, no difference in performance was observed between the two types of images. [DOI: 10.1117/1.3013544]
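A sketch of the polynomial intensity-correction idea with made-up calibration numbers (the actual characteristic curves and polynomial degree used in the study are not reproduced here): a polynomial is fitted that maps the optical scanner's grey levels onto the laser scanner's, and is then applied to the optical-scanner images before detection.

```python
import numpy as np

# Hypothetical calibration: grey levels assigned by each scanner to the same
# steps of a film step wedge.
optical_level = np.array([231, 198, 160, 118,  80,  52,  33,  21])
laser_level   = np.array([235, 205, 172, 138, 104,  70,  38,  12])

# Fit a polynomial that maps the optical scanner's characteristic curve onto
# the laser scanner's (taken here as the reference), so that both image sets
# feed the detection scheme with comparable intensities.
coeffs = np.polyfit(optical_level, laser_level, deg=3)

def correct(image):
    """Apply the polynomial intensity correction to an optical-scanner image."""
    return np.clip(np.polyval(coeffs, image.astype(float)), 0, 255)

print(correct(np.array([200, 100, 50])))
```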

Relevance: 40.00%

Abstract:

This article is dedicated to harmonic wavelet Galerkin methods for the solution of partial differential equations. Several variants of the method are proposed and analyzed, using the Burgers equation as a test model. The computational complexity can be reduced when the localization properties of the wavelets and restricted interactions between different scales are exploited. The resulting variants of the method have computational complexities ranging from O(N^3) to O(N) per time step (N being the space dimension). A pseudo-spectral wavelet scheme is also described and compared to the methods based on connection coefficients. The harmonic wavelet Galerkin scheme is applied to a nonlinear model for the propagation of precipitation fronts, with the front locations revealed by the magnitudes of the localized wavelet coefficients.
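For orientation, a plain Fourier pseudo-spectral time stepper for the viscous Burgers test model is sketched below with assumed parameters; it is a stand-in for, not an implementation of, the harmonic wavelet Galerkin variants discussed in the article.

```python
import numpy as np

def burgers_pseudospectral(u0, nu=0.01, dt=1e-4, steps=5000, L=2 * np.pi):
    """Fourier pseudo-spectral time stepping for the viscous Burgers equation
        u_t + u u_x = nu * u_xx
    on a periodic domain: derivatives are taken in Fourier space, the
    nonlinear product u * u_x in physical space, explicit Euler in time."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
    u = u0.astype(float).copy()
    for _ in range(steps):
        u_hat = np.fft.fft(u)
        u_x  = np.real(np.fft.ifft(1j * k * u_hat))
        u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
        u = u + dt * (-u * u_x + nu * u_xx)
    return u

# Example: a smooth initial profile steepening into a viscous front.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u_final = burgers_pseudospectral(np.sin(x))
```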

Relevance: 30.00%

Abstract:

The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width) and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
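A hedged sketch of the statistical comparison described above, using synthetic widths and the AnovaRM class from statsmodels (column names and noise levels are assumptions, not the study's data): the same models are measured under the three methods and a one-way repeated-measures ANOVA tests for a method effect at alpha = 0.05.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical data: intercanine width (mm) of 10 plaster models, each
# measured on the model itself, on a photocopy and on a digitized image.
rng = np.random.default_rng(1)
true_width = rng.normal(34.0, 1.0, size=10)
rows = []
for i, w in enumerate(true_width, start=1):
    for method in ("plaster", "photocopy", "digital"):
        rows.append({"model": i, "method": method,
                     "width": w + rng.normal(0.0, 0.15)})   # small method noise
data = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: does the measuring method shift the mean
# width obtained from the same models?
result = AnovaRM(data, depvar="width", subject="model", within=["method"]).fit()
print(result.anova_table)
```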

Relevance: 30.00%

Abstract:

The identification of the mandibular canal (MC) is an important prerequisite for surgical procedures involving the posterior mandible. Cone beam computed tomography (CBCT) represents an advance in imaging technology, but distinguishing the MC from surrounding structures may remain a delicate task. OBJECTIVES: The aim of this study was to assess the visibility of the MC in different regions on CBCT cross-sectional images. MATERIAL AND METHODS: CBCT cross-sectional images of 58 patients (116 hemi-mandibles) were analyzed, and the visibility of the MC in different regions was assessed. RESULTS: The MC was clearly visible in 53% of the hemi-mandibles. Difficult and very difficult visualization was registered in 25% and 22% of the hemi-mandibles, respectively. The visibility of the MC in distal regions was superior to that in regions closer to the mental foramen. No differences were found between edentulous and tooth-bearing areas. CONCLUSIONS: The MC presents an overall satisfactory visibility on CBCT cross-sectional images in most cases. However, the discrimination of the canal from its surroundings becomes less obvious towards the mental foramen region when cross-sectional images are analyzed individually.

Relevance: 30.00%

Abstract:

OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and fluorescence camera (FC) to detect plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with FC and digital camera in both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare different conditions of samples and to assess the inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of area covered by disclosed plaque in the FC images presented the highest discriminatory powers. CONCLUSION: The Turesky index and images with FC with disclosing present good reliability and discriminatory power in quantifying dental plaque.
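Inter-examiner reproducibility of ordinal plaque scores is typically summarized with Cohen's kappa; the self-contained sketch below (the example scores are hypothetical) shows the computation used for such agreement analyses.

```python
import numpy as np

def cohens_kappa(scores_a, scores_b):
    """Cohen's kappa for two raters assigning the same ordinal plaque scores
    (e.g. a 0-5 index) to the same blocks."""
    a = np.asarray(scores_a)
    b = np.asarray(scores_b)
    categories = np.union1d(a, b)
    po = np.mean(a == b)                                           # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical scores from the two independent examiners on six blocks.
examiner_1 = [0, 1, 2, 3, 4, 5]
examiner_2 = [0, 1, 2, 2, 4, 5]
print(cohens_kappa(examiner_1, examiner_2))
```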

Relevance: 30.00%

Abstract:

Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance for FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model showed similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, diverged strongly from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
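As a much-reduced illustration of band-pass filtering in the Fourier-Bessel domain (a one-dimensional radial profile only, so it does not reproduce the 2D FB patterns with angular components used in the model, and the band of orders kept is arbitrary), the sketch below expands a profile in J0 Bessel modes, keeps a band of orders, and reconstructs.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def fb_bandpass(profile, r, n_terms=40, keep=range(11, 17)):
    """Fourier-Bessel band-pass of a radial profile f(r), r in [0, 1]:
    expand in J0(alpha_n r), keep only coefficients whose order n lies in
    `keep`, and reconstruct the filtered profile."""
    alphas = jn_zeros(0, n_terms)                    # zeros of J0
    coeffs = [2.0 * np.trapz(profile * j0(a * r) * r, r) / j1(a) ** 2
              for a in alphas]
    filtered = np.zeros_like(r)
    for n, (a, c) in enumerate(zip(alphas, coeffs), start=1):
        if n in keep:
            filtered += c * j0(a * r)
    return filtered

r = np.linspace(0.0, 1.0, 512)
profile = np.exp(-((r - 0.3) ** 2) / 0.01)           # toy radial stimulus
band = fb_bandpass(profile, r, keep=range(11, 17))
```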