884 results for Problem analysis
Abstract:
Trichophyton rubrum is the most common pathogen causing dermatophytosis. Molecular strain-typing methods have recently been developed to tackle epidemiological questions and the problem of relapse following treatment. A total of 67 strains of T. rubrum were screened for genetic variation by randomly amplified polymorphic DNA (RAPD) analysis, with two primers, 5'-d[GGTGCGGGAA]-3' and 5'-d[CCCGTCAGCA]-3', as well as by subrepeat element analysis of the nontranscribed spacer of rDNA, using the repetitive subelements TRS-1 and TRS-2. A total of 12 individual patterns were recognized with the first primer and 11 with the second. Phylogenetic analysis of the RAPD products showed a high degree of similarity (>90%) among the epidemiologically related clinical isolates, while the other strains showed 60% similarity. Specific amplification of TRS-1 produced three strain-characteristic banding patterns (PCR types); simple patterns representing one copy of TRS-1 and two copies of TRS-2 accounted for around 85% of all isolates. It is concluded that molecular analysis has important implications for epidemiological studies, and that RAPD analysis is especially suitable for molecular typing of T. rubrum.
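As a concrete illustration of how banding-pattern similarity can be quantified, here is a minimal Python sketch using the Dice coefficient on presence/absence band vectors. The coefficient choice and the example vectors are assumptions; the abstract does not specify which similarity measure underlies the reported percentages.

```python
# Minimal sketch: similarity between two RAPD banding patterns encoded
# as binary presence/absence vectors. The Dice coefficient is a common
# choice for fingerprint data, but it is an assumption here; the
# abstract does not name the measure actually used.
def dice_similarity(a, b):
    """Dice coefficient between two binary band-presence vectors."""
    shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    total = sum(a) + sum(b)
    return 2 * shared / total if total else 0.0

# Hypothetical band patterns (1 = band present at a given gel position).
isolate_a = [1, 0, 1, 1, 0, 1, 0, 1]
isolate_b = [1, 0, 1, 1, 0, 1, 1, 1]
print(f"similarity: {dice_similarity(isolate_a, isolate_b):.2f}")  # 0.91
```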
Abstract:
In this work, two formulations of the boundary element method (BEM) for the linear bending analysis of plates reinforced by beams are discussed. Both formulations are based on Kirchhoff's hypothesis and are obtained from the reciprocity theorem applied to zoned plates, where each sub-region defines a beam or a slab. In the first model, the problem values are defined along the interfaces and the external boundary. Then, in order to reduce the number of degrees of freedom, kinematic hypotheses are assumed along the beam cross-section, leading to a second formulation in which the collocation points are defined along the beam skeleton instead of being placed on the interfaces. In these formulations, no approximation of the generalized forces along the interface is required. Moreover, compatibility and equilibrium conditions along the interface are automatically imposed by the integral equation. Thus, these formulations require fewer approximations, and the total number of degrees of freedom is reduced. The numerical examples discuss the differences between these two BEM formulations and compare the results with those of a well-known finite element code.
Abstract:
The misfit between prostheses and implants is a clinical reality, but the level that can be accepted without causing mechanical or biologic problems is not well defined. This study investigates the effect of different levels of unilateral angular misfit of prostheses on the prosthesis/implant/retaining-screw system and on the surrounding bone using finite element analysis. Four two-dimensional finite element models were constructed: group 1 (control), a prosthesis that fits the implant; groups 2 to 4, prostheses with unilateral angular misfits of 50, 100, and 200 μm, respectively. A load of 133 N was applied at a 30-degree angulation, off-axis at 2 mm from the long axis of the implant, in the direction opposite the misfit. Taking into account the increase of the angular misfit, the stress maps showed a gradual increase of stress in the prosthesis and uniform stress in the implant and trabecular bone. Concerning displacement, an inclination of the system due to loading and misfit was observed. The decrease of the unilateral contact between prosthesis and implant leads to displacement of the entire system, and alterations in the distribution and magnitude of stress also occurred.
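The loading condition lends itself to a quick statics check. The sketch below, under assumed sign conventions, decomposes the 133 N load applied at 30 degrees and 2 mm off-axis (the values given in the abstract) into axial and lateral components and the resulting off-axis moment; it is illustrative arithmetic, not the study's finite element post-processing.

```python
import math

# Decompose the applied load from the abstract: 133 N at 30 degrees,
# off-axis by 2 mm from the implant long axis. Axis orientation and
# sign conventions are assumptions for illustration.
F = 133.0                         # total load, N
theta = math.radians(30)          # angulation relative to the long axis

F_axial = F * math.cos(theta)     # component along the implant axis
F_lateral = F * math.sin(theta)   # component perpendicular to the axis
offset = 2e-3                     # off-axis lever arm, m

moment = F_axial * offset         # moment from the off-axis axial force
print(f"axial: {F_axial:.1f} N, lateral: {F_lateral:.1f} N")
print(f"off-axis moment: {moment * 1e3:.1f} N*mm")  # ~230 N*mm
```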
Abstract:
We study the use of para-orthogonal polynomials in solving the frequency analysis problem. Through a transformation of Delsarte and Genin, we present an approach to frequency analysis that uses the zeros and Christoffel numbers of polynomials orthogonal on the real line. This leads to a simple and fast algorithm for the estimation of frequencies. We also provide a new method, faster than the Levinson algorithm, for determining the reflection coefficients of the corresponding real Szegő polynomials from the given moments.
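For reference, the baseline the new method is compared against can be sketched directly. Below is the classical Levinson-Durbin recursion, the standard O(m²) route from moments (autocorrelations) to reflection coefficients; the toy signal is an assumption, and the paper's faster algorithm itself is not reproduced.

```python
import numpy as np

def levinson_durbin(r):
    """Classical Levinson-Durbin recursion: reflection coefficients
    k[0..m-1] from autocorrelation moments r[0..m]. This is the O(m^2)
    baseline; the paper proposes a faster alternative."""
    m = len(r) - 1
    a = np.zeros(m + 1)           # prediction polynomial coefficients
    a[0] = 1.0
    err = r[0]                    # prediction error power
    k = np.zeros(m)
    for i in range(1, m + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / err
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * a[i - 1::-1]
        err *= 1.0 - k[i - 1] ** 2
    return k

# Toy moments: a noisy sinusoid, whose frequency shapes the coefficients.
rng = np.random.default_rng(0)
n = 200
x = np.cos(0.7 * np.arange(n)) + 0.1 * rng.standard_normal(n)
r = np.correlate(x, x, mode="full")[n - 1:n + 4] / n   # r[0..4]
print(levinson_durbin(r).round(3))
```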
Abstract:
Statement of problem. The use of ultrasonic tips has become an alternative for cavity preparation. However, there are concerns about this type of device, particularly with respect to intrapulpal temperatures and cavity preparation time.
Purpose. The purpose of this study was to analyze the pulpal temperature increases generated by ultrasonic cavity preparation with chemical vapor deposition (CVD) tips, in comparison to preparation with a high-speed handpiece with a diamond rotary cutting instrument. The time required to complete the cavity preparation with each system was also evaluated.
Material and methods. Thermocouples were positioned in the pulp chamber of 20 extracted human third molars. Slot-type cavities (3 x 3 x 2 mm) were prepared on the buccal and lingual surfaces of each tooth. The test groups were: high-speed cavity preparation with diamond rotary cutting instruments (n = 20) and ultrasonic cavity preparation with CVD tips (n = 20). During cavity preparation, the increases in pulpal temperature and the time required for the preparation were recorded and analyzed by Student's t test for paired samples (alpha = .05).
Results. The average pulpal temperature increases were 4.3°C for the high-speed preparation and 3.8°C for the ultrasonic preparation, which were statistically similar (P = .052). However, significant differences were found (P < .001) for the time expended (3.3 minutes for the high-speed bur and 13.77 minutes for the ultrasonic device).
Conclusions. The intrapulpal temperatures produced during cavity preparation by ultrasonic tips and by high-speed bur preparation were similar. However, the ultrasonic device required 4 times longer to complete a cavity preparation.
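The statistical comparison reported above is a paired Student's t test, which can be sketched in a few lines. The temperature values below are hypothetical stand-ins, not the study's raw measurements; only the design (paired samples, alpha = .05) comes from the abstract.

```python
# Sketch of a paired Student's t test (alpha = .05) as described in the
# abstract. The per-tooth temperature rises below are hypothetical.
from scipy import stats

high_speed = [4.1, 4.5, 4.0, 4.6, 4.3]   # pulpal rise, degrees C (made up)
ultrasonic = [3.9, 3.7, 3.6, 4.0, 3.8]   # pulpal rise, degrees C (made up)

t_stat, p_value = stats.ttest_rel(high_speed, ultrasonic)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("significant" if p_value < 0.05 else "not significant at alpha = .05")
```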
Abstract:
What can we learn from solar neutrino observations? Is there any solution to the solar neutrino anomaly that is favored by the present experimental panorama? After the SNO results, is it possible to affirm that neutrinos have mass? In order to answer such questions, we analyze the currently available data from the solar neutrino experiments, including the recent SNO result, in view of many acceptable solutions to the solar neutrino problem based on different conversion mechanisms, for the first time using the same statistical procedure. This allows us to make a direct comparison of the goodness of fit among the different solutions, from which we can discuss and draw conclusions on the current status of each proposed dynamical mechanism. These solutions are based on different assumptions: (a) neutrino mass and mixing, (b) a nonvanishing neutrino magnetic moment, (c) the existence of nonstandard flavor-changing and nonuniversal neutrino interactions, and (d) a tiny violation of the equivalence principle. We investigate the quality of the fit provided by each of these solutions not only to the total rate measured by all the solar neutrino experiments but also to the recoil electron energy spectrum measured at different zenith angles by the Super-Kamiokande Collaboration. We conclude that several nonstandard neutrino flavor conversion mechanisms provide a very good fit to the experimental data, comparable with (or even slightly better than) the most famous solution to the solar neutrino anomaly, based on neutrino oscillations induced by mass.
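The "same statistical procedure" the authors emphasize is essentially a common chi-square goodness-of-fit test applied to every candidate solution. A hedged sketch of that comparison logic follows; all rates, errors, and predictions are placeholders, not the actual experimental or model values.

```python
# Sketch of comparing candidate solutions with one common chi-square
# procedure. Numbers are placeholders, not real data or predictions.
import numpy as np
from scipy import stats

observed = np.array([0.47, 0.55, 0.35])   # hypothetical rates (data/SSM)
sigma = np.array([0.02, 0.08, 0.03])      # hypothetical 1-sigma errors

predictions = {
    "mass-induced oscillation": np.array([0.46, 0.53, 0.37]),
    "magnetic moment":          np.array([0.49, 0.60, 0.33]),
}

for name, pred in predictions.items():
    chi2 = np.sum(((observed - pred) / sigma) ** 2)
    dof = len(observed)                    # no fitted parameters in this toy
    print(f"{name}: chi2 = {chi2:.2f}, GoF = {stats.chi2.sf(chi2, dof):.2f}")
```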
Abstract:
The goal of this work is to assess the efficacy of texture measures for estimating levels of crowd density in images. This estimation is crucial for the problem of crowd monitoring and control. The assessment is carried out on a set of nearly 300 real images captured at Liverpool Street Train Station, London, UK, using texture measures extracted from the images through four different methods: gray-level dependence matrices, straight line segments, Fourier analysis, and fractal dimensions. The estimates of crowd density are given in terms of the classification of the input images into five density classes (very low, low, moderate, high, and very high). Three types of classifiers are used: neural (implemented according to the Kohonen model), Bayesian, and an approach based on fitting functions. The results obtained by these three classifiers, using the four texture measures, support the conclusion that, for the problem of crowd density estimation, texture analysis is very effective.
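Of the four texture methods listed, the gray-level dependence matrix is the most straightforward to sketch. The snippet below uses scikit-image's co-occurrence matrix implementation on a random stand-in image; the distance and angle choices are assumptions, and the extracted features would then feed one of the three classifiers.

```python
# Sketch of the gray-level dependence (co-occurrence) texture measure.
# Uses a random stand-in image; distances and angles are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in

glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Standard GLCM features that could feed a Kohonen, Bayesian, or
# function-fitting classifier, as in the paper.
for prop in ("contrast", "homogeneity", "energy"):
    print(prop, graycoprops(glcm, prop).ravel().round(4))
```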
Abstract:
The application of adsorptive stripping potentiometry to the reductive detection of nucleic acids at mercury electrodes is reported. Compared to analogous voltammetric stripping modes, constant-current potentiometric stripping analysis (PSA) effectively addresses the hydrogen-discharge background problem, and hence greatly improves the characteristics of the superimposed cytosine/adenine (CA) reduction peak. Compared to earlier schemes for trace measurements of nucleic acids at mercury or carbon electrodes that rely on anodic signals arising from the guanine residue, convenient quantitation can now be carried out in connection with the cytosine and adenine residues. Variables influencing the adsorptive PSA response are explored and optimized. With a five-minute accumulation, the detection limits for tRNA, ssDNA, and dsDNA are 30 μg l⁻¹, 60 μg l⁻¹, and 2 mg l⁻¹, respectively. Such different values reflect the strong dependence of the PSA CA signal upon nucleic-acid structure. This allows the quantitation of ssDNA or tRNA in the presence of dsDNA, and offers new possibilities for electrochemical studies of DNA structure and interactions.
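Quantitation in stripping analysis is typically done against a linear calibration curve. As a loose illustration only (the abstract reports detection limits, not raw calibration data, and all numbers below are invented), the conversion from a measured PSA signal to a concentration looks like this:

```python
# Hypothetical linear calibration for PSA quantitation; every value
# below is invented for illustration.
import numpy as np

conc = np.array([0.0, 50, 100, 200, 400])      # tRNA standard, ug/l
signal = np.array([0.1, 2.2, 4.1, 8.3, 16.2])  # peak area (arbitrary units)

slope, intercept = np.polyfit(conc, signal, 1)
unknown = 6.0                                  # signal from an unknown sample
print(f"estimated concentration: {(unknown - intercept) / slope:.0f} ug/l")
```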
Abstract:
In a previous work, Vieira Neto & Winter (2001) numerically explored the capture times of particles as temporary satellites of Uranus. The study was made in the framework of the spatial, circular, restricted three-body problem. Regions of the initial-condition space whose trajectories are apparently stable were determined. The criterion adopted was that the trajectories do not escape from the planet during an integration of 10⁵ years. These regions occur for a wide range of orbital initial inclinations (i). In the present work, the reason for the existence of such stable regions is studied. The stability of the planar retrograde trajectories is due to a family of simple periodic orbits and the associated quasi-periodic orbits that oscillate around them. These planar stable orbits had already been studied (Henon 1970; Huang & Innanen 1983). Their results are reviewed using Poincaré surfaces of section. The stable non-planar retrograde trajectories, 110° ≤ i < 180°, ...
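The survey described, integrating test particles in the circular restricted three-body problem and flagging escapes, can be sketched compactly. The rotating-frame equations below are the standard planar CRTBP; the mass parameter, initial conditions, integration span, and escape radius are all illustrative assumptions rather than the paper's values.

```python
# Sketch: planar circular restricted three-body problem in the rotating
# frame (normalized units), with a crude escape check. mu, the initial
# state, the time span, and the escape radius are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

mu = 4.4e-5  # roughly the Uranus/Sun mass ratio (assumed)

def crtbp(t, s):
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)          # distance to the Sun
    r2 = np.hypot(x - 1 + mu, y)      # distance to the planet
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

# Retrograde start near the planet (planet sits at (1 - mu, 0)).
s0 = [1 - mu + 0.01, 0.0, 0.0, -0.06]
sol = solve_ivp(crtbp, (0.0, 500.0), s0, rtol=1e-10, atol=1e-12)

r2_final = np.hypot(sol.y[0, -1] - 1 + mu, sol.y[1, -1])
print("escaped" if r2_final > 0.05 else "still bound over this integration")
```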
Abstract:
In conformational analysis, the systematic search method completely maps the space but suffers from the combinatorial explosion problem, because the number of conformations increases exponentially with the number of free rotation angles. This study introduces a new methodology of conformational analysis that controls the combinatorial explosion. It is based on a dimensional reduction of the system through the use of principal component analysis. The results are exactly the same as those obtained with the complete search but, in this case, the number of conformations increases only quadratically with the number of free rotation angles. The method is applied to a series of three drugs: omeprazole, pantoprazole, and lansoprazole, benzimidazoles that suppress gastric-acid secretion by means of H⁺,K⁺-ATPase enzyme inhibition. © 2002 John Wiley & Sons, Inc.
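The dimensional-reduction step can be sketched with standard tools: conformations described by their free rotation angles are projected onto a few principal components, and the search proceeds in that smaller space. The angles below are random placeholders, and the (sin, cos) encoding is a common convention for periodic angles, assumed here rather than taken from the paper.

```python
# Sketch of PCA-based dimensional reduction over conformations that are
# described by free rotation (dihedral) angles. Angles are placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_conformers, n_angles = 200, 6
angles = rng.uniform(-180.0, 180.0, size=(n_conformers, n_angles))

# Encode each angle as (sin, cos) so 360-degree periodicity does not
# distort the components (a common convention, assumed here).
rad = np.radians(angles)
features = np.concatenate([np.sin(rad), np.cos(rad)], axis=1)

pca = PCA(n_components=2)              # reduced search space
reduced = pca.fit_transform(features)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
print("reduced shape:", reduced.shape)  # search now runs over 2 coordinates
```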
Abstract:
Aggregation/disaggregation is used to reduce the analysis of a large generalized transportation problem to a smaller one. Bounds on the actual difference between the aggregated objective and the original optimal value are used to quantify the error due to aggregation and to estimate the quality of the aggregation. The bounds can be calculated either before optimization of the aggregated problem (a priori) or after (a posteriori). Both types of bounds are derived and numerically compared. A computational experiment was designed to (a) study the correlation between the bounds and the actual error and (b) quantify the difference between the error bounds and the actual error. The experiment shows a significant correlation between some a priori bounds, the a posteriori bounds, and the actual error. These preliminary results indicate that calculating the a priori error bound is a useful strategy for selecting the appropriate aggregation level, since the a priori bound varies in the same way that the actual error does. After the aggregated problem has been selected and optimized, the a posteriori bound provides a good quantitative measure of the error due to aggregation.
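The "actual error" being bounded can be shown on a toy instance: solve the original transportation LP, merge two similar sources into one aggregate, re-solve, and compare objectives. The instance and the supply-weighted cost-averaging rule are illustrative; the paper's a priori and a posteriori bounds themselves are not reproduced here.

```python
# Toy measurement of aggregation error on a balanced transportation LP.
# The instance and aggregation rule are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

def solve_transport(costs, supply, demand):
    m, n = costs.shape
    A_eq, b_eq = [], []
    for i in range(m):                       # ship out each supply fully
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                       # meet each demand exactly
        row = np.zeros(m * n); row[j::n] = 1
        A_eq.append(row); b_eq.append(demand[j])
    return linprog(costs.ravel(), A_eq=np.array(A_eq), b_eq=b_eq).fun

costs = np.array([[4.0, 6.0], [4.2, 6.1], [9.0, 5.0]])
supply, demand = [20, 30, 50], [40, 60]

# Merge the two similar sources, weighting their costs by supply.
agg_costs = np.array([(20 * costs[0] + 30 * costs[1]) / 50, costs[2]])
original = solve_transport(costs, supply, demand)
aggregated = solve_transport(agg_costs, [50, 50], demand)
print(f"actual error due to aggregation: {abs(aggregated - original):.3f}")
```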
Abstract:
Economic dispatch (ED) problems have recently been solved by artificial neural network approaches. In most of these dispatch models, the cost function must be linear or quadratic; therefore, cost functions that have several minimum points pose a problem for the simulation, since these approaches do not accept nonlinear cost functions. Another drawback pointed out in the literature is that some of these neural approaches fail to converge efficiently towards feasible equilibrium points. This paper discusses the application of a modified Hopfield architecture for solving ED problems defined by nonlinear cost functions. The internal parameters of the neural network adopted here are computed using the valid-subspace technique, which guarantees convergence to equilibrium points that represent a solution of the ED problem. Simulation results and a comparative analysis involving a 3-bus test system are presented to illustrate the efficiency of the proposed approach.
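For contrast with the neural approach, the conventional baseline it generalizes can be sketched: when every cost is quadratic (hence convex), dispatch reduces to finding the incremental cost lambda at which the clipped unit outputs meet demand. The 3-unit data below are illustrative; the paper's modified Hopfield network for nonlinear, multi-minimum costs is precisely what this simple scheme cannot handle.

```python
# Classical equal-incremental-cost dispatch for quadratic costs, via
# bisection on lambda. Unit data are illustrative; this baseline fails
# for the nonlinear multi-minimum costs targeted by the paper.
import numpy as np

# cost_i(P) = a_i + b_i * P + c_i * P^2 (illustrative coefficients)
b = np.array([7.00, 7.85, 7.97])
c = np.array([0.00156, 0.00194, 0.00482])
p_min = np.array([100.0, 100.0, 50.0])
p_max = np.array([600.0, 400.0, 200.0])
demand = 850.0

lo, hi = 7.0, 12.0                    # bracket on the incremental cost
for _ in range(60):                   # total output is monotone in lambda
    lam = 0.5 * (lo + hi)
    p = np.clip((lam - b) / (2 * c), p_min, p_max)
    if p.sum() > demand:
        hi = lam
    else:
        lo = lam
print("dispatch [MW]:", p.round(1), "total:", round(p.sum(), 1))
```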