991 results for interdisciplinary methods
Abstract:
Objective. The goal of this paper is to undertake a literature search collecting all dentin bond strength data obtained for six adhesives with four tests (shear, microshear, tensile and microtensile) and to critically analyze the results with respect to average bond strength, coefficient of variation, mode of failure and product ranking. Method. A PubMed search was carried out for the years 1998 to 2009, identifying publications on bond strength measurements of resin composite to dentin using four tests: shear, tensile, microshear and microtensile. The six adhesive resins were selected to cover three-step systems (OptiBond FL, Scotchbond Multi-Purpose Plus), two-step systems (Prime & Bond NT, Single Bond, Clearfil SE Bond) and one-step systems (Adper Prompt L-Pop). Results. Pooling results from 147 references showed an ongoing high scatter in the bond strength data regardless of which adhesive and which bond test was used. Coefficients of variation remained high (20-50%) even with the microbond tests. The reported modes of failure for all tests still included a high number of cohesive failures. The ranking appeared to depend on the test used. Significance. The scatter in dentin bond strength data remains regardless of which test is used, confirming finite element analyses that predict non-uniform stress distributions due to a number of geometrical, loading, material property and specimen preparation variables. This reopens the question of whether an interfacial fracture mechanics approach to analyzing the dentin-adhesive bond would not be more appropriate for obtaining better agreement among dentin bond related papers. (C) 2009 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
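The coefficient of variation figures cited above can be illustrated with a short computation; the bond strength values below are hypothetical, chosen only to show how a 20-50% CV arises from scattered data of the kind the abstract describes.

```python
import numpy as np

# Illustrative only: the bond strength values (MPa) are made up, not from the
# pooled literature data. CV = 100 * sample standard deviation / mean.
bond_strengths_mpa = np.array([18.2, 25.1, 31.7, 22.4, 40.3, 15.8])
cv_percent = 100.0 * bond_strengths_mpa.std(ddof=1) / bond_strengths_mpa.mean()
# a spread like this yields a CV in the 20-50% band the abstract reports
```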
Abstract:
The purpose of this in vitro study was to evaluate alterations in the surface roughness and micromorphology of human enamel submitted to three prophylaxis methods. Sixty-nine caries-free molars with exposed labial surfaces were divided into three groups. Group I was treated with a rotary instrument set at a low speed, a rubber cup and a mixture of water and pumice; group II with a rotary instrument set at a low speed, a rubber cup and the prophylaxis paste Herjos-F (Vigodent S/A Indústria e Comércio, Rio de Janeiro, Brazil); and group III with the sodium bicarbonate spray Profi II Ceramic (Dabi Atlante Indústrias Médico Odontológicas Ltda, Ribeirão Preto, Brazil). All procedures were performed by the same operator for 10 s, and samples were rinsed and stored in distilled water. Pre- and post-treatment surface evaluation was completed using a surface profilometer (Perthometer S8P, Mahr Perthen, Germany) in 54 samples. In addition, the other samples were coated with gold and examined in a scanning electron microscope (SEM). The results of this study were statistically analyzed with the paired Student's t-test, the Kruskal-Wallis test and the Dunn test (5%). The sodium bicarbonate spray led to significantly rougher surfaces than the pumice paste. The use of prophylaxis paste showed no statistically significant difference when compared with the other methods. Based on SEM analysis, the sodium bicarbonate spray produced an irregular surface with granular material and erosions. Based on this study, it can be concluded that enamel surface roughness increased when teeth were treated with the sodium bicarbonate spray compared with teeth treated with the pumice paste.
Abstract:
Purpose: The objective of the present study was to assess the prevalence of untreated caries in a Brazilian paediatric acquired immunodeficiency syndrome (AIDS) patient population and its association with sociodemographic, behavioural and clinical characteristics. Materials and Methods: The study group comprised 125 HIV-infected patients (aged 3 to 15 years) who had already manifested AIDS and were assisted in a specialised health care unit. Dental examinations followed the World Health Organization's guidelines for oral health surveys. Family caregivers provided information about the socioeconomic standing and the behaviour of their children. Patients' medical records in the hospital provided information on the clinical status of patients. A Poisson regression analysis was used for assessing the covariates for the prevalence of untreated dental caries, as adjusted by age. Results: The prevalence of untreated caries was 58%; a higher prevalence was found in younger children with primary and mixed dentition. The prevalence of untreated caries was significantly associated with lower socioeconomic status (household crowding and schooling of the caregiver), dietary habits (higher frequency of sugar consumption) and poorer clinical status (HIV viral load and symptom severity). Conclusions: The high burden of untreated caries on paediatric AIDS patients reinforced the importance of integrating the clinician with the interdisciplinary health care team that assisted these children. The identification of socioeconomic and behavioural factors associated with caries experience reinforced the importance of the attention that children with AIDS received within their own households for the prevention of dental disease, particularly proper nutritional advice and the monitoring of dental hygiene.
Abstract:
Fluorides and chlorhexidine are technologies that are 65 and 40 years old, respectively. This overview argues that current methods of caries prevention are not effective for the high caries risk patient. In this review, examples, arguments and recommendations addressing the high caries risk patient are provided, including: the failure of comprehensive chemical treatment modalities; ecological alteration (would this be an effective approach?); and biomaterials and oral microbiome research.
Abstract:
In this paper we discuss implicit Taylor methods for stiff Ito stochastic differential equations. Based on the relationship between Ito stochastic integrals and backward stochastic integrals, we introduce three implicit Taylor methods: the implicit Euler-Taylor method with strong order 0.5, the implicit Milstein-Taylor method with strong order 1.0 and the implicit Taylor method with strong order 1.5. The mean-square stability properties of the implicit Euler-Taylor and Milstein-Taylor methods are much better than those of the corresponding semi-implicit Euler and Milstein methods and these two implicit methods can be used to solve stochastic differential equations which are stiff in both the deterministic and the stochastic components. Numerical results are reported to show the convergence properties and the stability properties of these three implicit Taylor methods. The stability analysis and numerical results show that the implicit Euler-Taylor and Milstein-Taylor methods are very promising methods for stiff stochastic differential equations.
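As a hedged sketch of the baseline the abstract compares against (the semi-implicit Euler method, not the implicit Taylor schemes themselves), a drift-implicit Euler-Maruyama step for the stiff linear test equation dX = aX dt + bX dW can be written as follows; all parameter values are illustrative.

```python
import numpy as np

# Sketch under assumptions: drift-implicit (semi-implicit) Euler-Maruyama for
# dX = a*X dt + b*X dW with stiff drift (a << 0). The scheme is implicit in
# the drift and explicit in the diffusion:
#   X_{n+1} = X_n + a*X_{n+1}*h + b*X_n*dW  =>  X_{n+1} = (X_n + b*X_n*dW)/(1 - a*h)
def semi_implicit_euler(a, b, x0, T, n_steps, rng):
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))   # Brownian increment ~ N(0, h)
        x = (x + b * x * dW) / (1.0 - a * h)
    return x

rng = np.random.default_rng(0)
xT = semi_implicit_euler(a=-50.0, b=0.5, x0=1.0, T=1.0, n_steps=100, rng=rng)
```

With a strongly negative drift coefficient, the implicit treatment of the drift keeps the iteration mean-square stable at step sizes where the fully explicit scheme blows up, which is the behaviour the abstract's stability analysis concerns.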
Abstract:
In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system in solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed by employing a combined knowledge representation approach, which uses rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment with implementation of the knowledge-based system approach. For system verification, results from design examples are presented.
Abstract:
Codes C_1, ..., C_M of length n over F_q and an M x N matrix A over F_q define a matrix-product code C = [C_1 ... C_M]·A consisting of all matrix products [c_1 ... c_M]·A. This generalizes the (u | u+v)-, (u+v+w | 2u+v | u)-, (a+x | b+x | a+b+x)-, (u+v | u-v)-, etc. constructions. We study matrix-product codes using linear algebra. This provides a basis for a unified analysis of |C|, of d(C) (the minimum Hamming distance of C), and of C^⊥. It also reveals an interesting connection with MDS codes. We determine |C| when A is non-singular. To lower-bound d(C), we need A to be 'non-singular by columns' (NSC). We investigate NSC matrices. We show that generalized Reed-Muller codes are iterative NSC matrix-product codes, generalizing the construction of Reed-Muller codes, as are the ternary 'Main Sequence codes'. We obtain a simpler proof of the minimum Hamming distance of such families of codes. If A is square and NSC, C^⊥ can be described using C_1^⊥, ..., C_M^⊥ and a transformation of A. This yields d(C^⊥). Finally, we show that an NSC matrix-product code is a generalized concatenated code.
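The (u | u+v) construction mentioned above can be sketched as a matrix-product code over F_2, following the definition C = [c_1 ... c_M]·A; the helper name and the example codewords here are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch: forming a matrix-product codeword [c_1 ... c_M] . A over F_q.
def matrix_product_codeword(codewords, A, q=2):
    """Stack c_1..c_M as columns, multiply by A over F_q, concatenate blocks."""
    X = np.column_stack(codewords)   # n x M, columns are the c_i
    blocks = (X @ A) % q             # n x N; column j is sum_i A[i, j] * c_i
    return blocks.T.reshape(-1)      # read off the N column blocks in order

# A whose columns (1,0)^T and (1,1)^T give the blocks u and u+v:
A = np.array([[1, 1],
              [0, 1]])
u = np.array([1, 0])   # a codeword of C_1
v = np.array([1, 1])   # a codeword of C_2
codeword = matrix_product_codeword([u, v], A)   # blocks: u, then u+v
```

Here the second block is u+v = (1,0)+(1,1) = (0,1) over F_2, so the resulting length-4 codeword is (1, 0, 0, 1), matching the classical (u | u+v) construction as a special case of the matrix-product definition.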
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of only an assumption atom, or a composite assumption consisting of a conjunction of elementary assumptions. The above syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notion of syntactical correctness and semantical consistency of sets of modelling assumptions is defined and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, these being related to algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
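The assumption atom described above can be sketched as a small data structure; the field names are hypothetical, since the abstract does not spell out the triplet's components, but they illustrate how a composite assumption is a conjunction (here, a set) of elementary atoms.

```python
from dataclasses import dataclass

# Sketch under assumptions: an "assumption atom" as an immutable triplet.
# The field names (subject, relation, value) are illustrative, not the paper's.
@dataclass(frozen=True)   # frozen => hashable, so atoms can live in a set
class AssumptionAtom:
    subject: str    # model element the assumption acts on
    relation: str   # e.g. "is", "equals"
    value: str      # e.g. "constant", "negligible"

# A composite assumption: the conjunction of two elementary assumptions.
composite = {
    AssumptionAtom("heat_loss", "is", "negligible"),
    AssumptionAtom("density", "is", "constant"),
}
```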
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution to the automation and speedup of large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the management of the distribution of components of the simulation experiments among the networked single-processor computers to achieve the speedup.
Abstract:
Petrov-Galerkin methods are known to be versatile techniques for the solution of a wide variety of convection-dispersion transport problems, including those involving steep gradients, but have hitherto received little attention from chemical engineers. We illustrate the technique by means of the well-known problem of simultaneous diffusion and adsorption in a spherical sorbent pellet composed of spherical, non-overlapping microparticles of uniform size, and investigate the uptake dynamics. Solutions to adsorption problems exhibit steep gradients when either macropore diffusion or micropore diffusion controls, and the application of classical numerical methods to such problems can present difficulties. In this paper, a semi-discrete Petrov-Galerkin finite element method for numerically solving adsorption problems with steep gradients in bidisperse solids is presented. The numerical solution was found to match the analytical solution when the adsorption isotherm is linear and the diffusivities are constant. Computed results for the Langmuir isotherm and non-constant diffusivity in the microparticle are numerically evaluated for comparison with results of a fitted-mesh collocation method proposed by Liu and Bhatia (Comput. Chem. Engng. 23 (1999) 933-943). The new method is simple, highly efficient, and well-suited to a variety of adsorption and desorption problems involving steep gradients. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. A singular perturbation problem results for such models in the presence of a large Thiele modulus, for which classical numerical methods can present difficulties. For the convection-diffusion-like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of the initial-value type that can be readily solved. In the presence of a constant diffusivity, in slab geometry the convection-like terms are absent, and a combination of a fitted-mesh finite difference method with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus. (C) 2001 Elsevier Science Ltd. All rights reserved.