21 results for "design or documentation process"

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 100.00%

Abstract:

This study evaluated the effect of specimen design and manufacturing process on microtensile bond strength, internal stress distributions (Finite Element Analysis, FEA) and specimen integrity by means of Scanning Electron Microscopy (SEM) and Laser Scanning Confocal Microscopy (LCM). Excite was applied to a flat enamel surface and resin composite build-ups were made in 1-mm increments of Tetric Ceram. Teeth were cut using a diamond disc or a diamond wire, yielding 0.8 mm² stick-shaped specimens, or were shaped with a Micro Specimen Former, yielding dumbbell-shaped specimens (n = 10). Samples were randomly selected for SEM and LCM analysis. The remaining samples underwent microtensile testing, and results were analyzed with ANOVA and the Tukey test. The FEA dumbbell-shaped model showed a more homogeneous stress distribution. Nonetheless, dumbbell-shaped specimens failed at lower bond strengths (21.83 ± 5.44 MPa, group c) than stick-shaped specimens (sectioned with wire: 42.93 ± 4.77 MPa, group a; sectioned with disc: 36.62 ± 3.63 MPa, group b), due to geometric irregularities introduced by the manufacturing process, as noted in the microscopic analyses. It can be concluded that stick-shaped, non-trimmed specimens sectioned with diamond wire are preferred for enamel, as they can be prepared in a less destructive, easier, and more precise way.
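As a minimal sketch of the quantity reported above, microtensile bond strength is simply failure load divided by the bonded cross-sectional area (0.8 mm² for the stick-shaped specimens). The failure loads below are hypothetical, chosen only to illustrate the mean ± SD reporting style used in the abstract.

```python
# Illustrative only: convert microtensile failure loads (N) to bond
# strength (MPa) for 0.8 mm^2 stick specimens, then report mean +/- SD.
import statistics

def bond_strength_mpa(failure_load_n, area_mm2=0.8):
    # stress (MPa) = force (N) / area (mm^2), since 1 N/mm^2 = 1 MPa
    return failure_load_n / area_mm2

loads = [34.3, 33.9, 35.1, 34.8, 33.5]  # hypothetical failure loads in N
strengths = [bond_strength_mpa(f) for f in loads]
print(f"{statistics.mean(strengths):.2f} +/- {statistics.stdev(strengths):.2f} MPa")
```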

Relevance: 100.00%

Abstract:

Compliant mechanisms can achieve a specified motion as a mechanism without relying on the use of joints and pins. They have broad application in precision mechanical devices and Micro-Electro-Mechanical Systems (MEMS), but may lose accuracy and produce undesirable displacements when subjected to temperature changes. These undesirable effects can be reduced by using sensors in combination with control techniques and/or by applying special design techniques at the design stage, a process generally termed "design for precision". This paper describes a design for precision method based on a topology optimization method (TOM) for compliant mechanisms that includes thermal compensation features. The optimization problem emphasizes actuator accuracy and is formulated to yield optimal compliant mechanism configurations that maximize the desired output displacement when a force is applied, while minimizing undesirable thermal effects. To demonstrate the effectiveness of the method, two-dimensional compliant mechanisms are designed considering thermal compensation, and their performance is compared with compliant mechanism designs that do not consider thermal compensation. (C) 2010 Elsevier B.V. All rights reserved.
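The trade-off described above can be caricatured as a weighted objective: reward the displacement produced by the applied force, penalize the displacement induced by a temperature change. This is a hedged toy, far simpler than the paper's TOM formulation; the scoring function, weight and displacement values are all hypothetical.

```python
# Toy "design for precision" score (hypothetical, not the paper's method):
# output stroke under the applied force minus a weighted penalty on the
# thermally induced parasitic displacement.
def precision_objective(u_force, u_thermal, w=0.5):
    return u_force - w * abs(u_thermal)

# Candidate B trades a little actuation stroke for much better thermal
# stability and wins under this weighting.
design_a = precision_objective(u_force=10.0, u_thermal=4.0)
design_b = precision_objective(u_force=9.0, u_thermal=0.5)
print(design_b > design_a)
```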

Relevance: 100.00%

Abstract:

Due to manufacturing or damage processes, brittle materials contain a large number of randomly distributed micro-cracks. The lifetime of these materials is governed by crack propagation under the applied mechanical and thermal loadings. To deal with such materials, the present work develops a boundary element method (BEM) model for the analysis of multiple random crack propagation in plane structures. The adopted formulation is based on the dual BEM, in which singular and hyper-singular integral equations are used. An iterative scheme to predict the crack growth path and crack length increment is proposed; this scheme makes it possible to simulate the localization and coalescence phenomena, which are the main contribution of this paper. Following the fracture mechanics approach, the displacement correlation technique is applied to evaluate the stress intensity factors. The propagation angle and the equivalent stress intensity factor are calculated using the theory of maximum circumferential stress. Examples of multi-fractured domains, loaded up to rupture, illustrate the applicability of the proposed method. (C) 2011 Elsevier Ltd. All rights reserved.
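The maximum circumferential stress criterion mentioned above has a standard closed form: given mode-I and mode-II stress intensity factors, the kink angle is θ = 2·arctan[(K_I − √(K_I² + 8K_II²)) / (4K_II)], and the equivalent K follows from the hoop stress along that direction. A minimal sketch (illustrative values, not the paper's BEM pipeline):

```python
# Maximum circumferential (hoop) stress criterion: propagation angle and
# equivalent stress intensity factor from K_I and K_II.
import math

def propagation_angle(k1, k2):
    """Kink angle (radians); pure mode I propagates straight ahead."""
    if k2 == 0.0:
        return 0.0
    return 2.0 * math.atan((k1 - math.sqrt(k1**2 + 8.0 * k2**2)) / (4.0 * k2))

def equivalent_sif(k1, k2):
    """Equivalent K along the kink direction for the same criterion."""
    t = propagation_angle(k1, k2)
    c = math.cos(t / 2.0)
    return c * (k1 * c**2 - 1.5 * k2 * math.sin(t))

theta = propagation_angle(0.0, 1.0)   # pure mode II
print(math.degrees(theta))            # about -70.5 degrees, the classic result
```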

Relevance: 100.00%

Abstract:

Sodium diclofenac (SD) release from dosage forms has been studied under different conditions. However, no dissolution method that is discriminatory enough to reflect slight changes in formulation or manufacturing process, and that can be effectively correlated with the biological properties of the dosage form, has been reported. This study sought to develop three different formulations of SD-containing matrix tablets and to determine the effect of agitation speed on their dissolution profiles. Formulations F1, F2 and F3 were developed using hypromellose (10, 20 and 30%, respectively) and other conventional excipients. Dissolution tests were carried out in phosphate buffer pH 6.8 at 37 °C using apparatus II at 50, 75 or 100 rpm. Dissolution efficiency (DE), T50 and T90 were determined and plotted as functions of agitation speed and hypromellose concentration. Regarding DE, F2 was more sensitive to variations in agitation speed than F1 and F3. Increasing hypromellose concentration reduced DE values, independently of agitation speed. Analysis of T50 and T90 suggests that F1 is less sensitive to variations in agitation speed than F2 and F3. The most discriminatory dissolution conditions were observed at 50 rpm. The results suggest that comparisons of the dissolution performance of SD matrix tablets should take into account polymer concentration and agitation conditions. (C) 2009 Published by Elsevier B.V.
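The metrics quoted above have simple definitions: dissolution efficiency is the area under the percent-dissolved curve divided by the area of 100% dissolution over the same interval, and T50/T90 are the times to reach 50%/90% dissolved. A hedged sketch with trapezoidal integration and linear interpolation; the time points and dissolution values are hypothetical.

```python
# DE (%) via the trapezoidal rule, and time-to-target by interpolation.
def dissolution_efficiency(times, dissolved):
    auc = sum((dissolved[i] + dissolved[i + 1]) / 2.0 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    return 100.0 * auc / (100.0 * (times[-1] - times[0]))

def time_to_percent(times, dissolved, target):
    for i in range(len(times) - 1):
        if dissolved[i] <= target <= dissolved[i + 1]:
            frac = (target - dissolved[i]) / (dissolved[i + 1] - dissolved[i])
            return times[i] + frac * (times[i + 1] - times[i])
    return None  # target not reached within the sampled window

t = [0, 1, 2, 4, 8, 12]       # hours (hypothetical)
q = [0, 15, 30, 55, 85, 95]   # % dissolved (hypothetical)
print(dissolution_efficiency(t, q), time_to_percent(t, q, 50.0))
```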

Relevance: 100.00%

Abstract:

Mitochondrial DNA (mtDNA) population data for forensic purposes are still scarce for some populations, which may limit the evaluation of forensic evidence, especially when the rarity of a haplotype needs to be determined in a database search. In order to improve the collection of mtDNA lineages from the Iberian and South American subcontinents, we here report the results of a collaborative study involving nine laboratories from the Spanish and Portuguese Speaking Working Group of the International Society for Forensic Genetics (GHEP-ISFG) and EMPOP. The individual laboratories contributed population data that were generated over the past 10 years but in most cases had not been made available to the scientific community. A total of 1019 haplotypes from Iberia (Basque Country, two general Spanish populations, two northern and one central Portuguese population) and Latin America (three populations from Sao Paulo) were collected, reviewed and harmonized according to defined EMPOP criteria. The majority of the data ambiguities found during the review (41 in total) were transcription errors, confirming that the documentation process is still the most error-prone stage in reporting mtDNA population data, especially when performed manually. This GHEP-EMPOP collaboration has significantly improved the quality of the individual mtDNA datasets and adds mtDNA population data as a valuable resource to the EMPOP database (www.empop.org). (C) 2010 Elsevier Ireland Ltd. All rights reserved.
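The "rarity of a haplotype" determined in a database search reduces, in its simplest form, to a relative frequency: how often the queried haplotype occurs among the database entries. A minimal sketch, where haplotypes are represented as tuples of differences from a reference sequence and all entries are hypothetical:

```python
# Estimate haplotype frequency by counting exact matches in a database.
from collections import Counter

database = [
    ("16126C", "16294T"),
    ("16126C", "16294T"),
    ("16189C",),
    ("73G", "263G"),
]

def haplotype_frequency(db, query):
    counts = Counter(db)
    return counts[query] / len(db)

print(haplotype_frequency(database, ("16126C", "16294T")))  # 0.5
```

Real EMPOP searches are more involved (alignment conventions, range restrictions, frequency confidence bounds), but the counting step is the core idea.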

Relevance: 100.00%

Abstract:

The first stars that formed after the Big Bang were probably massive(1), and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present(2,3). Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed being enriched by massive stars(4) (that is, with higher alpha-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe(4), which usually arises through nucleosynthesis in low-mass stars(5) (via the slow-neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude(6), which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also over-abundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars(7), whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.
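The abundance ratios discussed above use the standard bracket notation: [X/Fe] = log10(N_X/N_Fe)_star − log10(N_X/N_Fe)_sun, so a positive value means the star is over-abundant in element X relative to iron compared with the Sun. A small sketch with hypothetical number densities:

```python
# Bracket abundance notation: [X/Fe] relative to the solar ratio.
import math

def x_over_fe(n_x_star, n_fe_star, n_x_sun, n_fe_sun):
    return math.log10(n_x_star / n_fe_star) - math.log10(n_x_sun / n_fe_sun)

# A star with twice the solar Ba/Fe ratio (hypothetical densities):
print(x_over_fe(2.0, 1.0, 1.0, 1.0))   # log10(2) ~ +0.30: over-abundant
```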

Relevance: 100.00%

Abstract:

Estrogens exert important physiological effects through the modulation of two human estrogen receptor (hER) subtypes, alpha (hER alpha) and beta (hER beta). Because the levels and relative proportion of hER alpha and hER beta differ significantly in different target cells, selective hER ligands could target specific tissues or pathways regulated by one receptor subtype without affecting the other. To understand the structural and chemical basis by which small molecule modulators are able to discriminate between the two subtypes, we have applied three-dimensional target-based approaches employing a series of potent hER ligands. Comparative molecular field analysis (CoMFA) studies were applied to a data set of 81 hER modulators, for which binding affinity values were collected for both hER alpha and hER beta. Significant statistical coefficients were obtained (hER alpha, q(2) = 0.76; hER beta, q(2) = 0.70), indicating the internal consistency of the models. The generated models were validated using external test sets, and the predicted values were in good agreement with the experimental results. Five hER crystal structures were used in GRID/PCA investigations to generate molecular interaction field (MIF) maps. hER alpha and hER beta were separated using one factor. The resulting 3D information was integrated with the aim of revealing the most relevant structural features involved in hER subtype selectivity. The final QSAR and GRID/PCA models and the information gathered from the 3D contour maps should be useful for the design of novel hER modulators with improved selectivity.
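The q(2) values quoted above are cross-validated correlation coefficients, conventionally computed as q² = 1 − PRESS/SS, where PRESS is the sum of squared errors of leave-one-out predictions and SS the total sum of squares around the mean observed activity. A minimal sketch (not the authors' CoMFA pipeline) with hypothetical data:

```python
# Leave-one-out cross-validated q^2 = 1 - PRESS / SS.
def q_squared(observed, loo_predicted):
    mean = sum(observed) / len(observed)
    press = sum((o - p) ** 2 for o, p in zip(observed, loo_predicted))
    ss = sum((o - mean) ** 2 for o in observed)
    return 1.0 - press / ss

y     = [6.1, 7.4, 8.0, 5.2, 6.8]   # observed activities (hypothetical)
y_loo = [6.3, 7.1, 7.7, 5.6, 6.6]   # leave-one-out predictions (hypothetical)
print(round(q_squared(y, y_loo), 2))
```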

Relevance: 40.00%

Abstract:

The photo-Fenton process (Fe²⁺/Fe³⁺, H₂O₂, UV light) is one of the most efficient advanced oxidation processes for the mineralization of organic pollutants in industrial effluents and wastewater. The overall rate of the photo-Fenton process is controlled by the rate of the photolytic step that converts Fe³⁺ back to Fe²⁺. In this paper, the effect of sulfate or chloride ions on the net yield of Fe²⁺ during the photolysis of Fe³⁺ has been investigated in aqueous solution at pH 3.0 and 1.0 in the absence of hydrogen peroxide. A kinetic model based on the principal reactions occurring in the system fits the Fe²⁺ formation data satisfactorily. Both the experimental data and the model prediction show that, at pH 3.0, the availability of Fe²⁺ produced by photolysis of Fe³⁺ is inhibited much more in the presence of sulfate ion than in the presence of chloride ion as a function of irradiation time.
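A heavily simplified version of the kind of kinetic model described above treats the photolytic Fe(III) → Fe(II) step as pseudo-first order and integrates it numerically; a smaller effective rate constant then mimics the stronger inhibition by sulfate relative to chloride. This is an illustrative sketch, not the paper's model, and all constants are hypothetical.

```python
# Explicit-Euler integration of dFe2/dt = k * Fe3, Fe3 = total - Fe2.
def fe2_profile(fe3_initial, k_photo, dt=1.0, steps=60):
    fe2, fe3, profile = 0.0, fe3_initial, []
    for _ in range(steps):
        formed = k_photo * fe3 * dt
        fe2 += formed
        fe3 -= formed
        profile.append(fe2)
    return profile

chloride = fe2_profile(1.0, 0.05)   # faster net Fe(II) formation
sulfate  = fe2_profile(1.0, 0.02)   # stronger inhibition (hypothetical k)
print(chloride[-1] > sulfate[-1])
```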

Relevance: 40.00%

Abstract:

The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
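The quantity being improved above, the damping factor of an electromechanical mode, is computed from a complex eigenvalue λ = σ + jω of the linearised system as ζ = −σ/|λ|. A small sketch with a hypothetical, lightly damped inter-area mode:

```python
# Damping ratio of an oscillatory mode from its complex eigenvalue.
def damping_ratio(eig):
    return -eig.real / abs(eig)

mode = complex(-0.25, 4.0)   # hypothetical inter-area mode (rad/s)
zeta = damping_ratio(mode)
print(f"{100 * zeta:.1f}% damping")
```

Tuning the supplementary controllers shifts such eigenvalues further into the left half-plane, raising ζ across operating conditions.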

Relevance: 40.00%

Abstract:

The main objective of this research was to evaluate the potential use of a bench-scale anaerobic sequencing batch biofilm reactor (ASBBR) containing mineral coal as inert support for the removal of sulfide and organic matter from the effluent of an ASBBR (1.2 m³) used for treatment of sulfate-rich wastewater. The cycle time was 48 h, comprising feeding (2 h), reaction with continuous liquid recirculation (44 h) and discharge (2 h). COD removal efficiency reached up to 90% and effluent total sulfide concentrations (H₂S, HS⁻, S²⁻) remained in the range of 1.5 to 7.5 mg·L⁻¹ during the 50 days of operation (25 cycles). Un-ionized and ionized sulfides were converted biologically to elemental sulfur (S⁰) under oxygen-limited conditions. The results obtained in the bench-scale reactor were used to design a pilot-scale ASBBR for use in post-treatment to meet the emission standards (sulfide and COD) for sulfate reduction. In the pilot-scale reactor, with a total volume of 0.43 m³, COD and total sulfide removal reached 88% and 57%, respectively, for a cycle time of 48 h (70 days of operation, or 35 cycles).
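The removal-efficiency figures above follow the usual definition: the percentage drop from influent to effluent concentration. A trivial but concrete sketch, with hypothetical concentrations in mg/L:

```python
# Removal efficiency (%) from influent and effluent concentrations.
def removal_efficiency(c_in, c_out):
    return 100.0 * (c_in - c_out) / c_in

print(removal_efficiency(500.0, 50.0))   # 90.0 (% COD removal)
```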

Relevance: 40.00%

Abstract:

Concrete offshore platforms are subjected to several loading combinations and thus require as general an analysis as possible. They can be designed using concepts adopted for shell elements, but the resistance to shear forces must be verified in particular cross-sections. This work on the design of shell elements uses the three-layer shell theory. The elements are subjected to combined membrane and plate loading, totaling eight components of internal forces: three membrane forces, three moments (two out-of-plane bending moments and one in-plane, or torsion, moment) and two shear forces. The adopted design method, which uses the iterative process proposed by Lourenço & Figueiras (1993) based on the equilibrium equations developed by Gupta (1986), is compared with results for experimentally tested shell elements found in the literature, using the program DIANA.
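The three-layer (sandwich) idea can be sketched in its simplest form: the two outer layers carry the membrane forces and bending moments, decomposed into in-plane forces via a lever arm between the layers, while the core transfers the transverse shear. This is a hedged illustration, not the paper's full design procedure; loads and lever arm are hypothetical.

```python
# Sandwich-model decomposition: in-plane force per unit length carried
# by the top and bottom layers from a membrane force n and a moment m.
def outer_layer_forces(n, m, lever_arm):
    top = n / 2.0 + m / lever_arm
    bottom = n / 2.0 - m / lever_arm
    return top, bottom

# membrane force n = 200 kN/m, moment m = 30 kNm/m, lever arm 0.6 m
print(outer_layer_forces(200.0, 30.0, 0.6))
```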

Relevance: 40.00%

Abstract:

The computational design of a composite where the properties of its constituents change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which leads to changes in the composite material unit cell until desired properties (or performance) are obtained. Such method has been applied to several types of materials in the last few years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local level (e.g. microstructural level). Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered to represent a fully continuous material variation during the design process. Thus the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation, and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally the influence of material gradation on extreme materials is investigated, which includes materials with near-zero shear modulus, and materials with negative Poisson's ratio.
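The one-dimensional verification mentioned above has a well-known closed form: for loading along the gradation direction the layers act in series, so the effective modulus is the harmonic mean E_eff = L / ∫ dx/E(x); for a linear gradation E(x) = E0 + (E1 − E0)x on a unit length this evaluates to (E1 − E0)/ln(E1/E0). A hedged sketch comparing a midpoint-rule estimate against that closed form, with hypothetical moduli:

```python
# Effective modulus of a 1D graded bar (series/harmonic-mean rule),
# numerical vs analytical for a linear gradation.
import math

def effective_modulus(e_of_x, n=10000):
    dx = 1.0 / n
    compliance = sum(dx / e_of_x((i + 0.5) * dx) for i in range(n))
    return 1.0 / compliance

e0, e1 = 1.0, 5.0
numeric = effective_modulus(lambda x: e0 + (e1 - e0) * x)
exact = (e1 - e0) / math.log(e1 / e0)
print(numeric, exact)   # should agree closely
```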

Relevance: 40.00%

Abstract:

Micro-tools offer significant promise in a wide range of applications such as cell manipulation, microsurgery, and micro/nanotechnology processes. Such special micro-tools consist of multi-flexible structures actuated by two or more piezoceramic devices that must generate output displacements and forces at different specified points of the domain and in different directions. The micro-tool structure acts as a mechanical transformer by amplifying and changing the direction of the piezoceramics' output displacements. The design of these micro-tools involves minimizing the coupling among movements generated by the various piezoceramics. To obtain enhanced micro-tool performance, the concept of multifunctional and functionally graded materials is extended by tailoring the elastic and piezoelectric properties of the piezoceramics while simultaneously optimizing the multi-flexible structural configuration using multiphysics topology optimization. The design process considers the influence of piezoceramic property gradation and also its polarization sign. The method is implemented considering continuum material distribution with special interpolation of fictitious densities in the design domain. As examples, designs of a single piezoactuator, an XY nano-positioner actuated by two graded piezoceramics, and a micro-gripper actuated by three graded piezoceramics are considered. The results show that material gradation plays an important role in improving actuator performance, and may also lead to optimal displacements and coupling ratios with a reduced amount of piezoelectric material. The present examples are limited to two-dimensional models because many of the applications for such micro-tools are planar devices. Copyright (c) 2008 John Wiley & Sons, Ltd.
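The coupling that the design minimizes can be expressed as a simple ratio: when one piezoceramic is driven, the worst-case displacement it leaks to the other output ports divided by the displacement at its own port. A minimal sketch with hypothetical displacements:

```python
# Worst-case coupling ratio: parasitic output over intended output.
def coupling_ratio(own_output, other_outputs):
    return max(abs(u) for u in other_outputs) / abs(own_output)

# Driving actuator 1 moves its own port 2.0 um and leaks 0.1 and
# 0.05 um to the other ports: 5% worst-case coupling.
print(coupling_ratio(2.0, [0.1, 0.05]))
```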

Relevance: 40.00%

Abstract:

The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) for every m produced items and deciding, at each inspection, whether the fraction of conforming items has decreased. If the inspected item is non-conforming, production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, a probabilistic model is developed that classifies the examined item repeatedly until either a conforming or b non-conforming classifications are observed; whichever event occurs first determines the final classification of the item. Properties of an ergodic Markov chain were used to obtain an expression for the average cost of the control system, which can be optimized over three parameters: the sampling interval of the inspections (m), the number of repeated conforming classifications (a), and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first is a simple preventive policy: the production system is adjusted after every n produced items and no inspection is performed. The second classifies the examined item repeatedly a fixed number of times, r, and considers it conforming if most classification results are conforming. Results indicate that the current proposal performs better than the procedure with a fixed number of repeated classifications. On the other hand, depending on the degree of errors and the costs, the preventive policy can on average be the most economical alternative among those considered. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
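The repeated-classification rule described above (re-examine until either a conforming or b non-conforming classifications accumulate, whichever comes first) can be evaluated exactly by dynamic programming over the pair of counts. A hedged sketch, where p is the probability a single examination reports "conforming" and p, a, b are hypothetical:

```python
# Probability the final verdict is "conforming" under the race-to-a-or-b
# repeated classification rule, by memoized recursion over the counts.
from functools import lru_cache

def p_final_conforming(p, a, b):
    @lru_cache(maxsize=None)
    def f(i, j):                 # i conforming, j non-conforming so far
        if i == a:
            return 1.0
        if j == b:
            return 0.0
        return p * f(i + 1, j) + (1.0 - p) * f(i, j + 1)
    return f(0, 0)

# A truly conforming item misread 10% of the time, with a = 2, b = 2:
print(p_final_conforming(0.9, 2, 2))
```

Repetition raises the chance of a correct final verdict above the single-inspection accuracy (0.972 versus 0.9 in this hypothetical setting).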

Relevance: 40.00%

Abstract:

The procedure for online process control by attributes consists of inspecting a single item for every m produced items. On the basis of the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles on online process control stop the production process for an adjustment when the inspected item is non-conforming (production is then restarted in control; this is here called a corrective adjustment). Moreover, articles on this subject do not present semi-economical designs (which may yield large quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is made at every m produced items. If a preventive adjustment is decided upon, no item is inspected. Otherwise, the m-th item is inspected; if it conforms, production goes on; otherwise, an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
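The two actions decided at every m items can be contrasted with a toy per-cycle cost: a preventive adjustment always pays the adjustment cost and skips inspection, while the corrective route pays the inspection cost plus the adjustment cost weighted by the chance of finding a non-conforming item. This is a hypothetical illustration, much simpler than the article's cost function; which action is cheaper depends entirely on the costs and probabilities chosen.

```python
# Toy expected cost of one control cycle (hypothetical parameters).
def expected_cycle_cost(p_nonconforming, c_inspect, c_adjust, preventive):
    if preventive:
        return c_adjust                       # always adjust, never inspect
    # inspect; adjust only when a non-conforming item is found
    return c_inspect + p_nonconforming * c_adjust

insp = expected_cycle_cost(0.1, 1.0, 20.0, preventive=False)
prev = expected_cycle_cost(0.1, 1.0, 20.0, preventive=True)
print(insp, prev)
```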