925 results for Weighted histogram analysis method
Abstract:
Since 1964, the Center for Geochronological Research (CPGeo), one of the interdepartmental centers of the Instituto de Geociências (IG) of the University of São Paulo, has developed studies related to several geological processes associated with different rock types. Isotope Dilution Thermal Ionization Mass Spectrometry (ID-TIMS) has been the technique widely used in the CPGeo U-Pb laboratory. It provides reliable and accurate results in the age determination of superposed events. However, open-system behavior such as Pb loss, the inheritance problem and metamictization processes have driven a much richer understanding of the power and limitations of U-Pb geochronology and thermochronology. In this article, we present the current methodology used at the CPGeo-IGc-USP U-Pb laboratory and the improvements to the ID-TIMS method, and report high-precision U-Pb data from zircon, monazite, epidote, titanite, baddeleyite and rutile from different rock types of several domains of south-southeast Brazil, Argentina and Uruguay.
Abstract:
Interactions of the cationic dye methylene blue (MB) with clay particles in aqueous suspension have been extensively studied. As is known, the number of natural negative charges on the clay significantly modifies the particle sizes dispersed in water and therefore the nature of the interaction with the dye. This work used UV-Vis spectroscopy to evaluate how clay particle size affects the adsorption and rearrangement of the dye molecules in aqueous systems. Light-scattering measurements confirmed that larger particles are found in suspensions containing the high-charge clays, as the visible absorption band related to MB aggregates (570 nm) prevailed in these suspensions.
Abstract:
A simple and fast capillary zone electrophoresis (CZE) method has been developed and validated for the quantification of the non-nucleoside reverse transcriptase inhibitor (NNRTI) nevirapine in pharmaceuticals. The analysis was optimized using 10 mmol L-1 sodium phosphate buffer at pH 2.5, +25 kV applied voltage, hydrodynamic injection at 0.5 psi for 5 s, and direct UV detection at 200 nm. Diazepam (50.0 µg mL-1) was used as the internal standard. Under these conditions, nevirapine was analyzed in less than 2.5 min. The analytical curve presented a correlation coefficient of 0.9994. The limits of detection and quantification were 1.4 µg mL-1 and 4.3 µg mL-1, respectively. Intra- and inter-day precision, expressed as relative standard deviations, were 1.4% and 1.3%, respectively, and the mean recovery was 100.81%. The active pharmaceutical ingredient was subjected to hydrolysis (acid, basic and neutral) and oxidative stress conditions. No interference from degradation products or tablet excipients was observed. The method proved to be rapid, simple, precise, accurate and economical for the determination of nevirapine in pharmaceuticals, and it is suitable for routine quality control analysis, since CE offers quicker method development and significantly reduced operating costs.
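The validation figures quoted above (correlation coefficient, LOD, LOQ) follow the standard calibration-curve recipe, in which LOD ≈ 3.3σ/slope and LOQ ≈ 10σ/slope with σ taken from the fit residuals. A minimal sketch, using made-up calibration points rather than the paper's data:

```python
import numpy as np

# Hypothetical calibration points (concentration in ug/mL vs. detector
# response); illustrative values only, not the published data.
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
area = np.array([1.02, 2.05, 3.98, 8.10, 16.05])

# Linear least-squares fit: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Correlation coefficient of the analytical curve
r = np.corrcoef(conc, area)[0, 1]

# ICH-style limits from the residual standard deviation of the fit
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)   # ddof=2: two fitted parameters
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

By construction LOQ/LOD is fixed at 10/3.3, so only one of the two limits carries independent information about the curve.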
Abstract:
A photometric procedure for the determination of ClO- in tap water employing a miniaturized multicommuted flow analysis setup and an LED-based photometer is described. The analytical procedure was implemented using leucocrystal violet (LCV; 4,4',4''-methylidynetris(N,N-dimethylaniline), C25H31N3) as the chromogenic reagent. The solenoid micropumps employed to propel the solutions were assembled together with the photometer to compose a compact unit of small dimensions. After optimization of the control variables, the system was applied to the determination of ClO- in tap water samples; to assess accuracy, the samples were also analyzed by an independent method. A paired t-test between the results obtained by the two methods showed no significant difference at the 95% confidence level. Other useful features include low reagent consumption (2.4 µg of LCV per determination), a linear response ranging from 0.02 to 2.0 mg L-1 ClO-, a relative standard deviation of 1.0% (n = 11) for samples containing 0.2 mg L-1 ClO-, a detection limit of 6.0 µg L-1 ClO-, a sampling throughput of 84 determinations per hour, and a waste generation of 432 µL per determination.
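The accuracy check described, comparing the proposed system against an independent method with a paired t-test at the 95% confidence level, can be sketched as follows. The concentrations are invented stand-ins, and the statistic is computed from scratch so no statistics library is needed:

```python
import math

# Hypothetical ClO- results (mg/L) for the same tap-water samples
# measured by the proposed flow system and by a reference method;
# the values are illustrative only, not the study's data.
proposed  = [0.21, 0.55, 1.02, 1.48, 1.95]
reference = [0.20, 0.57, 1.00, 1.50, 1.93]

# Paired t-test: H0 says the mean of the per-sample differences is zero
d = [a - b for a, b in zip(proposed, reference)]
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
t_stat = mean_d / math.sqrt(var_d / n)

# Two-tailed critical value for n-1 = 4 degrees of freedom at 95%
T_CRIT = 2.776
no_significant_difference = abs(t_stat) < T_CRIT
```

Pairing matters here: each difference cancels the sample-to-sample concentration variation, so the test isolates the systematic offset between methods.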
Abstract:
This work presents a fully non-linear finite element formulation for shell analysis comprising linear strain variation along the thickness of the shell and geometrically exact description for curved triangular elements. The developed formulation assumes positions and generalized unconstrained vectors as the variables of the problem, not displacements and finite rotations. The full 3D Saint-Venant-Kirchhoff constitutive relation is adopted and, to avoid locking, the rate of thickness variation enhancement is introduced. As a consequence, the second Piola-Kirchhoff stress tensor and the Green strain measure are employed to derive the specific strain energy potential. Curved triangular elements with cubic approximation are adopted using simple notation. Selected numerical simulations illustrate and confirm the objectivity, accuracy, path independence and applicability of the proposed technique.
Abstract:
The alkali-aggregate reaction (AAR) is a chemical reaction that provokes heterogeneous expansion of concrete and reduces important properties such as Young's modulus, leading to a reduction in the structure's useful life. In this study, a parametric model is employed to determine the spatial distribution of the concrete expansion, combining normalized factors that influence the reaction through an AAR expansion law. Optimization techniques were employed to fit the numerical results to observations in a real structure. A three-dimensional version of the model was implemented in a commercial finite element package (ANSYS) and verified in the analysis of an accelerated mortar test. Comparisons were made between two AAR mathematical descriptions of the mechanical phenomenon, using the same methodology, and an expansion curve obtained from experiment. Some parametric studies are also presented. The numerical results compare very well with the experimental data, validating the proposed method.
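Adjusting model parameters to an experimental expansion curve, as described above, is at heart a least-squares problem. A rough sketch, assuming a simple first-order expansion law eps(t) = eps_inf·(1 − e^(−t/tau)) and a brute-force grid search in place of the paper's optimizer; both the law and the "measured" data are illustrative, not the paper's formulation:

```python
import math

# Candidate AAR expansion law (illustrative, not the paper's model)
def model(t, eps_inf, tau):
    return eps_inf * (1.0 - math.exp(-t / tau))

times = [0, 50, 100, 200, 400]                 # days (synthetic)
measured = [0.0, 0.039, 0.063, 0.086, 0.099]   # expansion, % (synthetic)

# Brute-force grid search over (eps_inf, tau), minimizing the sum of
# squared errors; a real study would use a proper optimizer instead.
best = None
for eps_inf in [x / 1000 for x in range(50, 151)]:   # 0.050 .. 0.150
    for tau in range(10, 201):                        # 10 .. 200 days
        sse = sum((model(t, eps_inf, tau) - m) ** 2
                  for t, m in zip(times, measured))
        if best is None or sse < best[0]:
            best = (sse, eps_inf, tau)
sse, eps_inf_fit, tau_fit = best
```

The synthetic data were generated from eps_inf = 0.1, tau = 100, so the search should land close to those values; with real field observations the residual would also absorb measurement noise.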
Abstract:
Carrying information about the microstructure and stress behaviour of ferromagnetic steels, magnetic Barkhausen noise (MBN) has been used as the basis for effective non-destructive testing methods, opening new areas in industrial applications. One of the factors that determine the quality and reliability of MBN analysis is the way information is extracted from the signal. Commonly, simple scalar parameters such as the amplitude maximum and the signal root mean square are used to characterize the information content. This paper presents a new approach based on time-frequency analysis. The experimental test case uses MBN signals to characterize hardness gradients in an AISI 4140 steel. To that purpose, different time-frequency (TFR) and time-scale (TSR) representations are assessed: the spectrogram, the Wigner-Ville distribution, the Capongram, the ARgram obtained from an autoregressive model, the scalogram, and the Mellingram obtained from a Mellin transform. It is shown that, due to the nonstationary characteristics of MBN, TFRs can provide a rich and new panorama of these signals. Extraction techniques for some time-frequency parameters are used to enable a diagnostic process. Comparison with results obtained by the classical method highlights the improvement in diagnosis provided by the proposed method.
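Of the representations listed, the spectrogram is the simplest: the magnitude of a short-time Fourier transform, i.e. windowed FFTs of successive signal segments. A minimal sketch on a synthetic nonstationary signal standing in for an MBN burst; the window and hop lengths are arbitrary choices, not the paper's settings:

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)     # 1 s of signal, 1000 samples
# Chirp-like nonstationary stand-in: frequency sweeps 50 -> 200 Hz
sig = np.sin(2 * np.pi * (50 * t + 75 * t ** 2))

win = 128                         # analysis window length (samples)
hop = 64                          # hop between adjacent windows
window = np.hanning(win)

# Slice the signal into overlapping windowed frames, then FFT each
# frame: rows are time steps, columns are frequency bins.
frames = [sig[i:i + win] * window
          for i in range(0, len(sig) - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # time x frequency magnitude
```

Scanning `spec` row by row shows the dominant bin drifting upward in frequency, which is exactly the kind of time-localized information a single global FFT of the whole record would average away.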
Abstract:
In this work, the effects of indenter tip roundness on the load-depth indentation curves were analyzed using finite element modeling. The tip roundness level was studied based on the ratio between tip radius and maximum penetration depth (R/h(max)), which varied from 0.02 to 1. The proportional curvature constant (C), the exponent of depth during loading (alpha), the initial unloading slope (S), the correction factor (beta), the level of piling-up or sinking-in (h(c)/h(max)), and the ratio h(max)/h(f) are shown to be strongly influenced by the ratio R/h(max). The hardness (H) was found to be independent of R/h(max) in the range studied. The Oliver and Pharr method was successful in following the variation of h(c)/h(max) with the ratio R/h(max) through the variation of S with the ratio R/h(max). However, this work confirmed the differences between the hardness values calculated using the Oliver-Pharr method and those obtained directly from finite element calculations; these differences derive from the error in area calculation that occurs when given combinations of indented material properties are present. The ratio of plastic work to total work (W(p)/W(t)) was found to be independent of the ratio R/h(max), which demonstrates that methods for the calculation of mechanical properties based on the indentation energy are potentially not susceptible to errors caused by tip roundness.
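For context, the Oliver-Pharr quantities discussed above (S, h_c/h_max, H) are linked by h_c = h_max − ε·P_max/S and H = P_max/A(h_c). A sketch under idealized assumptions: a perfect Berkovich area function A = 24.5·h_c² (which is precisely what tip roundness spoils), ε = 0.75, and invented input values rather than the paper's finite element results:

```python
# Illustrative indentation quantities (not the paper's data)
P_max = 50.0e-3    # peak load, N
h_max = 1.0e-6     # maximum penetration depth, m
S = 2.0e5          # initial unloading slope dP/dh, N/m
EPS = 0.75         # geometric constant for a paraboloid indenter

# Oliver-Pharr contact depth: elastic deflection of the surface is
# subtracted from the total depth via the unloading stiffness S
h_c = h_max - EPS * P_max / S

# Ideal Berkovich projected contact area; a rounded tip makes the
# true area larger than this at shallow depths
A_c = 24.5 * h_c ** 2

# Hardness = mean contact pressure at peak load
H = P_max / A_c    # Pa
```

With these numbers h_c ≈ 0.81·h_max and H ≈ 3.1 GPa; underestimating A_c through the ideal area function is exactly the error source the abstract attributes to certain material-property combinations.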
Abstract:
This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents, uniquely, a null determinant at the natural frequencies. In comparison with the classical DSM, the formulation presented herein has some major advantages: local mode shapes are preserved in the formulation, so that for any positive frequency the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method for a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, and scaling is also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
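The secant method referred to above iterates on two frequency estimates and needs no derivatives, which is what makes it attractive for transcendental determinant equations. A generic sketch on a classical transcendental frequency equation (the clamped-free Euler beam characteristic equation, used here only as a stand-in for the DSM determinant):

```python
import math

def f(w):
    # Clamped-free beam frequency equation; stands in for det(DSM(w))
    return math.cos(w) * math.cosh(w) + 1.0

def secant(f, w0, w1, tol=1e-10, max_iter=50):
    """Secant iteration: w2 = w1 - f(w1)*(w1-w0)/(f(w1)-f(w0))."""
    for _ in range(max_iter):
        fw0, fw1 = f(w0), f(w1)
        if fw1 == fw0:          # flat secant: cannot proceed
            break
        w2 = w1 - fw1 * (w1 - w0) / (fw1 - fw0)
        if abs(w2 - w1) < tol:
            return w2
        w0, w1 = w1, w2
    return w1

# First root of cos(w)cosh(w) + 1 = 0 lies near 1.8751
root = secant(f, 1.5, 2.0)
```

The cosh factor is also a miniature of the overflow problem the abstract mentions: at large w it explodes, which is why limiting frequencies and scaling are needed before the iteration is safe on the real DSM.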
Abstract:
The effects of chromium or nickel oxide additions on the composition of Portland clinker were investigated by X-ray powder diffraction associated with pattern analysis by the Rietveld method. The co-processing of industrial waste in Portland cement plants is an alternative solution to the problem of final disposal of hazardous waste. Industrial waste containing chromium or nickel is hazardous and difficult to dispose of. It was observed that, at concentrations of up to 1% by mass, chromium or nickel oxide additions do not cause significant alterations in Portland clinker composition. © 2008 International Centre for Diffraction Data.
Abstract:
Background: The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges center on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to the usual methods of differential expression, initially designed for paired analysis. Results: We propose a web platform named ProbFAST for MDE analysis, which uses Bayesian inference to identify key genes that are intuitively prioritized by means of probabilities. A simulation study revealed that our method performs better than other approaches, and when applied to public expression data, it demonstrated its flexibility in recovering relevant genes biologically associated with normal and abnormal processes. Conclusions: ProbFAST is a freely accessible web-based application that enables MDE analysis on a global scale. It offers an efficient methodological approach for MDE analysis of sets of genes that are turned on and off in relation to functional information during the evolution of a tumor or tissue differentiation. The ProbFAST server can be accessed at http://gdm.fmrp.usp.br/probfast.
Abstract:
Background: The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams, where none has complete knowledge of their counterparts' fields. As a result, knowledge exchange may often be characterized by miscommunication, leading to misinterpretation and ultimately to errors in research and even clinical practice. Although communication has a central role in interdisciplinary collaboration and miscommunication can have a negative impact on research processes, to the best of our knowledge no study has yet explored how data analysis specialists and clinical researchers communicate over time. Methods/Principal Findings: We conducted qualitative analysis of encounters between clinical researchers and data analysis specialists (an epidemiologist, a clinical epidemiologist, and a data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology for the extraction of emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology, looking for potential interventions to improve this process. Four major themes emerged. Definitions using lay language were frequently employed as a way to bridge the language gap between the specialties. Thought experiments presented a series of "what if" situations that helped clarify how a method or piece of information from the other field would behave if exposed to alternative situations, ultimately aiding in explaining its main objective. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, helping specialists understand the current context based on an understanding of their final goal.
Conclusion/Significance: The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.
Abstract:
Background: Detailed analysis of the dynamic interactions among the biological, environmental, social, and economic factors that favour the spread of certain diseases is extremely useful for designing effective control strategies. Diseases like tuberculosis, which kills somebody every 15 seconds somewhere in the world, require methods that take the disease dynamics into account in order to design truly efficient control and surveillance strategies. The usual, well-established statistical approaches provide insights into the cause-effect relationships that favour disease transmission, but they only estimate risk areas and spatial or temporal trends. Here we introduce a novel approach that allows the dynamical behaviour of disease spreading to be characterized. This information can subsequently be used to validate mathematical models of the dissemination process, from which the underlying mechanisms responsible for this spreading can be inferred. Methodology/Principal Findings: The method presented here is based on the analysis of the spread of tuberculosis in a Brazilian endemic city during five consecutive years. Detailed analysis of the spatio-temporal correlation of the yearly geo-referenced data, using different characteristic times of the disease evolution, allowed us to trace the temporal path of the aetiological agent, to locate the sources of infection, and to characterize the dynamics of disease spreading. Consequently, the method also allowed the identification of socio-economic factors that influence the process. Conclusions/Significance: The information obtained can contribute to more effective budget allocation, drug distribution and recruitment of skilled human resources, as well as guiding the design of vaccination programs. We propose that this novel strategy can also be applied to the evaluation of other diseases, as well as other social processes.
Abstract:
Esophageal ulcer (EU) represents an important comorbidity in AIDS. We evaluated the prevalence of EU, the accuracy of the endoscopic and histologic methods used to investigate viral EU in HIV-positive Brazilian patients, and the numerical relevance of tissue sampling. A total of 399 HIV-positive patients underwent upper gastrointestinal (UGI) endoscopy. HIV-positive patients with EU determined by UGI endoscopy followed by biopsies were analyzed by the hematoxylin-eosin (HE) and immunohistochemical (IH) methods. EU was detected in 41 patients (mean age, 39.2 years; 23 males), a prevalence of 10.27%. The median CD4 count was 49 cells/mm3 (range, 1-361 cells/mm3) and the viral load was 58,869 copies per milliliter (range, 50-773,290 copies per milliliter). UGI endoscopy detected 29 of 41 EU suggestive of cytomegalovirus (CMV) infection and 7 of 41 suggestive of herpes simplex virus (HSV) infection. HE histology confirmed 4 of the 29 ulcers as induced by CMV, 2 of the 7 as induced by HSV, and 1 of the 7 as induced by HSV plus CMV. IH for CMV and HSV confirmed the HE findings and detected one additional CMV-induced case. UGI endoscopy showed 100% sensitivity and 15% specificity for the diagnosis of EU due to CMV or HSV compared with HE and IH. HE proved to be an adequate method for etiologic evaluation, with 87% sensitivity and 100% specificity compared with IH. The number of samples did not influence the etiologic evaluation. The data support the importance of IH as a complementary method to HE in the diagnosis of EU of viral etiology.
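The sensitivity and specificity figures quoted reduce to simple ratios from a 2x2 confusion table against the reference standard. A sketch with hypothetical counts chosen only to illustrate the arithmetic (perfect sensitivity with low specificity), not the study's actual cross-tabulation:

```python
# Hypothetical 2x2 table: screening call (e.g. endoscopic appearance)
# vs. reference standard (e.g. confirmed viral etiology).
tp = 7    # reference-positive, called positive
fn = 0    # reference-positive, called negative (missed)
fp = 28   # reference-negative, called positive (false alarm)
tn = 5    # reference-negative, called negative

# Sensitivity: fraction of true positives the screen catches
sensitivity = tp / (tp + fn)
# Specificity: fraction of true negatives the screen clears
specificity = tn / (tn + fp)
```

The pattern mirrors the abstract's finding qualitatively: a screen that calls nearly everything positive misses nothing (sensitivity 1.0) but confirms little (specificity ~0.15 here), which is why a confirmatory method such as histology or IH is still required.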
Abstract:
Background: Considering the broad variation in the expression of housekeeping genes among tissues and experimental situations, studies using quantitative RT-PCR require strict definition of adequate endogenous controls. For glioblastoma, the most common type of tumor in the central nervous system, there was no previous report regarding this issue. Results: Here we show that, amongst seven frequently used housekeeping genes, TBP and HPRT1 are adequate references for glioblastoma gene expression analysis. Evaluation of the expression levels of 12 target genes utilizing different endogenous controls revealed that the normalization method applied might introduce errors in the estimation of relative quantities. Genes whose expression levels do not significantly differ between tumor and normal tissues can be considered either increased or decreased if unsuitable reference genes are applied. Most importantly, genes showing significant differences in expression levels between tumor and normal tissues can be missed. We also demonstrate that the Holliday Junction Recognizing Protein, a novel DNA repair protein over-expressed in lung cancer, is extremely over-expressed in glioblastoma, with a median change of about 134-fold. Conclusion: Altogether, our data show the relevance of prior validation of candidate control genes for each experimental model and indicate TBP plus HPRT1 as suitable references for studies of glioblastoma gene expression.
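Relative quantification against an endogenous control of the kind discussed is commonly done with the 2^-ΔΔCt method, where each target Ct is first normalized to the reference gene and then to the calibrator sample. A sketch with invented Ct values, chosen so the fold change lands in the ~134x range the abstract reports; the numbers are illustrative, not the study's measurements:

```python
# Hypothetical qRT-PCR cycle-threshold (Ct) values
ct_target_tumor, ct_ref_tumor = 22.0, 18.0     # tumor sample
ct_target_normal, ct_ref_normal = 29.5, 18.4   # normal (calibrator)

# Normalize target to the reference (housekeeping) gene in each sample
delta_tumor = ct_target_tumor - ct_ref_tumor       # 4.0 cycles
delta_normal = ct_target_normal - ct_ref_normal    # 11.1 cycles

# Normalize tumor to the calibrator, then convert cycles to fold change
# (each PCR cycle is assumed to double the product)
ddct = delta_tumor - delta_normal
fold_change = 2 ** (-ddct)
```

Because the reference Ct enters every ΔCt, an unstable housekeeping gene shifts `ddct` directly, which is exactly how an unsuitable control can turn an unchanged gene into an apparently "regulated" one.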