986 results for Standard conditions
Abstract:
The use of acid etchants to produce surface demineralization and expose the collagen network, allowing interdiffusion of adhesive monomers and thus the formation of a hybrid layer, has been considered the most efficient mechanism of dentin bonding. The aim of this study was to compare the tensile bond strength to dentin of three adhesive systems, two self-etching (Clearfil SE Bond - CSEB and One Up Bond F - OUBF) and one total-etching (Single Bond - SB), under three dentinal substrate conditions (wet, dry and re-wet). Ninety freshly extracted human third molars were sectioned at the occlusal surface to remove enamel and form a flat dentin wall. The specimens were restored with composite resin (Filtek Z250) and submitted to tensile bond strength (TBS) testing in an MTS 810. The data were analyzed by two-way ANOVA and Tukey's test (p = 0.05). Wet dentin presented the highest TBS values for SB and CSEB. Dry and re-wet dentin produced significantly lower TBS values with SB. OUBF was not affected by the condition of the dentin substrate, producing similar TBS values regardless of surface pretreatment.
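The analysis described above, a balanced two-way ANOVA (adhesive × substrate condition) followed by post hoc comparisons, can be sketched as follows. The cell means, dispersion, and sample sizes below are illustrative assumptions, not the study's data; the sums of squares are computed by hand for a balanced design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical TBS data (MPa): 3 adhesives x 3 substrate conditions x n=10 teeth.
cell_means = np.array([[22, 12, 14],    # SB: sensitive to substrate moisture
                       [25, 20, 21],    # CSEB
                       [18, 18, 18]])   # OUBF: insensitive to substrate condition
n = 10
data = cell_means[:, :, None] + rng.normal(0, 3, size=(3, 3, n))

grand = data.mean()
a_means = data.mean(axis=(1, 2))        # adhesive marginal means
b_means = data.mean(axis=(0, 2))        # condition marginal means
ab_means = data.mean(axis=2)            # cell means

a, b = 3, 3
ss_a = n * b * ((a_means - grand) ** 2).sum()
ss_ab = n * ((ab_means - a_means[:, None] - b_means[None, :] + grand) ** 2).sum()
ss_e = ((data - ab_means[:, :, None]) ** 2).sum()

df_a, df_ab, df_e = a - 1, (a - 1) * (b - 1), a * b * (n - 1)
f_a = (ss_a / df_a) / (ss_e / df_e)
f_ab = (ss_ab / df_ab) / (ss_e / df_e)
p_a = stats.f.sf(f_a, df_a, df_e)       # p-value for the adhesive main effect
p_ab = stats.f.sf(f_ab, df_ab, df_e)    # p-value for the interaction
print(f"adhesive: F={f_a:.1f} p={p_a:.2g}; interaction: F={f_ab:.1f} p={p_ab:.2g}")
```

A significant interaction term is what would justify the abstract's per-adhesive conclusions (SB affected by moisture, OUBF not).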
Abstract:
This paper presents methodologies for reactive energy measurement based on three modern power theories suitable for three-phase four-wire circuits under non-sinusoidal and unbalanced conditions. The theories were applied to voltage and current profiles collected in electrical distribution systems, with realistic characteristics as measured by commercial reactive energy meters. Experimental results are presented in order to analyze the accuracy of the methodologies, taking the IEEE 1459-2010 standard as reference. Finally, for additional comparison, the theories are confronted with the modern Yokogawa WT3000 energy meter and three samples of a commercial energy meter in an experimental setup. © 2011 IEEE.
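The reference quantity that IEEE 1459-2010 singles out for non-sinusoidal conditions is the fundamental reactive power Q1 = V1·I1·sin(θ1), computed from the fundamental phasors of the sampled waveforms. A minimal sketch, with an illustrative sampling rate and synthetic distorted waveforms (none of these values come from the paper):

```python
import numpy as np

# Synthetic distorted waveforms: 60 Hz fundamental plus harmonics (illustrative).
fs, f1, cycles = 7680, 60.0, 10           # 128 samples per cycle, 10 cycles
t = np.arange(int(fs / f1 * cycles)) / fs
v = 311 * np.sin(2 * np.pi * f1 * t) + 15 * np.sin(2 * np.pi * 5 * f1 * t)
i = 14 * np.sin(2 * np.pi * f1 * t - np.pi / 6) + 3 * np.sin(2 * np.pi * 7 * f1 * t)

def fundamental_phasor(x, fs, f1):
    """RMS magnitude and phase of the fundamental via DFT over whole cycles."""
    X = np.fft.rfft(x) / len(x)
    k = round(f1 * len(x) / fs)           # bin index of the fundamental
    c = 2 * X[k]                          # complex peak amplitude
    return abs(c) / np.sqrt(2), np.angle(c)

V1, av = fundamental_phasor(v, fs, f1)
I1, ai = fundamental_phasor(i, fs, f1)
Q1 = V1 * I1 * np.sin(av - ai)            # fundamental reactive power, per phase
print(f"V1={V1:.1f} V  I1={I1:.2f} A  Q1={Q1:.1f} var")
```

Because the record spans an integer number of cycles, the harmonics fall on their own DFT bins and do not leak into the fundamental estimate.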
Abstract:
This paper presents a practical experiment comparing reactive/non-active energy measurements in three-phase four-wire non-sinusoidal and unbalanced circuits, involving five different commercial electronic meters. The experimental setup generates voltage and current independently, each with arbitrary waveforms containing harmonics up to the fifty-first order, reproducing acquisitions obtained from the utility. Experimental accuracy is guaranteed by a class A power analyzer, according to the IEC 61000-4-30 standard. Several current and voltage profile combinations are presented and confronted with two reference methodologies for reactive/non-active energy calculation: the instantaneous power theory and IEEE 1459-2010. The first uses the instantaneous power theory as implemented in the internal algorithm of the WT3000 power analyzer; the second, in accordance with the IEEE 1459-2010 standard, uses voltage and current waveform acquisitions from the WT3000 as input data for a virtual meter developed in Matlab/Simulink. © 2012 IEEE.
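The first reference methodology, the instantaneous power theory (p-q theory), can be sketched for a balanced three-phase set: transform to αβ coordinates with the power-invariant Clarke transform and form the instantaneous real and imaginary powers. Amplitudes, frequency, and the 30° lag below are illustrative assumptions, not the paper's profiles, and the sign convention for q varies between formulations.

```python
import numpy as np

# Balanced three-phase set with the current lagging by 30 degrees (illustrative).
fs, f1 = 7680, 60.0
t = np.arange(1280) / fs
w = 2 * np.pi * f1
V, I, phi = 311.0, 10.0, np.pi / 6
va, vb, vc = (V * np.sin(w * t + s) for s in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (I * np.sin(w * t + s - phi) for s in (0, -2 * np.pi / 3, 2 * np.pi / 3))

# Power-invariant Clarke transform to alpha-beta coordinates.
k = np.sqrt(2 / 3)
valpha = k * (va - 0.5 * vb - 0.5 * vc)
vbeta = k * (np.sqrt(3) / 2) * (vb - vc)
ialpha = k * (ia - 0.5 * ib - 0.5 * ic)
ibeta = k * (np.sqrt(3) / 2) * (ib - ic)

p = valpha * ialpha + vbeta * ibeta       # instantaneous real power
q = vbeta * ialpha - valpha * ibeta       # instantaneous imaginary (reactive) power
print(f"mean p = {p.mean():.1f} W, mean q = {q.mean():.1f} var")
```

For a balanced sinusoidal condition p is constant at 3·Vrms·Irms·cos φ and the mean of q recovers the conventional three-phase reactive power; under distortion or unbalance, both acquire oscillating parts, which is where the theories compared in the paper diverge.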
Abstract:
Different extraction conditions using water, a methanol-water mixture and nitric acid solutions were evaluated for the speciation of As(III), As(V), DMA and MMA in plant samples that had received As(V) after sowing and emergence. Microwave-assisted extraction (MAE) using diluted nitric acid solutions was also performed for arsenic extraction from chicken feed samples. The separation and determination of arsenic species were performed by HPLC-ICP-MS. The interference standard method (IFS), using 83Kr+ as the IFS probe, was employed to minimize spectral interferences caused by polyatomic species such as 40Ar35Cl+. The extraction procedures tested presented adequate extraction efficiencies (90%), and the four arsenic species evaluated were found in the plant samples. Extraction with diluted nitric acid solution at 90 °C was the most efficient strategy, with quantitative recoveries for all four As species in plant tissues. On the other hand, the methanol-water mixture was the solvent with the lowest extraction efficiency (50-60%). For chicken feed samples, MAE at 100 °C for 30 min resulted in an extraction efficiency of 97%, and only As(V) was found, without any species interconversion. The IFS method improved precision and the limits of detection and quantification for all tested extraction procedures. Significant improvements in accuracy were also obtained with the IFS method, with recoveries of 77-94% for plant extracts and 82-93% for chicken feed samples. This journal is © 2013 The Royal Society of Chemistry.
Abstract:
The current study used strain gauge analysis to evaluate in vitro the effect of axial and non-axial loading on implant-supported fixed partial prostheses, varying the implant placement configuration and the loading point. Three internal-hexagon implants were embedded in the center of each polyurethane block in in-line and offset placements. Microunit abutments were connected to the implants with a torque of 20 N·cm, and plastic prosthetic cylinders were screwed onto the abutments, which received standard patterns cast in Co-Cr alloy (n = 10). Four strain gauges (SGs) were bonded onto the surfaces of the blocks, tangentially to the implants: SG 01 mesial to implant 1, SG 02 and SG 03 mesial and distal to implant 2, respectively, and SG 04 distal to implant 3. Each metallic structure was screwed onto the abutments with a 10-N·cm torque, and axial and non-axial loads of 30 kg were applied at 5 predetermined points. The strain gauge data were analyzed statistically by repeated-measures analysis of variance and the Tukey test, at a conventional significance level of P < 0.05. The results showed a statistically significant difference for loading point (P = 0.0001), with point E (non-axial) generating the highest microstrain (327.67 με) and point A (axial) the smallest (208.93 με). No statistically significant difference was found for implant placement configuration (P = 0.856). It was concluded that offset implant placement did not reduce the magnitude of microstrain around the implants under axial and non-axial loading, although loading location did influence this magnitude.
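The microstrain values reported above come from the standard strain-gauge relation ε = (ΔR/R)/GF. A minimal sketch; the gauge factor, nominal resistance, and resistance change are illustrative assumptions (ΔR chosen only to reproduce the order of magnitude reported, not taken from the study):

```python
# Quarter-bridge strain-gauge conversion (illustrative values, not the study's data).
GF = 2.0            # gauge factor, typical for metallic foil gauges (assumed)
R = 350.0           # nominal gauge resistance, ohm (assumed)
dR = 0.22937        # measured resistance change, ohm (assumed)

strain = (dR / R) / GF          # dimensionless strain
print(f"{strain * 1e6:.2f} microstrain")
```

In practice the bridge instrument reports a voltage ratio rather than ΔR directly, but the conversion to microstrain follows the same relation.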
Abstract:
To refine methods of electroretinographical (ERG) recording for the analysis of low retinal potentials under scotopic conditions in advanced retinal degenerative diseases. Standard Ganzfeld ERG equipment (Diagnosys LLC, Cambridge, UK) was used in 27 healthy volunteers (mean age 28 ± 8.5 (SD) years) to define the stimulation protocol. The protocol was then applied in clinical routine and 992 recordings were obtained from patients (mean age 40.6 ± 18.3 years) over a period of 5 years. A blue stimulus with a flicker frequency of 9 Hz was specified under scotopic conditions to preferentially record rod-driven responses. A range of stimulus strengths (0.0000012-6.32 scot. cd·s/m² and 6-14 ms flash duration) was tested for maximal amplitudes and interference between rods and cones. Analysis of results was done by standard Fourier transformation and assessment of signal-to-noise ratio. Optimized stimulus parameters were found to be a time-integrated luminance of 0.012 scot. cd·s/m² using a blue (470 nm) flash of 10 ms duration at a repetition frequency of 9 Hz. Characteristic stimulus strength versus amplitude curves and tests with stimuli of red or green wavelength suggest a predominant rod-system response. The 9 Hz response was found statistically distinguishable from noise in 38% of patients with otherwise non-recordable rod responses according to International Society for Clinical Electrophysiology of Vision standards. Thus, we believe this protocol can be used to record ERG potentials in patients with advanced retinal diseases and in the evaluation of potential treatments for these patients. The ease of implementation in clinical routine and of statistical evaluation providing an observer-independent evaluation may further facilitate its employment.
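The Fourier-based, observer-independent evaluation described above amounts to reading the amplitude of the spectrum at the 9 Hz stimulation frequency and comparing it with neighbouring noise bins. A minimal sketch on a synthetic noisy trace; sampling rate, amplitudes, noise level, and the neighbouring-bin noise estimate are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ERG trace: a weak 9 Hz rod-driven response buried in noise (illustrative).
fs, dur, f_stim = 1000, 2.0, 9.0
t = np.arange(int(fs * dur)) / fs
trace = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 4.0, t.size)

spec = np.abs(np.fft.rfft(trace)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_stim))            # bin at the stimulus frequency

# Noise estimate: mean amplitude of nearby bins, excluding the response bin itself.
neighbours = np.r_[spec[k - 5:k - 1], spec[k + 2:k + 6]]
snr = spec[k] / neighbours.mean()
print(f"9 Hz amplitude ~ {spec[k]:.2f}, SNR ~ {snr:.1f}")
```

Recording an integer number of stimulation cycles puts the response exactly on one DFT bin, which is what makes this single-bin readout well defined.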
Abstract:
Implementing precise techniques in the routine diagnosis of chronic granulomatous disease (CGD), which expedite the screening of molecular defects, may be critical for promptly establishing patient prognosis. This study compared the efficacy of single-strand conformation polymorphism analysis (SSCP) and high-performance liquid chromatography under partially denaturing conditions (dHPLC) for screening mutations in CGD patients. We selected 10 male CGD patients with a clinical history of severe recurrent infections and abnormal respiratory burst function. gDNA, mRNA and cDNA samples were prepared by standard methods. CYBB exons were amplified by PCR and screened by SSCP or dHPLC. Abnormal DNA fragments were sequenced to reveal the nature of the mutations. The SSCP and dHPLC methods detected DNA abnormalities in 55% and 100% of the cases, respectively. Sequencing of the abnormal DNA samples confirmed mutations in all cases. Four novel mutations in CYBB were identified that were picked up only by dHPLC screening (c.904 insC, c.141+5 g>t, c.553 T>C, and c.665 A>T). This work highlights the relevance of dHPLC, a sensitive, fast, reliable and cost-effective method for screening mutations in CGD, which in combination with functional assays assessing the phagocyte respiratory burst will help expedite the definitive diagnosis of X-linked CGD, direct treatment and genetic counselling, and provide a clear assessment of prognosis. This strategy is especially suitable for developing countries.
Abstract:
We consider a discrete-time financial model in a general sample space with penalty costs on short positions. We consider a market with friction, closely related to the standard one, except that withdrawals proportional to short positions are made from the portfolio value. We provide necessary and sufficient conditions for the nonexistence of arbitrage in this situation and for a self-financing strategy to replicate a contingent claim. For the finite-sample-space case, this result leads to an explicit and constructive procedure for obtaining perfect hedging strategies.
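The friction described above can be written schematically. The notation here (portfolio value, holdings, prices, penalty rate, and the exact placement of the penalty term) is an illustrative reconstruction, since the abstract does not give the paper's formulation:

```latex
% Schematic self-financing condition with proportional penalties on short
% positions (notation illustrative, not taken from the paper).
% Portfolio value V_t, holdings \theta_t, prices S_t, penalty rate c \ge 0:
V_{t+1} \;=\; \theta_t^{\top} S_{t+1}
  \;-\; c \sum_{i} \bigl(\theta_t^{i}\bigr)^{-} S_{t+1}^{i},
\qquad (x)^{-} := \max(-x,\,0).
```

The withdrawal term vanishes when no asset is held short, recovering the standard frictionless self-financing condition.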
Abstract:
Objectives: This study evaluated the degree of conversion (DC) and working time (WT) of two commercial dual-cured resin cements polymerized at varying temperatures and under different curing-light access conditions, using Fourier transform infrared (FTIR) analysis. Materials and Methods: Calibra (Cal; Dentsply Caulk) and Variolink II (Ivoclar Vivadent) were tested at 25 °C or preheated to 37 °C or 50 °C and applied to a similar-temperature surface of a horizontal attenuated total reflectance (ATR) unit attached to an infrared spectrometer. The products were polymerized under one of four conditions: direct light exposure (600 mW/cm²) through a glass slide or through a 1.5- or 3.0-mm-thick ceramic disc (A2 shade, IPS e.max, Ivoclar Vivadent), or allowed to self-cure in the absence of light. FTIR spectra were recorded for 20 min (1 spectrum/s, 16 scans/spectrum, resolution 4 cm⁻¹) immediately after application to the ATR. DC was calculated using standard techniques, observing changes in the aliphatic-to-aromatic peak ratio pre-curing and 20 min post-curing, as well as during each 1-second interval. Time-based monomer conversion analysis was used to determine WT at each temperature. DC and WT data (n = 6) were analyzed by two-way analysis of variance and Tukey post hoc test (p = 0.05). Results: Higher temperatures increased DC regardless of curing mode and product. For Calibra, only the 3-mm-thick ceramic group showed lower DC than the other groups at 25 °C (p = 0.01830), while no significant difference was observed among groups at 37 °C and 50 °C. For Variolink, the 3-mm-thick ceramic group showed lower DC than the 1.5-mm-thick group only at 25 °C, while the self-cure group showed lower DC than the others at all temperatures (p = 0.00001). WT decreased with increasing temperature, by nearly 70% at 37 °C and nearly 90% at 50 °C for both products, reaching clinically inappropriate times in some cases (p = 0.00001). Conclusion: Elevated temperature during polymerization of dual-cured cements increased DC. WT was reduced at elevated temperature, and the extent of reduction may not be clinically acceptable.
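The DC calculation from aliphatic-to-aromatic peak ratios follows the standard two-band formula used in methacrylate FTIR work. A minimal sketch; the absorbance values and band positions are illustrative assumptions, not measurements from the study:

```python
# Degree of conversion from FTIR aliphatic/aromatic C=C peak ratios.
# DC% = (1 - (aliphatic/aromatic)_cured / (aliphatic/aromatic)_uncured) * 100
# Absorbances below are illustrative (bands near 1638 and 1608 cm^-1 are
# commonly used for aliphatic and aromatic C=C in BisGMA-based materials).
aliph_uncured, arom_uncured = 0.85, 0.60
aliph_cured, arom_cured = 0.38, 0.61

ratio_uncured = aliph_uncured / arom_uncured
ratio_cured = aliph_cured / arom_cured
dc = (1 - ratio_cured / ratio_uncured) * 100
print(f"DC = {dc:.1f}%")
```

The aromatic band serves as an internal reference that does not react, so the ratio cancels thickness and contact variations between spectra.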
Abstract:
Reinforced concrete beam elements are subjected, over their life cycle, to loads that cause shear and torsion. These elements may be subject to shear alone, pure torsion, or combined torsion and shear. The Brazilian standard ABNT NBR 6118:2007 [1] establishes conditions for calculating the transverse reinforcement area in reinforced concrete beam elements, using two design models based on the strut-and-tie analogy first studied by Mörsch [2]. The strut angle θ (theta) can be considered constant and equal to 45º (Model I), or varying between 30º and 45º (Model II). For transversal ties (stirrups), the angle α (alpha) varies between 45º and 90º. When equilibrium torsion is required, a resistance model based on a space truss with hollow section is considered. The space truss admits an inclination angle θ between 30º and 45º, consistent with beam elements subjected to shear. This paper presents a theoretical study of Models I and II for combined shear and torsion, varying the geometry and intensity of the actions on reinforced concrete beams, in order to compare the consumption of transverse reinforcement under each calculation model. As the strut angle in Model II decreases from 45º toward 30º, the transverse reinforcement area (Asw) decreases, while the total reinforcement area, which includes the longitudinal torsion reinforcement (Asℓ), increases. It appears that, for Model II with a strut angle above 40º under shear alone, the transverse reinforcement area increases by 22% compared to the values obtained with Model I.
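The dependence of stirrup consumption on the strut angle can be sketched with the generic truss-model expression for vertical stirrups, Asw/s = Vsd / (0.9·d·fywd·cot θ). The section, load, and steel values below are illustrative assumptions, not the paper's cases, and code-specific concrete checks (strut crushing, Vc term) are omitted:

```python
import math

# Transverse reinforcement per unit length for vertical stirrups in the
# truss model (illustrative section and load, not the paper's data).
Vsd = 250e3        # design shear force, N (assumed)
d = 0.55           # effective depth, m (assumed)
fywd = 435e6       # stirrup design yield stress, Pa (CA-50 / 1.15)

def asw_per_metre(theta_deg):
    """Asw/s = Vsd / (0.9 * d * fywd * cot(theta)): flatter strut, less steel."""
    cot = 1 / math.tan(math.radians(theta_deg))
    return Vsd / (0.9 * d * fywd * cot)

for theta in (30, 40, 45):
    print(f"theta = {theta} deg: Asw/s = {asw_per_metre(theta) * 1e4:.2f} cm2/m")
```

The printout reproduces the trend stated in the abstract: lowering θ from 45º toward 30º reduces the required stirrup area, at the cost of more longitudinal steel in the torsion model.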
Abstract:
The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment.
Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
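The building block underlying all of the extensions above is the D2Q9 lattice with its BGK equilibrium distribution, whose velocity moments reproduce the hydrodynamic density and momentum. A minimal self-check (the density and velocity values are arbitrary illustrations):

```python
import numpy as np

# D2Q9 lattice: verify that the BGK equilibrium reproduces the hydrodynamic
# moments (density and momentum). Values of rho and u are illustrative.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # discrete velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # lattice weights

def f_eq(rho, u):
    """Second-order BGK equilibrium for lattice speed of sound cs^2 = 1/3."""
    cu = c @ u                                          # c_i . u for each link
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * (u @ u))

rho, u = 1.0, np.array([0.05, -0.02])
f = f_eq(rho, u)
print("density:", f.sum())       # zeroth moment recovers rho
print("momentum:", f @ c)        # first moment recovers rho * u
```

The same moment identities hold exactly (not just to truncation order) because the weights satisfy the required lattice isotropy conditions; it is this structure that the thesis's fluctuating and reduced-symmetry variants must preserve.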
Abstract:
The beta-decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables, some of which are sensitive to physics beyond the SM; among them are the correlation coefficients of the involved particles. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron antineutrino angular correlation coefficient "a". A first test period (2005/2006) served as a proof of principle, but the limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (publication: Baessler et al., 2008, Europhys. Journ. A, 38, p. 17-26). A second measurement cycle (2007/2008) aimed to improve on the relative accuracy of previous experiments (Stratowa et al. (1978), Byrne et al. (2002)), da/a = 5%. The analysis of the data taken there is the emphasis of this doctoral thesis, with background studies as a central point. The systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%; the statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, observed early on, were investigated; these turned out not to be correctable at a sufficient level. A practical idea for avoiding the saturation effects is discussed in the last chapter.
Abstract:
The Jing Ltd. miniature combustion aerosol standard (Mini-CAST) soot generator is a portable, commercially available burner that is widely used for laboratory measurements of soot processes. While many studies have used the Mini-CAST to generate soot with known size, concentration, and organic carbon fraction under a single or a few conditions, there has been no systematic study of the burner operation over a wide range of operating conditions. Here, we present a comprehensive characterization of the microphysical, chemical, morphological, and hygroscopic properties of Mini-CAST soot over the full range of oxidation air and mixing N₂ flow rates. Very fuel-rich and fuel-lean flame conditions are found to produce organic-dominated soot with mode diameters of 10-60 nm, and the highest particle number concentrations are produced under fuel-rich conditions. The lowest organic fraction and largest diameter soot (70-130 nm) occur under slightly fuel-lean conditions. Moving from fuel-rich to fuel-lean conditions also increases the O:C ratio of the soot coatings from ~0.05 to ~0.25, which causes a small fraction of the particles to act as cloud condensation nuclei near the Kelvin limit (κ ≈ 0-10⁻³). Comparison of these property ranges to those reported in the literature for aircraft and diesel engine soots indicates that Mini-CAST soot is similar to real-world primary soot particles, which lends itself to a variety of process-based soot studies. The trends in soot properties uncovered here will guide selection of burner operating conditions to achieve the soot properties most relevant to such studies.
Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate true genotype for each individual in the dataset, operating characteristics of the commonly used genotyping procedures and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered and that the comparison is sensitive to measurement error in the gold standard.