981 results for Beyond Standard Model
Abstract:
Objective: To evaluate the occurrence of severe obstetric complications associated with antepartum and intrapartum hemorrhage among women from the Brazilian Network for Surveillance of Severe Maternal Morbidity. Design: Multicenter cross-sectional study. Setting: Twenty-seven obstetric referral units in Brazil between July 2009 and June 2010. Population: A total of 9555 women categorized as having obstetric complications. Methods: The occurrence of potentially life-threatening conditions, maternal near miss and maternal deaths associated with antepartum and intrapartum hemorrhage was evaluated. Sociodemographic and obstetric characteristics and the use of criteria for management of severe bleeding were also assessed in these women. Main outcome measures: Prevalence ratios with their respective 95% confidence intervals, adjusted for the cluster effect of the design, were calculated, and multiple logistic regression analysis was performed to identify factors independently associated with the occurrence of severe maternal outcome. Results: Antepartum and intrapartum hemorrhage occurred in only 8% (767) of women experiencing any type of obstetric complication, yet it was responsible for 18.2% (140) of maternal near miss and 10% (14) of maternal death cases. On multivariate analysis, maternal age and previous cesarean section were shown to be independently associated with an increased risk of severe maternal outcome (near miss or death). Conclusion: Severe maternal outcome due to antepartum and intrapartum hemorrhage was highly prevalent among Brazilian women. Certain risk factors, maternal age and previous cesarean delivery in particular, were associated with the occurrence of bleeding.
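As a rough illustration of the kind of estimate reported above, the sketch below computes a crude prevalence ratio with a 95% confidence interval by the standard log-transform (Katz) method. The counts are hypothetical placeholders, and the calculation ignores the cluster adjustment and the multivariate model used in the study.

    import math

    def prevalence_ratio(a, n1, c, n2, z=1.96):
        """Crude prevalence ratio of the outcome in exposed (a/n1) vs unexposed (c/n2),
        with a confidence interval from the standard log-transform (Katz) method."""
        p1, p2 = a / n1, c / n2
        pr = p1 / p2
        se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        lo = math.exp(math.log(pr) - z * se_log)
        hi = math.exp(math.log(pr) + z * se_log)
        return pr, (lo, hi)

    # Hypothetical counts for illustration only (not the study's data):
    # 140 near-miss cases among 767 women with hemorrhage vs
    # 630 among 8788 women with other complications.
    print(prevalence_ratio(140, 767, 630, 8788))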
Abstract:
The aim of this study was to evaluate the stress distribution in the cervical region of a sound upper central incisor in two clinical situations, standard and maximum masticatory forces, by means of a 3D model with the highest possible level of fidelity to the anatomic dimensions. Two models with 331,887 linear tetrahedral elements that represent a sound upper central incisor with periodontal ligament, cortical and trabecular bone were loaded at 45° in relation to the tooth's long axis. All structures were considered homogeneous and isotropic, with the exception of the enamel (anisotropic). A standard masticatory force (100 N) was simulated on one of the models, while a maximum masticatory force (235.9 N) was simulated on the other. The software used was PATRAN for pre- and post-processing and Nastran for processing. In the cementoenamel junction area, tensile stresses reached 14.7 MPa in the 100 N model and 40.2 MPa in the 235.9 N model, the latter exceeding the enamel's tensile strength (16.7 MPa). The fact that the stress concentration in the amelodentinal junction exceeded the enamel's tensile strength under simulated conditions of maximum masticatory force suggests the possibility of the occurrence of non-carious cervical lesions such as abfractions.
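A quick comparison of the quoted peak tensile stresses against the quoted enamel tensile strength, using only the numbers given in the abstract (the stress-to-strength ratio is an illustrative framing, not the authors' metric):

    enamel_tensile_strength = 16.7  # MPa, value quoted in the abstract
    for load, stress in [("100 N (standard)", 14.7), ("235.9 N (maximum)", 40.2)]:
        ratio = stress / enamel_tensile_strength
        status = "exceeds" if ratio > 1 else "stays below"
        print(f"{load}: {stress} MPa is {ratio:.2f} x the enamel tensile strength ({status})")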
Abstract:
A compact frequency standard based on an expanding cold ¹³³Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain, and in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast. © 2008 Optical Society of America.
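A minimal sketch of a generic two-level Ramsey sequence (pulse, free evolution, pulse) is given below; it computes the excitation probability versus detuning in the rotating frame. The Rabi frequency, pulse length and free-evolution time are illustrative assumptions, not the parameters of the clock described above, and the model ignores the cloud expansion.

    import numpy as np

    def ramsey_probability(delta, omega, tau, T):
        """Excitation probability after a pulse-free-pulse Ramsey sequence for a
        two-level atom: Rabi frequency omega, pulse length tau, free-evolution
        time T, detuning delta (angular frequencies in rad/s, hbar = 1)."""
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        W = np.sqrt(omega**2 + delta**2)            # generalized Rabi frequency
        U_pulse = (np.cos(W * tau / 2) * np.eye(2)
                   - 1j * np.sin(W * tau / 2) * (omega * sx + delta * sz) / W)
        U_free = np.array([[np.exp(-1j * delta * T / 2), 0],
                           [0, np.exp(1j * delta * T / 2)]])
        psi = U_pulse @ U_free @ U_pulse @ np.array([1, 0], dtype=complex)  # start in |g>
        return abs(psi[1])**2

    omega = 2 * np.pi * 250.0           # illustrative Rabi frequency (rad/s)
    tau = np.pi / (2 * omega)           # nominal pi/2 pulse
    for detuning_hz in (0.0, 25.0, 50.0):
        p = ramsey_probability(2 * np.pi * detuning_hz, omega, tau, T=10e-3)
        print(f"detuning {detuning_hz:5.1f} Hz -> excitation probability {p:.3f}")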
Abstract:
Ecological systems are vulnerable to irreversible change when key system properties are pushed over thresholds, resulting in the loss of resilience and the precipitation of a regime shift. Perhaps the most important of such properties in human-modified landscapes is the total amount of remnant native vegetation. In a seminal study, Andren proposed the existence of a fragmentation threshold in the total amount of remnant vegetation, below which landscape-scale connectivity is eroded and local species richness and abundance become dependent on patch size. Although species patch-area effects have been a mainstay of conservation science, this hypothesis has yet to receive a robust empirical evaluation. Here we present and test a new conceptual model describing the mechanisms and consequences of biodiversity change in fragmented landscapes, identifying the fragmentation threshold as a first step in a positive feedback mechanism that has the capacity to impair ecological resilience and drive a regime shift in biodiversity. The model considers that local extinction risk is defined by patch size, and immigration rates by landscape vegetation cover, and that recovery from local species losses depends upon the landscape species pool. Using a unique dataset on the distribution of non-volant small mammals across replicate landscapes in the Atlantic forest of Brazil, we found strong evidence for our model predictions: patch-area effects are evident only at intermediate levels of total forest cover, where landscape diversity is still high and opportunities for enhancing biodiversity through local management are greatest. Furthermore, high levels of forest loss can push the native biota through an extinction filter, resulting in the abrupt, landscape-wide loss of forest-specialist taxa, ecological resilience and management effectiveness. The proposed model links hitherto distinct theoretical approaches within a single framework, providing a powerful tool for analysing the potential effectiveness of management interventions.
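A toy, back-of-the-envelope version of the model's assumptions is sketched below. The functional forms (a species pool that collapses below roughly 30% cover, patch retention growing with area, immigration rescue proportional to cover) and every number are assumptions chosen only to reproduce the qualitative prediction that patch-area effects are strongest at intermediate cover; this is not the authors' model.

    import math

    def toy_richness(patch_area_ha, cover, pool=100):
        """Toy illustration only: the landscape species pool collapses below a
        cover threshold (extinction filter), each patch retains a larger share
        of that pool as its area grows, and high cover 'rescues' small patches."""
        landscape_pool = pool * min(1.0, (cover / 0.3) ** 2)   # assumed filter near 30% cover
        retention = 1 - math.exp(-0.05 * patch_area_ha)        # assumed patch-area effect
        occupancy = cover + (1 - cover) * retention            # assumed immigration rescue
        return landscape_pool * occupancy

    for cover in (0.1, 0.4, 0.8):
        richness = [round(toy_richness(a, cover), 1) for a in (1, 10, 100)]
        print(f"cover={cover:.1f}: expected species in 1, 10, 100 ha patches =", richness)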
Abstract:
We introduce a simple mean-field lattice model to describe the behavior of nematic elastomers. This model combines the Maier-Saupe-Zwanzig approach to liquid crystals and an extension to lattice systems of the Warner-Terentjev theory of elasticity, with the addition of quenched random fields. We use standard techniques of statistical mechanics to obtain analytic solutions for the full range of parameters. Among other results, we show the existence of a stress-strain coexistence curve below a freezing temperature, analogous to the P-V diagram of a simple fluid, with the disorder strength playing the role of temperature. Below a critical value of disorder, the tie lines in this diagram resemble the experimental stress-strain plateau and may be interpreted as signatures of the characteristic polydomain-monodomain transition. Also, in the monodomain case, we show that random fields may soften the first-order transition between nematic and isotropic phases, provided the samples are formed in the nematic state.
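A minimal sketch of the Maier-Saupe-Zwanzig ingredient alone is given below: three allowed orientations and a mean-field self-consistency equation for the nematic order parameter, iterated over temperature (in units of the coupling, with kB = 1). The elastic Warner-Terentjev coupling and the quenched random fields that the abstract adds are not included.

    import math

    def msz_order_parameter(T, A=1.0, s0=0.9, tol=1e-10, max_iter=10000):
        """Fixed-point iteration for the nematic order parameter S of the
        Maier-Saupe-Zwanzig model (orientations restricted to three axes):
            S = (exp(b) - exp(-b/2)) / (exp(b) + 2*exp(-b/2)),  b = A*S/T."""
        s = s0
        for _ in range(max_iter):
            b = A * s / T
            s_new = (math.exp(b) - math.exp(-b / 2)) / (math.exp(b) + 2 * math.exp(-b / 2))
            if abs(s_new - s) < tol:
                break
            s = s_new
        return s

    # Scanning temperature, the self-consistent ordered solution disappears
    # abruptly, reflecting the first-order nematic-isotropic transition.
    for T in (0.30, 0.45, 0.52, 0.56, 0.60):
        print(f"T = {T:.2f}  S = {msz_order_parameter(T):.4f}")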
Abstract:
We propose a model for D⁺ → π⁺π⁻π⁺ decays following experimental results which indicate that the two-pion interaction in the S wave is dominated by the scalar resonances f₀(600)/σ and f₀(980). The weak decay amplitude for D⁺ → Rπ⁺, where R is a resonance that subsequently decays into π⁺π⁻, is constructed in a factorization approach. In the S wave, we implement the strong decay R → π⁺π⁻ by means of a scalar form factor. This provides a unitary description of the pion-pion interaction in the entire kinematically allowed mass range m²(ππ) from threshold to about 3 GeV². In order to reproduce the experimental Dalitz plot for D⁺ → π⁺π⁻π⁺, we include contributions beyond the S wave. For the P wave, dominated by the ρ(770)⁰, we use a Breit-Wigner description. Higher waves are accounted for by using the usual isobar prescription for the f₂(1270) and ρ(1450)⁰. The major achievement is a good reproduction of the experimental m²(ππ) distribution, and of the partial as well as the total D⁺ → π⁺π⁻π⁺ branching ratios. Our values are generally smaller than the experimental ones. We discuss this shortcoming and, as a by-product, we predict a value for the poorly known D → σ transition form factor at q² = mπ².
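For the P wave the abstract uses a Breit-Wigner description of the ρ(770)⁰; a minimal sketch of such a line shape as a function of the two-pion invariant mass squared is given below. The fixed width and rounded mass/width values are illustrative simplifications, and the scalar-form-factor treatment of the S wave is not reproduced here.

    import numpy as np

    def breit_wigner(s, m, gamma):
        """Fixed-width relativistic Breit-Wigner amplitude as a function of the
        two-pion invariant mass squared s (GeV^2)."""
        return 1.0 / (m**2 - s - 1j * m * gamma)

    m_rho, gamma_rho = 0.775, 0.149          # GeV, approximate rho(770) parameters
    for s in np.linspace(0.1, 3.0, 6):        # GeV^2, roughly the allowed m(pipi)^2 range
        amp = breit_wigner(s, m_rho, gamma_rho)
        print(f"s = {s:.2f} GeV^2  |BW|^2 = {abs(amp)**2:8.2f}")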
Abstract:
The efficacy of fluorescence spectroscopy to detect squamous cell carcinoma is evaluated in an animal model following laser excitation at 442 and 532 nm. Lesions are chemically induced with a topical DMBA application at the left lateral tongue of Golden Syrian hamsters. The animals are investigated every 2 weeks from the 4th week of induction until a total of 26 weeks. The right lateral tongue of each animal is considered as a control site (normal contralateral tissue), and the induced lesions are analyzed as a set of points covering the entire clinically detectable area. Based on fluorescence spectral differences, four indices are determined to discriminate normal and carcinoma tissues by intraspectral analysis. The spectral data are also analyzed using multivariate data analysis, and the results are compared with histology as the diagnostic gold standard. The best result is achieved for blue excitation using the KNN (K-nearest neighbor, an interspectral analysis) algorithm, with a sensitivity of 95.7% and a specificity of 91.6%. These high indices indicate that fluorescence spectroscopy may constitute a fast noninvasive auxiliary tool for the diagnosis of cancer within the oral cavity. © 2008 Society of Photo-Optical Instrumentation Engineers.
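A hedged sketch of the kind of KNN classification and sensitivity/specificity bookkeeping described above follows, run on synthetic stand-in "spectra"; the data, features and choice of k are placeholders, not the study's measurements or settings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    # Synthetic stand-in spectra: 200 "normal" and 200 "lesion" curves over 50
    # wavelengths, with the lesion class shifted slightly in part of the band.
    normal = rng.normal(1.0, 0.15, size=(200, 50))
    lesion = rng.normal(1.0, 0.15, size=(200, 50))
    lesion[:, 20:35] += 0.25
    X = np.vstack([normal, lesion])
    y = np.array([0] * 200 + [1] * 200)        # 1 = carcinoma (positive class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, knn.predict(X_te)).ravel()
    print("sensitivity =", tp / (tp + fn), "specificity =", tn / (tn + fp))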
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) have been proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
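For reference, the traditional pairwise measure D' mentioned above can be computed directly from two-locus haplotype frequencies; a short sketch with made-up haplotype counts follows.

    from collections import Counter

    def d_prime(haplotypes):
        """Pairwise LD between two biallelic loci from a list of two-locus
        haplotypes such as ('A', 'B'): D = pAB - pA*pB, normalized to D'."""
        n = len(haplotypes)
        counts = Counter(haplotypes)
        p_ab = counts[('A', 'B')] / n
        p_a = sum(v for (a, _), v in counts.items() if a == 'A') / n
        p_b = sum(v for (_, b), v in counts.items() if b == 'B') / n
        d = p_ab - p_a * p_b
        if d >= 0:
            d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
        else:
            d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
        return d / d_max if d_max > 0 else 0.0

    # Made-up haplotype sample showing strong association between alleles A and B.
    sample = [('A', 'B')] * 45 + [('a', 'b')] * 45 + [('A', 'b')] * 5 + [('a', 'B')] * 5
    print(d_prime(sample))   # 0.8 for this sample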
Abstract:
This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics; this characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) system modeling using a flexible multibody technique, (ii) synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) evaluation of some possible machine designs. The novelty of this methodology is that structural flexibilities are taken into consideration during the input signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is exploited for the design evaluation of a two-degree-of-freedom high-speed parallel manipulator, and the results are experimentally validated. © 2010 Elsevier Ltd. All rights reserved.
Abstract:
The applicability of a meshfree approximation method, namely the EFG method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is established. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions and the load parameter in the constraint equation of the hyper-ellipsis is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for the classification of the equilibrium's stability, based on the bordered Hessian matrix, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Unlike standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. © 2007 Elsevier Ltd. All rights reserved.
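A minimal one-dimensional illustration of the moving-least-squares (MLS) approximation underlying EFG shape functions is sketched below, with a linear basis and a cubic-spline weight; the node layout and support size are arbitrary choices, and the paper's plate formulation and hybrid treatment go far beyond this.

    import numpy as np

    def cubic_spline_weight(r):
        """Standard cubic spline weight on normalized distance r = |x - xi| / d."""
        w = np.zeros_like(r)
        m1 = r <= 0.5
        m2 = (r > 0.5) & (r <= 1.0)
        w[m1] = 2/3 - 4*r[m1]**2 + 4*r[m1]**3
        w[m2] = 4/3 - 4*r[m2] + 4*r[m2]**2 - (4/3)*r[m2]**3
        return w

    def mls_shape_functions(x, nodes, support):
        """MLS shape functions phi_i(x) for a linear basis p = [1, x]."""
        w = cubic_spline_weight(np.abs(x - nodes) / support)
        P = np.column_stack([np.ones_like(nodes), nodes])   # rows p(x_i)
        A = P.T @ (w[:, None] * P)                           # moment matrix
        B = (w[:, None] * P).T                                # 2 x n
        px = np.array([1.0, x])
        return px @ np.linalg.solve(A, B)

    nodes = np.linspace(0.0, 1.0, 11)                         # illustrative nodal cloud
    phi = mls_shape_functions(0.37, nodes, support=0.25)
    print("partition of unity:", phi.sum())                   # ~1
    print("linear completeness:", phi @ nodes)                # ~0.37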
Abstract:
This work proposes a refined technique for the extraction of the generation lifetime in single- and double-gate partially depleted SOI nMOSFETs. The model presented in this paper, based on the drain current switch-off transients, takes into account the influence of the laterally non-uniform channel doping caused by the presence of the halo-implanted region, as well as the amount of charge controlled by the drain and source junctions on the floating body effect when the channel length is reduced. The results obtained for single-gate (SG) devices are compared with two-dimensional numerical simulations and with experimental data extracted from devices fabricated in a 0.1 μm SOI CMOS technology, showing excellent agreement. Beyond the considerations presented above, the improved model for determining the generation lifetime in double-gate (DG) devices also considers the influence of the silicon layer thickness on the drain current transient. The data extracted through the improved model for DG devices were compared with measurements and with two-dimensional numerical simulations of the SG devices, also showing good agreement as the channel length is reduced and the same trend with the variation of the silicon layer thickness.
Abstract:
The TCP/IP architecture has been consolidated as the standard for distributed systems. However, there is considerable research and discussion about alternatives for the evolution of this architecture and, in this study area, this work presents the Title Model, which contributes to supporting application needs through the use of a cross-layer ontology and horizontal addressing in a next-generation Internet. From a practical viewpoint, the reduction in network cost is shown for a distributed programming example in networks with layer 2 connectivity. To demonstrate the improvement provided by the Title Model, a network analysis is presented for a message-passing interface application that sends a vector of integers and returns its sum. This analysis confirms that, in this environment, the current proposal allows a reduction of 15.23% in the total network traffic, in bytes.
Abstract:
We introduce the log-beta Weibull regression model based on the beta Weibull distribution (Famoye et al., 2005; Lee et al., 2007). We derive expansions for the moment generating function which do not depend on complicated functions. The new regression model represents a parametric family of models that includes as sub-models several widely known regression models that can be applied to censored survival data. We employ a frequentist analysis, a jackknife estimator, and a parametric bootstrap for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Further, for different parameter settings, sample sizes and censoring percentages, several simulations are performed. In addition, the empirical distributions of some modified residuals are displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. We define martingale and deviance residuals to evaluate the model assumptions. The extended regression model is very useful for the analysis of real data and could give more realistic fits than other special regression models.
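A sketch of the beta Weibull density itself, assuming the standard beta-generated construction (baseline Weibull CDF plugged into a beta density), is given below with arbitrary parameter values; the numerical check only confirms that the density integrates to one and does not reproduce the regression model.

    import numpy as np
    from scipy.special import beta as beta_fn
    from scipy.integrate import quad

    def beta_weibull_pdf(x, a, b, lam, c):
        """Beta Weibull density via the beta-generated family:
        f(x) = g(x) * G(x)**(a-1) * (1 - G(x))**(b-1) / B(a, b),
        with G the Weibull(lam, c) CDF and g its density."""
        G = 1.0 - np.exp(-(lam * x) ** c)
        g = c * lam ** c * x ** (c - 1) * np.exp(-(lam * x) ** c)
        return g * G ** (a - 1) * (1.0 - G) ** (b - 1) / beta_fn(a, b)

    a, b, lam, c = 2.0, 1.5, 0.8, 1.7     # arbitrary illustrative parameters
    total, _ = quad(beta_weibull_pdf, 0, np.inf, args=(a, b, lam, c))
    print("integral over (0, inf):", total)   # should be ~1.0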
Abstract:
In stored grains, insecticide deposits smaller than the theoretical doses, with great variation, are frequently found. The objective of this work was to study the effectiveness of the standard method (ISO 5682/1-1996) employed to evaluate hydraulic nozzles used in stored corn and wheat grain protection experiments. The transversal volumetric distribution and droplet spectrum of a model TJ-60 8002EVS nozzle were determined in order to calibrate a spraying system for an application rate of 5 L/t and to obtain theoretical concentrations of 10 and 0.5 mg/kg of fenitrothion and esfenvalerate, respectively. After treatment, the corn and wheat grains were processed and deposition was analyzed by gas chromatography. The type of grain had no influence on insecticide deposition, which depended only on the insecticide. The insecticide deposits on the grains reached only 42.1 and 38.2% of the intended theoretical values for the fenitrothion and esfenvalerate concentrations, respectively. These results demonstrate the ineffectiveness of the standard evaluation method for hydraulic nozzles employed in stored grain protection experiments.
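A back-of-the-envelope check of the dosimetry implied by the quoted application rate and target doses, together with the recovered fractions reported above (the unit conversions are added here; only the percentages come from the abstract):

    application_rate_l_per_t = 5.0                 # L of spray per tonne of grain

    for insecticide, target_mg_per_kg, recovered_pct in [
            ("fenitrothion", 10.0, 42.1),
            ("esfenvalerate", 0.5, 38.2)]:
        # target dose: mg a.i. per kg grain == g a.i. per tonne of grain
        spray_conc_g_per_l = target_mg_per_kg / application_rate_l_per_t
        recovered_mg_per_kg = target_mg_per_kg * recovered_pct / 100.0
        print(f"{insecticide}: spray must carry {spray_conc_g_per_l:.2f} g/L; "
              f"measured deposit ~{recovered_mg_per_kg:.2f} mg/kg of the "
              f"{target_mg_per_kg} mg/kg target")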
Abstract:
Sunless tanning formulas have become increasingly popular in recent years for their ability to give people convincing tans without the dangers of skin cancer. Most sunless tanners currently on the market contain dihydroxyacetone (DHA), a keto sugar with three carbons. The temporary pigment provided by these formulas is designed to resemble a UV-induced tan. This study evaluated the effectiveness on pigmentation of carbomer gels and cold-process self-emulsifying bases containing different concentrations of a chemical system composed of DHA and N-acetyl tyrosine, applied to moulted snake skins, with effectiveness measured by a Mexameter® MX 18. Eight different sunless tanning formulas were developed, four of which were gels and four of which were emulsions (base, and base plus 4.0%, 5.0% and 6.0% (w/w) of the DHA and N-acetyl tyrosine system). Tests to determine the extent of artificial tanning were done by applying 30 mg cm⁻² of each formula onto standard pieces of moulted snake skin (2.0 cm × 3.0 cm). The Mexameter® MX 18 was used to evaluate the extent of coloration of the moulted snake skin at T₀ (before application) and after 24, 48, 72, 168, 192 and 216 h. Moulted snake skins can be used as an alternative membrane model for in vitro sunless tanning efficacy tests due to their similarity to the human stratum corneum. The DHA concentration was found to influence the onset of pigmentation in both sunless tanning systems (emulsion and gel), as well as the time required for the tanning index to increase by a given amount. In the emulsion system, the DHA concentration also influenced the final value of the tanning index. The type of system (emulsion or gel) had no influence on the final value of the tanning index after 216 h for samples with the same DHA concentration.