955 results for NON-LINEAR MODELS
Abstract:
This paper focuses on the flexural behavior of RC beams externally strengthened with Carbon Fiber Reinforced Polymer (CFRP) fabric. A non-linear finite element (FE) analysis strategy is proposed to support the experimental analysis of the beams' flexural behavior. A development system (the QUEBRA2D/FEMOOP programs) has been used to carry out the numerical simulation. Appropriate constitutive models for concrete, rebars, CFRP and bond-slip interfaces have been implemented and adjusted to represent the behavior of the composite system. Interface and truss finite elements have been implemented (discrete and embedded approaches) for the numerical representation of rebars, interfaces and composites.
Abstract:
This work examines the effect of weld strength mismatch on fracture toughness measurements defined by the J and CTOD fracture parameters using single edge notch bend (SE(B)) specimens. A central objective of the present study is to enlarge on previous developments of J and CTOD estimation procedures for welded bend specimens based upon plastic eta factors (η) and plastic rotational factors (r_p). Very detailed non-linear finite element analyses of plane-strain models of standard SE(B) fracture specimens, with the notch located at the center of square-groove welds and in the heat-affected zone, provide the evolution of load with increasing crack mouth opening displacement required for the estimation procedure. One key result emerging from the analyses is that levels of weld strength mismatch within the range of ±20% do not significantly affect the J and CTOD estimation expressions applicable to homogeneous materials, particularly for deeply cracked fracture specimens with relatively large weld grooves. The present study provides additional understanding of the effect of weld strength mismatch on J and CTOD toughness measurements while, at the same time, adding a fairly extensive body of results for determining J and CTOD for different materials using bend specimens with varying geometries and mismatch levels.
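For reference, the eta-factor and plastic rotational-factor estimation schemes mentioned above are commonly written, in their standard homogeneous-material form (quoted here only for context, not taken from this paper), as:

```latex
J = \frac{K^2\,(1-\nu^2)}{E} + \frac{\eta_{pl}\,A_{pl}}{B\,b_0},
\qquad
\delta = \frac{K^2\,(1-\nu^2)}{2\,\sigma_{YS}\,E} + \frac{r_p\,b_0\,V_{pl}}{r_p\,b_0 + a_0},
```

where K is the elastic stress intensity factor, A_pl the plastic area under the load versus displacement record, B the specimen thickness, b_0 the uncracked ligament, a_0 the crack length, V_pl the plastic component of the crack mouth opening displacement, and σ_YS the yield strength. Weld strength mismatch enters through how η_pl and r_p change relative to the homogeneous case.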
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, will under-represent the variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The joint posterior density was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inference on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
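As a deliberately simplified illustration of this kind of analysis, the sketch below fits a logistic model for proportion data with an observation-level normal random effect to absorb overdispersion, using a basic random-walk Metropolis sampler. The data, priors and variable names are all assumptions made for the example, not taken from the paper.

```python
# Minimal sketch (not the authors' code): Bayesian logistic model for overdispersed
# proportion data, with an observation-level random effect, sampled by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(42)

# Simulated overdispersed binomial data: y successes out of n trials per unit
n_units = 60
n = rng.integers(10, 30, size=n_units)
x = rng.normal(size=n_units)                      # one covariate
u_true = rng.normal(scale=0.8, size=n_units)      # latent overdispersion effect
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x + u_true)))
y = rng.binomial(n, p_true)

def log_post(beta0, beta1, u, log_sigma):
    """Binomial log-likelihood + N(0, sigma^2) random effects + weak priors."""
    sigma = np.exp(log_sigma)
    eta = beta0 + beta1 * x + u
    loglik = np.sum(y * eta - n * np.log1p(np.exp(eta)))
    lp_u = -0.5 * np.sum((u / sigma) ** 2) - n_units * log_sigma
    lp_fix = -0.5 * (beta0 ** 2 + beta1 ** 2) / 100.0        # N(0, 10^2) on fixed effects
    lp_sigma = -0.5 * (sigma / 2.0) ** 2 + log_sigma         # half-normal on sigma (Jacobian included)
    return loglik + lp_u + lp_fix + lp_sigma

# Random-walk Metropolis over theta = [beta0, beta1, log_sigma, u_1..u_n]
theta = np.zeros(3 + n_units)
def unpack(t): return t[0], t[1], t[3:], t[2]
cur = log_post(*unpack(theta))
draws = []
for it in range(20000):
    prop = theta + rng.normal(scale=0.05, size=theta.size)
    new = log_post(*unpack(prop))
    if np.log(rng.uniform()) < new - cur:
        theta, cur = prop, new
    if it >= 10000 and it % 10 == 0:
        draws.append(theta[:3].copy())

draws = np.array(draws)
print("posterior means (beta0, beta1, log_sigma):", draws.mean(axis=0))
```

In practice a tuned sampler (or Gibbs steps) would be used, but the factorization of the posterior into likelihood, random-effect and prior terms is the essential ingredient.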
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
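To make the "one photon in two optical modes" encoding concrete, here is a small numerical sketch of how a phase shifter and a 50:50 beam splitter act on such a dual-rail qubit. The 2x2 mode transformations below follow one common convention and are included only as an illustration, not as the paper's construction.

```python
# Illustrative sketch: single-qubit operations on a dual-rail photonic qubit
# (one photon shared between modes a and b) built from linear optical elements.
import numpy as np

def phase_shifter(phi):
    """Phase shift on mode a relative to mode b."""
    return np.diag([np.exp(1j * phi), 1.0])

def beam_splitter(theta):
    """Lossless beam splitter; theta = pi/4 gives a 50:50 splitter."""
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

# |0> = photon in mode a, |1> = photon in mode b
ket0 = np.array([1.0, 0.0], dtype=complex)

# A phase shifter followed by a 50:50 beam splitter produces an equal superposition
U = beam_splitter(np.pi / 4) @ phase_shifter(np.pi / 2)
out = U @ ket0
print("output amplitudes:", out)
print("detection probabilities:", np.abs(out) ** 2)   # approximately [0.5, 0.5]
```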
Abstract:
It is recognized that vascular dispersion in the liver is a determinant of the high first-pass extraction of solutes by that organ. Such dispersion is also required for the translation of in-vitro microsomal activity into in-vivo predictions of hepatic extraction for any solute. We therefore investigated the relative dispersion of albumin transit times (CV²) in the livers of adult and weanling rats and in elasmobranch livers. The mean and normalized variance of the hepatic transit time distribution of albumin were estimated using parametric non-linear regression (with a correction for catheter influence) after an impulse (bolus) input of labelled albumin into a single-pass liver perfusion. The mean ± s.e. of CV² determined in each of the liver groups was 0.85 ± 0.20 (n = 12), 1.48 ± 0.33 (n = 7) and 0.90 ± 0.18 (n = 4) for the livers of adult rats, weanling rats and elasmobranchs, respectively. These CV² values are comparable with that reported previously for the dog and suggest that the CV² of the liver is of a similar order of magnitude irrespective of the age and morphological development of the species. It might, therefore, be justified, in the absence of other information, to predict the hepatic clearances and availabilities of highly extracted solutes by scaling within and between species using hepatic elimination models such as the dispersion model with a CV² of approximately unity.
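For orientation, the quantities reported above (mean transit time and its normalized variance CV²) can also be obtained directly from the statistical moments of an outflow concentration-time curve. The paper itself used parametric non-linear regression with a catheter correction; the moment-based version below is only a simple stand-in with synthetic data.

```python
# Illustrative sketch: MTT and CV^2 of a transit-time distribution from an
# outflow curve C(t) after a bolus input, using simple moment integrals.
import numpy as np

def transit_time_moments(t, c):
    """Mean transit time and normalized variance (CV^2) on a uniform time grid."""
    dt = t[1] - t[0]
    auc = np.sum(c) * dt                               # area under the curve
    mtt = np.sum(t * c) * dt / auc                     # mean transit time
    var = np.sum((t - mtt) ** 2 * c) * dt / auc        # variance of transit times
    return mtt, var / mtt ** 2

# Synthetic outflow curve with a log-normal-shaped transit-time density
t = np.linspace(0.1, 300.0, 3000)
c = np.exp(-0.5 * ((np.log(t) - np.log(30.0)) / 0.8) ** 2) / t
mtt, cv2 = transit_time_moments(t, c)
print(f"MTT = {mtt:.1f} s, CV^2 = {cv2:.2f}")
```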
Abstract:
Previous magnetic resonance imaging (MRI) studies have described consistent age-related gray matter (GM) reductions in the fronto-parietal neocortex, insula and cerebellum in elderly subjects, but not as frequently in limbic/paralimbic structures. However, it is unclear whether such features are already present during earlier stages of adulthood, and whether age-related GM changes may follow non-linear patterns in that age range. This voxel-based morphometry study investigated the relationship between GM volumes and age specifically during non-elderly life (18-50 years) in 89 healthy individuals (48 males and 41 females). Voxelwise analyses showed significant (p < 0.05, corrected) negative correlations in the right prefrontal cortex and left cerebellum, and positive correlations (indicating a lack of GM loss) in the medial temporal region, cingulate gyrus, insula and temporal neocortex. Analyses using ROI masks showed that age-related dorsolateral prefrontal volume decrements followed non-linear patterns and were less prominent in females than in males in this age range. These findings further support the notion of a heterogeneous and asynchronous pattern of age-related brain morphometric changes, with region-specific non-linear features.
Abstract:
Quantum feedback can stabilize a two-level atom against decoherence (spontaneous emission), putting it into an arbitrary (specified) pure state. This requires perfect homodyne detection of the atomic emission and instantaneous feedback. Inefficient detection was considered previously by two of us. Here we allow for a non-zero delay time τ in the feedback circuit. Because a two-level atom is a non-linear optical system, an analytical solution is not possible. However, quantum trajectories allow a simple numerical simulation of the resulting non-Markovian process. We find the effect of the time delay to be qualitatively similar to that of inefficient detection. The solution of the non-Markovian quantum trajectory will not remain fixed, so the time-averaged state will be mixed, not pure. In the case where one tries to stabilize the atom in the excited state, an approximate analytical solution to the quantum trajectory is possible. The result, that the purity (P = 2Tr[ρ²] − 1) of the average state is given by P = 1 − 4γτ (where γ is the spontaneous emission rate), is found to agree very well with the numerical results.
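A rough way to see the scaling of the quoted result (a heuristic consistent with it, not the paper's derivation): if during each feedback delay τ the excited atom decays with probability of order γτ, the time-averaged state is approximately the mixture ρ ≈ (1 − γτ)|e⟩⟨e| + γτ|g⟩⟨g|, whose purity is

```latex
P = 2\,\mathrm{Tr}[\rho^2] - 1
  \approx 2\left[(1-\gamma\tau)^2 + (\gamma\tau)^2\right] - 1
  = 1 - 4\gamma\tau + 4(\gamma\tau)^2
  \approx 1 - 4\gamma\tau .
```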
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
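The factorization described above means the "does a claim occur" part and the "how large is the claim" part can be fitted separately. The sketch below shows that split with two ordinary GLMs on synthetic data; it omits the random effects that the paper's GLMM formulation uses to handle within-subject correlation, and all variable names are illustrative.

```python
# Simplified sketch (not the authors' model): zero-augmented gamma idea as
# (i) a logistic GLM for claim incidence and (ii) a Gamma GLM for claim size given a claim.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
intervention = rng.integers(0, 2, size=n)            # 0 = control, 1 = intervention
X = sm.add_constant(intervention.astype(float))

# Simulated data: intervention lowers both claim incidence and mean claim cost
p_claim = 1 / (1 + np.exp(-(-1.0 - 0.5 * intervention)))
has_claim = rng.binomial(1, p_claim)
mean_cost = np.exp(7.0 - 0.3 * intervention)
cost = np.where(has_claim == 1, rng.gamma(shape=2.0, scale=mean_cost / 2.0), 0.0)

# Part 1: incidence of claims (logistic GLM)
incidence = sm.GLM(has_claim, X, family=sm.families.Binomial()).fit()

# Part 2: claim size, conditional on a claim being made (Gamma GLM, log link)
pos = cost > 0
severity = sm.GLM(cost[pos], X[pos],
                  family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print("incidence (logit) coefficients:", incidence.params)
print("severity (log-Gamma) coefficients:", severity.params)
```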
Abstract:
In modeling expectation formation, economic agents are usually viewed as forming expectations adaptively or in accordance with some rationality postulate. We offer an alternative nonlinear model where agents exchange their opinions and information with each other. Such a model yields multiple equilibria, or attracting distributions, that are persistent but subject to sudden large jumps. Using German Federal Statistical Office economic indicators and German IFO Poll expectational data, we show that this kind of model performs well in simulation experiments. Focusing upon producers' expectations in the consumption goods sector, we also discover evidence that structural change in the interactive process occurred over the period of investigation (1970-1998). Specifically, interactions in expectation formation seem to have become less important over time.
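As a generic illustration of how opinion interaction can generate multiple attracting distributions and sudden jumps (this is a textbook mean-field switching model, not the paper's exact specification), consider agents who revise a binary expectation with a probability that depends on the current average opinion:

```python
# Illustrative sketch: interaction-based expectation formation with optimistic (+1)
# and pessimistic (-1) agents whose switching probabilities depend on the average opinion.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 5000
beta, bias = 1.2, 0.0        # interaction strength and external bias
revise_frac = 0.05           # fraction of agents revising their expectation each period
s = rng.choice([-1, 1], size=N)

m_path = np.empty(T)
for t in range(T):
    m = s.mean()
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * (m + bias)))   # prob. of optimism given average opinion
    revise = rng.random(N) < revise_frac
    s = np.where(revise, np.where(rng.random(N) < p_up, 1, -1), s)
    m_path[t] = m

# For beta > 1 the mean-field map m -> tanh(beta*(m + bias)) has two stable equilibria,
# so the simulated average opinion tends to cluster away from zero and can jump between clusters.
print("mean |m| over the run:", np.abs(m_path).mean())
print("share of periods with optimistic majority:", (m_path > 0).mean())
```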
Abstract:
The bulk free radical copolymerization of 2-hydroxyethyl methacrylate (HEMA) with N-vinyl-2-pyrrolidone (VP) was carried out to low conversions at 50 °C, using benzoyl peroxide (BPO) as the initiator. The compositions of the copolymers were determined using C-13 NMR spectroscopy. The conversion of monomers to polymers was monitored using FT-NIR spectroscopy in order to predict the extent of conversion of monomer to polymer. From model fits to the composition data, a statistical F-test revealed that the penultimate model describes the copolymerization better than the terminal model. Reactivity ratios were calculated using a non-linear least squares (NLLS) analysis; r_H = 8.18 and r_V = 0.097 were found to be the best-fit values of the reactivity ratios for the terminal model, and r_HH = 12.0, r_VH = 2.20, r_VV = 0.12 and r_HV = 0.03 for the penultimate model. Predictions were made for changes in composition as a function of conversion based upon the terminal and penultimate models.
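For context, the terminal-model fit referred to above amounts to a non-linear least-squares fit of the Mayo-Lewis copolymer composition equation. The sketch below does such a fit on synthetic feed/copolymer composition pairs; the data points and starting values are illustrative, not the paper's measurements.

```python
# Illustrative sketch: NLLS estimation of terminal-model reactivity ratios
# from feed (f1) vs. copolymer (F1) composition data via the Mayo-Lewis equation.
import numpy as np
from scipy.optimize import curve_fit

def terminal_model(f1, r1, r2):
    """Instantaneous copolymer composition F1 as a function of monomer feed f1."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

# Synthetic composition data generated with r1 = 8, r2 = 0.1 plus noise
rng = np.random.default_rng(3)
f1 = np.linspace(0.05, 0.95, 12)
F1 = terminal_model(f1, 8.0, 0.1) + rng.normal(scale=0.01, size=f1.size)

(r1_hat, r2_hat), cov = curve_fit(terminal_model, f1, F1, p0=[1.0, 1.0],
                                  bounds=(0, np.inf))
print(f"r1 = {r1_hat:.2f}, r2 = {r2_hat:.3f}")
```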
Abstract:
In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small sample study for one of the test statistics through a simulated example.
Abstract:
The rheological behaviour of nine unprocessed Australian honeys was investigated for the applicability of the Williams-Landel-Ferry (WLF) model. The viscosity of the honeys was obtained over a range of shear rates (0.01-40 s⁻¹) from 2 to 40 °C, and all the honeys exhibited Newtonian behaviour, with viscosity decreasing as the temperature was increased. The honeys with higher moisture content had lower viscosity. The glass transition temperatures of the honeys, as measured with a differential scanning calorimeter (DSC), ranged from −40 to −46 °C, and four models (WLF, Arrhenius, Vogel-Tammann-Fulcher (VTF) and power-law) were investigated to describe the temperature dependence of the viscosity. The WLF model was the most suitable: its correlation coefficient averaged 0.999 ± 0.0013 as against 0.996 ± 0.0042 for the Arrhenius model, while the mean relative deviation modulus was 0-12% for the WLF model and 10-40% for the Arrhenius one. With the universal values for the WLF constants, the temperature dependence of the viscosity was poorly predicted. From non-linear regression analysis, the constants of the WLF model for the honeys were obtained (C1 = 13.7-21.1; C2 = 55.9-118.7) and differ from the universal values. These WLF constants will be valuable for adequate modelling of the rheology of the honeys, and they can be used to assess the temperature sensitivity of the honeys.
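A non-linear regression of the kind described above fits the WLF equation, log10[η(T)/η(Tg)] = −C1(T − Tg)/(C2 + T − Tg), to viscosity-temperature data. The sketch below does this on synthetic numbers (the Tg, the viscosity level at Tg, and the starting values, which are the universal constants 17.44 and 51.6, are assumptions for the example, not the honey data).

```python
# Illustrative sketch: fitting the WLF equation by non-linear regression.
import numpy as np
from scipy.optimize import curve_fit

Tg = -43.0          # glass transition temperature, deg C (illustrative)
log_eta_g = 12.0    # assumed log10 viscosity (Pa.s) at Tg, a common convention

def wlf_log_eta(T, C1, C2):
    """log10 viscosity predicted by the WLF equation relative to Tg."""
    return log_eta_g - C1 * (T - Tg) / (C2 + (T - Tg))

# Synthetic "measurements" generated with C1 = 17, C2 = 80 plus noise
rng = np.random.default_rng(7)
T = np.linspace(2.0, 40.0, 9)
log_eta = wlf_log_eta(T, 17.0, 80.0) + rng.normal(scale=0.02, size=T.size)

(C1_hat, C2_hat), _ = curve_fit(wlf_log_eta, T, log_eta, p0=[17.44, 51.6])
print(f"C1 = {C1_hat:.1f}, C2 = {C2_hat:.1f}")
```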
Abstract:
We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling, as implemented by BUGS, to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%), even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
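For readers unfamiliar with the ACE terminology used above, the standard twin-model decomposition of phenotypic variance into additive genetic (A), common environment (C) and unique environment (E) components, which both the Mx and BUGS formulations estimate, can be summarized as:

```latex
\sigma^2_{P} = \sigma^2_{A} + \sigma^2_{C} + \sigma^2_{E}, \qquad
\mathrm{Cov}_{MZ} = \sigma^2_{A} + \sigma^2_{C}, \qquad
\mathrm{Cov}_{DZ} = \tfrac{1}{2}\,\sigma^2_{A} + \sigma^2_{C},
```

where MZ and DZ denote monozygotic and dizygotic twin pairs, respectively.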
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last being a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity in 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
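A minimal sketch of the two-stage idea described above (illustrative only, not the paper's implementation): a non-parametric learner screens for important variables, and a parsimonious parametric model is then fitted to those variables for prediction. The dataset and variable indices are synthetic.

```python
# Stage 1: exploratory variable screening with a decision tree.
# Stage 2: parsimonious parametric predictive model on the selected variables.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n, p = 1000, 12
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.8 * X[:, 7]     # only 3 variables matter
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: fit a shallow tree and keep the most important variables
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(tree.feature_importances_)[::-1][:3]
print("selected variables:", sorted(keep.tolist()))

# Stage 2: fit the final predictive model on the selected variables
clf = LogisticRegression().fit(X_tr[:, keep], y_tr)
print("holdout accuracy:", clf.score(X_te[:, keep], y_te))
```

In the paper's framework the final model could equally be non-parametric or Bayesian; the point of the sketch is only the exploration-then-prediction workflow.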