973 results for First order autoregressive model AR(1)


Relevance: 100.00%

Abstract:

Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated. The Peterlin approximation replaces the length of the spring by the length of the average spring, and the macroscopic dumbbell-based model for dilute polymer solutions is thereby obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, with diffusive effects taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
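For orientation, a schematic of the system just described is sketched below in generic form; the elastic stress tau(C) and the closure functions f and g depend on the chosen Peterlin spring law and are left unspecified, so this is a shape-of-the-equations sketch rather than the paper's exact formulation.

```latex
% Schematic Peterlin-type system: incompressible flow coupled to the
% conformation tensor C. f and g encode the Peterlin closure (spring
% length replaced by the mean length); \varepsilon\Delta C is the
% diffusive effect mentioned above.
\[
\begin{aligned}
\nabla\cdot u &= 0,\\
\partial_t u + (u\cdot\nabla)u + \nabla p &= \nu\,\Delta u + \nabla\cdot\tau(C),\\
\partial_t C + (u\cdot\nabla)C &= (\nabla u)\,C + C\,(\nabla u)^{\mathsf T}
  - f(\operatorname{tr}C)\,C + g(\operatorname{tr}C)\,I + \varepsilon\,\Delta C.
\end{aligned}
\]
```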

Relevance: 100.00%

Abstract:

Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the explicit algebraic SGS model (EASSM), by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al. 2007) are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
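For readers unfamiliar with eddy-viscosity closures, the sketch below shows the simplest member of the family, a static-coefficient Smagorinsky-type model; the DEVM discussed above instead computes the coefficient dynamically from a test filter, and the EASSM replaces the scalar eddy viscosity with an explicit algebraic tensor relation. Function and parameter names are illustrative, not taken from the thesis code.

```python
import numpy as np

def eddy_viscosity_sgs(u_grad, delta, c_s=0.17):
    """Minimal static Smagorinsky-type eddy-viscosity closure (a sketch).

    u_grad : (3, 3) resolved velocity-gradient tensor d(u_i)/d(x_j)
    delta  : filter width (grid spacing)
    c_s    : model constant; the DEVM computes this dynamically instead
    Returns the modeled SGS stress tau_ij = -2 * nu_t * S_ij.
    """
    s = 0.5 * (u_grad + u_grad.T)          # resolved strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S| = sqrt(2 S_ij S_ij)
    nu_t = (c_s * delta) ** 2 * s_mag      # eddy viscosity
    return -2.0 * nu_t * s
```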

Relevance: 100.00%

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense, which in turn depends on the trajectory of the same parameters over previous time instants. In this work dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
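A minimal sketch of the dynamic constraint idea, assuming the generic second-order form x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u(t); the natural frequency, damping ratio, and the step command below are illustrative placeholders, not the values identified from engine data.

```python
import numpy as np

def second_order_response(u_cmd, dt, wn, zeta):
    """Filter a commanded trajectory through a linear second-order lag,
    the generic form of the dynamic constraint described above; wn and
    zeta would be identified from measured transient data."""
    x, v = u_cmd[0], 0.0
    achieved = np.empty_like(u_cmd)
    for i, u in enumerate(u_cmd):
        a = wn**2 * (u - x) - 2.0 * zeta * wn * v  # acceleration
        v += a * dt                                # update velocity
        x += v * dt                                # semi-implicit position step
        achieved[i] = x
    return achieved

# e.g. a boost-pressure step command passed through the constraint model
t = np.arange(0.0, 5.0, 0.01)
cmd = np.where(t < 1.0, 1.0, 1.5)
ach = second_order_response(cmd, dt=0.01, wn=4.0, zeta=0.8)
```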

Relevance: 100.00%

Abstract:

Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To help fill this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N-cycling processes in a mountain spruce forest at the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether the process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get the whole-system C and N cycling right in a model without really knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. With the possibility to simulate 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for the validation of the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; a long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution for comparison against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model is able to reproduce the fact that the largest part of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer by the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
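As a toy illustration of the kind of fine-scale bookkeeping such a tracer simulation performs (not TRACE itself), the sketch below moves 15N between first-order pools; the pool names, rate constants, and deposition rate are invented for illustration.

```python
import numpy as np

def step_tracer(pools, rates, deposition, dt):
    """Advance 15N masses one step: each (src, dst) transfer is
    first-order in the source pool, plus labeled deposition input."""
    flux = {key: k * pools[key[0]] for key, k in rates.items()}
    new = dict(pools)
    new["soil"] += deposition * dt
    for (src, dst), f in flux.items():
        new[src] -= f * dt
        new[dst] += f * dt
    return new

pools = {"soil": 0.0, "vegetation": 0.0, "litter": 0.0}
rates = {("soil", "vegetation"): 0.05,   # plant uptake (illustrative, per yr)
         ("vegetation", "litter"): 0.3,  # litterfall
         ("litter", "soil"): 0.2}        # decomposition
for _ in range(100):                     # 100 yearly steps
    pools = step_tracer(pools, rates, deposition=0.1, dt=1.0)
```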

Relevance: 100.00%

Abstract:

The Greenland NGRIP ice core continuously covers the period from the present day back to 123 ka before present, which includes several thousand years of ice from the previous interglacial period, MIS 5e or the Eemian. In the glacial part of the core, annual layers can be identified from impurity records and visual stratigraphy, and stratigraphic layer counting has been performed back to 60 ka. In the deepest part of the core, however, the ice is close to the pressure melting point, the visual stratigraphy is dominated by crystal boundaries, and annual layering is not visible to the naked eye. In this study, we apply a newly developed setup for high-resolution ice core impurity analysis to produce continuous records of dust, sodium and ammonium concentrations, as well as meltwater conductivity. We analyzed three 2.2 m sections of ice from the Eemian and the glacial inception. In all of the analyzed ice, annual layers can clearly be recognized, most prominently in the dust and conductivity profiles. Some of the samples are, however, contaminated with dust, most likely from drill liquid. It is interesting that the annual layering is preserved despite very active crystal growth and grain-boundary migration in the deep and warm NGRIP ice. Based on annual layer counting of the new records, we determine a mean annual layer thickness close to 11 mm for all three sections, which, to first order, confirms the modeled NGRIP time scale (ss09sea). The counting does, however, suggest a duration of the climatically warmest part of the NGRIP record (MIS 5e) up to 1 ka longer than the model estimate. Our results suggest that stratigraphic layer counting is possible essentially throughout the entire NGRIP ice core, provided sufficiently highly resolved profiles become available.
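A minimal sketch of the layer-counting idea on a single impurity record; the actual counting combined several records and visual inspection, and the peak-separation threshold below is an illustrative assumption.

```python
import numpy as np
from scipy.signal import find_peaks

def mean_layer_thickness(depth_mm, dust, min_sep_mm=5.0):
    """Count annual layers as local maxima of a high-resolution dust
    profile and return the mean thickness. min_sep_mm guards against
    counting measurement noise as separate years (assumes the profile
    contains at least two detectable annual peaks)."""
    step = np.median(np.diff(depth_mm))               # sampling interval
    peaks, _ = find_peaks(dust, distance=max(1, int(min_sep_mm / step)))
    n_years = len(peaks)
    return (depth_mm[peaks[-1]] - depth_mm[peaks[0]]) / (n_years - 1)
```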

Relevance: 100.00%

Abstract:

We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed-point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced-rank extrapolation (RRE1) and the minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1, while avoiding both stagnation and near breakdown. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood and its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow and each EM step is computationally demanding.
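A minimal sketch of one squared extrapolation (SQUAREM-style) cycle, assuming a user-supplied EM map; the steplength shown is one common first-order choice, and schemes such as SqRRE1/SqMPE1 differ only in how the steplength is computed from the two difference vectors. The trailing stabilizing EM step follows common practice and is an assumption here, not a detail taken from the abstract.

```python
import numpy as np

def squarem_cycle(x, em_update):
    """One squared first-order extrapolation cycle around the EM map
    em_update. Only the basic EM step is required: no likelihoods,
    gradients, or Hessians."""
    x1 = em_update(x)
    x2 = em_update(x1)
    r = x1 - x                    # first difference of the EM sequence
    v = (x2 - x1) - r             # second difference
    # one common steplength choice; other first-order schemes use
    # alpha = (r @ v) / (v @ v) or (r @ r) / (r @ v) instead
    alpha = -np.sqrt(r @ r) / np.sqrt(v @ v)
    x_new = x - 2.0 * alpha * r + alpha**2 * v   # squared update
    return em_update(x_new)       # stabilizing EM step (common practice)
```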

Relevance: 100.00%

Abstract:

BACKGROUND: Bone morphogenetic protein (BMP) is a potent differentiating agent for cells of the osteoblastic lineage. It has been used in the oral cavity under a variety of indications and with different carriers. However, the optimal carrier for each indication is not known. This study examined a synthetic bioabsorbable carrier for BMP used in osseous defects around dental implants in the canine mandible. METHODS: Twelve canines had their four mandibular premolars and first molars extracted bilaterally. After 5 months, four implants were placed with standardized circumferential defects around the coronal 4 mm of each implant. One half of the defects received a polylactide/glycolide (PLGA) polymer carrier with or without recombinant human BMP-2 (rhBMP-2), and the other half received a collagen carrier with or without rhBMP-2. Additionally, one half of the implants were covered with a non-resorbable expanded polytetrafluoroethylene (ePTFE) membrane to exclude soft tissues. Animals were sacrificed either 4 or 12 weeks later. Histomorphometric analysis included the percentage of new bone contact with the implant, the area of new bone, and the percentage of defect fill. This article describes results with the PLGA carrier. RESULTS: All implants demonstrated clinical and radiographic success, with the amount of new bone formed dependent on the healing time and on the presence or absence of rhBMP-2 and of a membrane. The percentage of bone-to-implant contact was greater with rhBMP-2, and after 12 weeks of healing approximately one-third of the implant surface in the defect site was in contact with bone. After 4 weeks, the presence of a membrane appeared to slow the formation of new bone area. In membrane-treated sites with rhBMP-2, the percentage of defect fill rose from 24% at 4 weeks to 42% at 12 weeks; without rhBMP-2, it rose from 14% to 36% over the same period. CONCLUSIONS: After 4 weeks, the rhBMP-2-treated sites had a significantly higher percentage of contact, more new bone area, and a higher percentage of defect fill than the sites without rhBMP-2. After 12 weeks, there was no significant difference between sites with or without rhBMP-2 in percentage of contact, new bone area, or percentage of defect fill. For these three outcomes, comparing the results with this carrier to those reported earlier in this study for a collagen carrier, only the area of new bone differed significantly, with the collagen carrier producing more bone than the PLGA carrier. Thus, the PLGA carrier for rhBMP-2 significantly stimulated bone formation around dental implants in this model after 1 month but not after 3 months of healing. The use of this growth factor and carrier combination appears to stimulate early bone-healing events around the implants, but not quite to the same degree as a collagen carrier.

Relevance: 100.00%

Abstract:

The pharmacokinetics of ketamine and norketamine enantiomers after administration of intravenous (IV) racemic ketamine (R-/S-ketamine; 2.2 mg/kg) or S-ketamine (1.1 mg/kg) to five ponies sedated with IV xylazine (1.1 mg/kg) were compared. The time intervals to assume sternal and standing positions were recorded. Arterial blood samples were collected before and 1, 2, 4, 6, 8 and 13 min after ketamine administration. Arterial blood gases were evaluated 5 min after ketamine injection. Plasma concentrations of ketamine and norketamine enantiomers were determined by capillary electrophoresis and were evaluated by non-linear least-squares regression analysis applying a monocompartmental model. The first-order elimination rate constant was significantly higher, and the elimination half-life and mean residence time lower, for S-ketamine after S-ketamine compared to R-/S-ketamine administration. The maximum concentration of S-norketamine was higher after S-ketamine administration. The time to standing was significantly shorter after S-ketamine compared to R-/S-ketamine. Blood gases showed mild hypoxaemia and hypercarbia.
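A minimal sketch of fitting a monocompartmental IV bolus model by non-linear least squares; the sampling times match the design above, but the concentration values and initial guesses are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, c0, ke):
    """Monocompartmental IV bolus model: C(t) = C0 * exp(-ke * t),
    where ke is the first-order elimination rate constant and the
    elimination half-life follows as t_half = ln(2) / ke."""
    return c0 * np.exp(-ke * t)

# sampling times [min] from the study design; concentrations are invented
t = np.array([1, 2, 4, 6, 8, 13], dtype=float)
c = np.array([900, 700, 430, 260, 160, 60], dtype=float)  # e.g. ng/mL
(c0, ke), _ = curve_fit(one_compartment, t, c, p0=(1000.0, 0.2))
t_half = np.log(2) / ke
```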

Relevance: 100.00%

Abstract:

Research on rehabilitation has shown that appropriate and repetitive mechanical movements can help spinal cord injured individuals restore their functional standing and walking. The objective of this work was to achieve appropriate and repetitive joint movements and an approximately normal gait through the PGO by replicating normal walking, and to minimize the energy consumption for both the patient and the device. A model-based experimental investigative approach is presented in this dissertation. First, a human model was created in I-DEAS and human walking was simulated in ADAMS. The main feature of this model was the foot-ground contact model, which had distributed contact points along the foot and varied viscoelasticity. The model was validated by comparing simulated results of normal walking with measured ones from the literature. It was used to simulate walking with the current PGO to investigate the real causes of its poor function, even though the device produces joint movements close to normal walking. The direct cause was that only one leg moves at a time, which results in short step length and no clearance after toe-off; this cannot be solved by simply adding power at both hip joints. In order to find a better answer, a PGO mechanism model was used to investigate different walking mechanisms by locking or releasing some joints. A trade-off between energy consumption, control complexity and standing position was found. Finally, a foot-release PGO virtual model was created and simulated, and only the foot-release mechanism was developed into a prototype. Both the release mechanism and the design of the foot release were validated experimentally by adding the foot release to the current PGO. This demonstrated an advance in improving functional aspects of the current PGO, even without a complete physical model of the foot-release PGO for comparison.
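A minimal sketch of a single viscoelastic contact point of the kind described above; in the full model many such points are distributed along the foot, with stiffness and damping varying by location (the parameter values here are placeholders).

```python
def contact_force(penetration, penetration_rate, k=5.0e4, c=1.0e3):
    """Normal force of one viscoelastic foot-ground contact point:
    a spring-damper (k [N/m], c [N*s/m]) that only pushes, never pulls.
    Active only while the point penetrates the ground (penetration > 0)."""
    if penetration <= 0.0:
        return 0.0
    return max(0.0, k * penetration + c * penetration_rate)
```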

Relevance: 100.00%

Abstract:

Eutrophication is a persistent problem in many freshwater lakes. Delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, the application of management tools to sediment diagenesis is often neglected due to conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while being accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P and Residual-P. Chemical fractionation data from this study showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile; sorption has been shown to contribute substantially to P burial within the profile. A new kinetic approach involving partitioning of P into process-based fractions is applied here. Results from this approach indicate that labile P (Ca Mineral-P and organic P) is contributing to internal P loading to Onondaga Lake, through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains constant. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus, to quantify the remaining phosphorus that will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results presented here also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
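A minimal sketch of the depth-dependent first-order kinetics described above; the functional form k(z) = k0 * exp(-beta * z), the rate parameters, and the burial velocity are illustrative assumptions, not the fitted Onondaga Lake values.

```python
import numpy as np

def labile_p_remaining(p0, age_yr, w_cm_per_yr, k0=0.05, beta=0.1):
    """Labile P left in a sediment layer deposited age_yr years ago.
    The layer is buried at rate w while decaying first-order,
    dP/dt = -k(z) * P, with a rate coefficient that decreases with
    depth (progressive preservation): k(z) = k0 * exp(-beta * z)."""
    dt, p = 1.0, p0
    for yr in np.arange(0.0, age_yr, dt):
        z = w_cm_per_yr * yr              # depth of the layer at time yr
        k = k0 * np.exp(-beta * z)        # depth-dependent rate coefficient
        p *= np.exp(-k * dt)              # first-order decay over dt
    return p
```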

Relevance: 100.00%

Abstract:

In this paper, we propose a theoretical model to study the effect of income insecurity of parents and offspring on the child's residential choice. Parents are partially altruistic toward their children and will provide financial help to an independent child when her income is low relative to the parents'. We find that children of more altruistic parents are more likely to become independent. However, first-order stochastic dominance (FOSD) shifts in the distribution of the child's future income (or her parents') have ambiguous effects on the child's residential choice. Parental altruism is the very source of ambiguity in the results. If parents are selfish, or the joint income distribution of parents and child places no mass on the region where transfers are provided, a FOSD shift in the distribution of the child's (parents') future income will reduce (raise) the child's current income threshold for independence.
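For reference, the standard definition of first-order stochastic dominance invoked above:

```latex
% F first-order stochastically dominates G when it assigns no more
% probability to low outcomes; equivalently, every agent with a
% nondecreasing utility u weakly prefers F to G.
\[
F \succeq_{\mathrm{FOSD}} G
\iff F(x) \le G(x) \;\; \text{for all } x
\iff \int u\,\mathrm{d}F \;\ge\; \int u\,\mathrm{d}G
\;\; \text{for every nondecreasing } u .
\]
```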

Relevance: 100.00%

Abstract:

We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that the effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) are stronger at higher frequencies; the added value of REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines the spectrum estimate.
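A minimal sketch of the red-noise AR(1) alternative on an uneven timescale, the null model against which REDFIT-type algorithms assess spectral peaks; in practice the persistence time tau is estimated from the data rather than assumed.

```python
import numpy as np

def ar1_uneven(t, tau, seed=0):
    """AR(1) red noise on unevenly spaced times t:
        x_i = a_i * x_{i-1} + eps_i,  a_i = exp(-(t_i - t_{i-1}) / tau),
    with eps_i scaled so the process keeps unit variance."""
    rng = np.random.default_rng(seed)
    x = np.empty(len(t))
    x[0] = rng.standard_normal()
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)   # lag-dependent persistence
        x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
    return x
```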

Relevance: 100.00%

Abstract:

The Genesis mission Solar Wind Concentrator was built to enhance fluences of solar wind by an average of 20x over the 2.3 years that the mission exposed substrates to the solar wind. The Concentrator targets survived the hard landing upon return to Earth and were used to determine the isotopic composition of solar-wind (and hence solar) oxygen and nitrogen. Here we report on the flight operation of the instrument and on simulations of its performance. Concentration and fractionation patterns obtained from simulations are given for He, Li, N, O, Ne, Mg, Si, S, and Ar in SiC targets, and are compared with measured concentrations and isotope ratios for the noble gases. Carbon is also modeled for a Si target. Predicted differences in instrumental fractionation between elements are discussed. Additionally, as the Concentrator was designed only for ions ≤22 AMU, implications of analyzing elements as heavy as argon are discussed. Post-flight simulations of instrumental fractionation as a function of radial position on the targets incorporate solar-wind velocity and angular distributions measured in flight, and predict fractionation patterns for various elements and isotopes of interest. A tighter angular distribution, mostly due to better spacecraft spin stability than assumed in pre-flight modeling, results in a steeper isotopic fractionation gradient between the center and the perimeter of the targets. Using the distribution of solar-wind velocities encountered during flight, which are higher than those used in pre-flight modeling, results in elemental abundance patterns slightly less peaked at the center. Mean fractionations trend with atomic mass, with differences relative to the measured isotopes of neon of +4.1±0.9 ‰/amu for Li, between -0.4 and +2.8 ‰/amu for C, +1.9±0.7 ‰/amu for N, +1.3±0.4 ‰/amu for O, -7.5±0.4 ‰/amu for Mg, -8.9±0.6 ‰/amu for Si, and -22.0±0.7 ‰/amu for S (uncertainties reflect Monte Carlo statistics). The slopes of the fractionation trends depend to first order only on the relative differential mass ratio, Δm/m. This article and a companion paper (Reisenfeld et al. 2012, this issue) provide post-flight information necessary for the analysis of the Genesis solar wind samples, and thus serve to complement the Space Science Reviews volume The Genesis Mission (v. 105, 2003).

Relevance: 100.00%

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random-effects or random-components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case bias-correction techniques such as bootstrapping should be considered as an alternative procedure. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989-996.
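A minimal sketch of simulating data from the 3-level Poisson design described above (counts nested in subjects nested in clusters); the sample sizes and variance components are illustrative, and the PQL/MQL fitting itself would be done in specialized multilevel software.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_3level(n3=20, n2=20, n1=5, beta0=1.0, sigma3=0.3, sigma2=0.3):
    """Draw counts from a 3-level Poisson model with a log link:
    level-3 clusters and level-2 subjects contribute normal random
    intercepts u3 and u2; each subject yields n1 repeated counts."""
    y = []
    for _ in range(n3):
        u3 = rng.normal(0.0, sigma3)        # level-3 random intercept
        for _ in range(n2):
            u2 = rng.normal(0.0, sigma2)    # level-2 random intercept
            lam = np.exp(beta0 + u3 + u2)   # subject-specific Poisson mean
            y.append(rng.poisson(lam, size=n1))
    return np.concatenate(y)

counts = simulate_3level()
```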

Relevance: 100.00%

Abstract:

We show that exotic phases arise in generalized lattice gauge theories known as quantum link models, in which classical gauge fields are replaced by quantum operators. While these quantum models with discrete variables have a finite-dimensional Hilbert space per link, the continuous gauge symmetry is still exact. An efficient cluster algorithm is used to study these exotic phases. The (2+1)-d system is confining at zero temperature with a spontaneously broken translation symmetry. A crystalline phase exhibits confinement via multi-stranded strings between charge-anti-charge pairs. A phase transition between two distinct confined phases is weakly first order and has an emergent spontaneously broken approximate SO(2) global symmetry. The low-energy physics is described by a (2+1)-d RP(1) effective field theory, perturbed by a dangerously irrelevant SO(2)-breaking operator, which prevents the interpretation of the emergent pseudo-Goldstone boson as a dual photon. This model is an ideal candidate to be implemented in quantum simulators to study phenomena that are not accessible to Monte Carlo simulations, such as the real-time evolution of the confining string and the real-time dynamics of the pseudo-Goldstone boson.