916 results for Pseudo-second-order kinetic models


Relevance: 100.00%

Abstract:

Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other, lower temporal resolution proxy series. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied 3 essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying 3 different reconstruction techniques: (1) principal components (PC) plus second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with 6 climate model simulations (including the first 5 models derived from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability over the past millennium.
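
As an illustration of the composite-plus-scale (CPS) step used as one of the three techniques, a minimal sketch in Python, assuming the proxies are already standardized annual anomaly series and that an instrumental target series exists for a calibration window; the array shapes, the random placeholder data and the calibration indices are all hypothetical.

```python
import numpy as np

def cps_reconstruction(proxies, target, calib_idx):
    """Composite-plus-scale (CPS) sketch.

    proxies   : (n_years, n_records) standardized annual proxy anomalies
    target    : instrumental temperature anomalies for the calibration window
    calib_idx : indices of the calibration years on the proxy time axis
    """
    # 1. Composite: unweighted mean across records for every year.
    composite = np.nanmean(proxies, axis=1)

    # 2. Scale: match the mean and variance of the composite to the
    #    instrumental target over the calibration window.
    c_cal = composite[calib_idx]
    scale = np.std(target) / np.std(c_cal)
    offset = np.mean(target) - scale * np.mean(c_cal)
    return scale * composite + offset

# Hypothetical usage: random data standing in for 45 records over 1000 years,
# calibrated against a 150-year instrumental window at the end of the series.
rng = np.random.default_rng(0)
recon = cps_reconstruction(rng.standard_normal((1000, 45)),
                           rng.standard_normal(150),
                           np.arange(850, 1000))
```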

Relevance: 100.00%

Abstract:

The factorial validity of the SF-36 was evaluated using confirmatory factor analysis (CFA) methods, structural equation modeling (SEM), and multigroup structural equation modeling (MSEM). First, the measurement and structural model of the hypothesized SF-36 was explicated. Second, the model was tested for the validity of a second-order factorial structure; upon evidence of model misfit, the best-fitting model was determined and its validity was tested on a second random sample from the same population. Third, the best-fitting model was tested for invariance of the factorial structure across race, age, and educational subgroups using MSEM. The findings support the second-order factorial structure of the SF-36 as proposed by Ware and Sherbourne (1992). However, the results suggest that: (a) Mental Health and Physical Health covary; (b) general mental health cross-loads onto Physical Health; (c) general health perception loads onto Mental Health instead of Physical Health; (d) many of the error terms are correlated; and (e) the physical function scale is not reliable across these two samples. This hierarchical factor pattern was replicated across both samples of health care workers, suggesting that the post hoc model fitting was not data specific. Subgroup analysis suggests that the physical function scale is not reliable across the "age" or "education" subgroups and that the general mental health scale path from Mental Health is not reliable across the "white/nonwhite" or "education" subgroups. The importance of this study lies in the use of SEM and MSEM in evaluating sample data from the use of the SF-36. These methods are uniquely suited to the analysis of latent variable structures and are widely used in other fields. The use of latent variable models for self-reported outcome measures has become widespread and should now be applied to medical outcomes research. Invariance testing is superior to mean scores or summary scores when evaluating differences between groups. From a practical as well as psychometric perspective, it seems imperative that construct validity research related to the SF-36 establish whether this same hierarchical structure and invariance holds for other populations. This project is presented as three articles to be submitted for publication.
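
For readers unfamiliar with second-order factorial structures, a minimal sketch of how such a model can be specified, here with the semopy package for Python (not the software used in the study); the item-level column names and the CSV file are hypothetical placeholders.

```python
import pandas as pd
import semopy

# Second-order CFA sketch: hypothetical item scores load on first-order scale
# factors, which in turn load on second-order Physical and Mental factors
# that are allowed to covary, mirroring the structure tested in the study.
model_desc = """
PF =~ pf1 + pf2 + pf3
RP =~ rp1 + rp2
BP =~ bp1 + bp2
VT =~ vt1 + vt2
SF =~ sf1 + sf2
MH =~ mh1 + mh2
Physical =~ PF + RP + BP
Mental =~ VT + SF + MH
Physical ~~ Mental
"""

data = pd.read_csv("sf36_items.csv")    # hypothetical item-level data
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                  # parameter estimates
print(semopy.calc_stats(model))         # fit indices (chi-square, CFI, RMSEA, ...)
```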

Relevance: 100.00%

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).1 Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher level (group) as well as lower level (individual) variation. Designing a study in which the unit of observation is nested within higher level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. The results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level 2 and level 3 variation is less than 0.10. However, as the higher level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher level error variance is large, the estimates may be significantly biased; in that case, bias correction techniques such as bootstrapping should be considered as an alternative procedure. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level 1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large. 1 Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
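
A minimal sketch of the 3-level Poisson data-generating process examined in this design work: level-1 observations nested in level-2 clusters nested in level-3 clusters, with normally distributed random intercepts on the log scale; the sample sizes and variance components below are illustrative values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

n3, n2, n1 = 20, 20, 5                 # level-3 clusters, level-2 clusters per level-3, level-1 units
beta0 = 1.0                            # fixed intercept on the log scale
sigma2_l3, sigma2_l2 = 0.10, 0.10      # level-3 and level-2 random-intercept variances

rows = []
for k in range(n3):
    u3 = rng.normal(0.0, np.sqrt(sigma2_l3))        # level-3 random intercept
    for j in range(n2):
        u2 = rng.normal(0.0, np.sqrt(sigma2_l2))    # level-2 random intercept
        mu = np.exp(beta0 + u3 + u2)                # conditional Poisson mean
        y = rng.poisson(mu, size=n1)                # level-1 Poisson counts
        rows.extend((k, j, i, y_i) for i, y_i in enumerate(y))

data = np.array(rows)    # columns: level-3 id, level-2 id, level-1 index, count
print(data.shape)        # (20 * 20 * 5, 4) = (2000, 4)
```

Fitting such data would then proceed with PQL or MQL estimation in multilevel software, which is what the study compares.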

Relevance: 100.00%

Abstract:

The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.

Relevance: 100.00%

Abstract:

In order to study further the long-range correlations ("ridge") observed recently in p+Pb collisions at sqrt(s_NN) = 5.02 TeV, the second-order azimuthal anisotropy parameter of charged particles, v_2, has been measured with the cumulant method using the ATLAS detector at the LHC. In a data sample corresponding to an integrated luminosity of approximately 1 microb^(-1), the parameter v_2 has been obtained using two- and four-particle cumulants over the pseudorapidity range |eta|<2.5. The results are presented as a function of transverse momentum and the event activity, defined in terms of the transverse energy summed over 3.1 < eta < 4.9 in the direction of the Pb beam, and are compared to hydrodynamic models of p+Pb collisions. Despite the small transverse spatial extent of the p+Pb collision system, the large magnitude of v_2 and its similarity to hydrodynamic predictions provide additional evidence for the importance of final-state effects in p+Pb reactions.
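
A minimal sketch of the two- and four-particle Q-cumulant estimates of v_2 referred to above, without the event weights, acceptance corrections or event-activity binning of the actual measurement; the toy events with an injected v_2 = 0.06 modulation are purely illustrative.

```python
import numpy as np

def v2_cumulants(events):
    """Direct two- and four-particle cumulant estimates of v2.

    events : list of 1-D arrays of azimuthal angles (one array per event)
    Returns (v2{2}, v2{4}); no event weighting or acceptance corrections.
    """
    corr2, corr4 = [], []
    for phi in events:
        M = len(phi)
        if M < 4:
            continue
        Q2, Q4 = np.exp(2j * phi).sum(), np.exp(4j * phi).sum()
        corr2.append((abs(Q2) ** 2 - M) / (M * (M - 1)))          # single-event <2>
        num = (abs(Q2) ** 4 + abs(Q4) ** 2
               - 2 * (Q4 * np.conj(Q2) ** 2).real
               - 2 * (2 * (M - 2) * abs(Q2) ** 2 - M * (M - 3)))
        corr4.append(num / (M * (M - 1) * (M - 2) * (M - 3)))     # single-event <4>
    c2 = np.mean(corr2)                      # c2{2}
    c4 = np.mean(corr4) - 2 * c2 ** 2        # c2{4}
    v22 = np.sqrt(c2) if c2 > 0 else np.nan
    v24 = (-c4) ** 0.25 if c4 < 0 else np.nan
    return v22, v24

# Toy events with an injected v2 = 0.06 modulation (accept-reject sampling).
rng = np.random.default_rng(1)
events = []
for _ in range(2000):
    M = rng.integers(50, 150)
    phi = np.empty(0)
    while phi.size < M:
        x = rng.uniform(0.0, 2.0 * np.pi, 2 * M)
        keep = rng.uniform(0.0, 1.2, 2 * M) < 1.0 + 2.0 * 0.06 * np.cos(2.0 * x)
        phi = np.concatenate([phi, x[keep]])
    events.append(phi[:M])
print(v2_cumulants(events))   # both estimates should come out near 0.06
```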

Relevance: 100.00%

Abstract:

Two highly efficient solids (K2CO3/sludge carbon and ZnCl2/sludge carbon) were prepared by chemical addition following carbonization at 800 °C and were tested for the anaerobic reduction of tartrazine dye in a continuous upflow packed-bed biological reactor; their performance was compared to that of commercial activated carbon (CAC). The solids were subjected to various characterizations of their chemical and structural properties in order to understand the mechanism of anaerobic decolorization; the decolorization efficiency of the SBCZN800 and SBCPC800 materials was 87% and 74%, respectively, at a short space time (τ) of 2.0 min. A first-order kinetic model fitted the experimental points, and kinetic constants of 0.40, 0.92 and 1.46 min^(-1) were obtained for SBCZN800, SBCPC800 and CAC, respectively. The experimental results revealed that the performance of the solids in the anaerobic reduction of tartrazine dye can depend on several factors, including chemical agents, carbonization, microbial population, chemical groups and surface chemistry. The Langmuir and Freundlich models successfully described the batch adsorption data. Based on these observations, a cost-effective sludge-based catalyst can be produced from harmful sewage sludge for the treatment of industrial effluents.
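
A minimal sketch of fitting the first-order kinetic model reported above, C/C0 = exp(-k*tau), to decolorization data with scipy; the data points are made-up placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(tau, k):
    """First-order decay of the normalized dye concentration with space time tau (min)."""
    return np.exp(-k * tau)

# Hypothetical (space time, remaining fraction C/C0) pairs for one packed-bed run.
tau = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
frac = np.array([1.00, 0.63, 0.41, 0.16, 0.03])

k_fit, cov = curve_fit(first_order, tau, frac, p0=[1.0])
print(f"fitted k = {k_fit[0]:.2f} min^-1")   # same form as the 0.40-1.46 min^-1 constants reported
```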

Relevance: 100.00%

Abstract:

The joint modeling of longitudinal and survival data is a new approach in many applications such as HIV, cancer vaccine trials and quality of life studies. There have been recent methodological developments for each component of the joint model, as well as for the statistical processes that link them together. Among these, second-order polynomial random effect models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints of polynomial random effect models by using Dirichlet process priors and consider three longitudinal markers, rather than only one, in a single joint model. Second, we use a linear mixed effect model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the Primary Biliary Cirrhosis sequential data, collected from a clinical trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects of three longitudinal markers, (1) Total Serum Bilirubin, (2) Serum Albumin and (3) Serum Glutamic-Oxaloacetic Transaminase (SGOT), on patients' survival were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches. Based on the results, we conclude that the proposed modeling approaches yield a better fit to the data and give less biased parameter estimates for these trajectory functions than previous methods. Model fit is also improved after considering three longitudinal markers instead of only one. The results on the proportion of treatment effect from these joint models indicate the same conclusion as the final model of Fleming and Harrington (1991): Bilirubin and Albumin together have a stronger impact on predicting patients' survival and serve as surrogate endpoints for treatment.
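
As a minimal, stand-alone sketch of the second-order (quadratic) trajectory for a single marker, here fitted with statsmodels' linear mixed model on simulated data; the study embeds such trajectories, with Dirichlet process priors, in a full Bayesian joint model with the survival component, which is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulate a marker (e.g., log bilirubin) with subject-specific intercept and slope
# and a common quadratic term: y_ij = (b0 + u0_i) + (b1 + u1_i) * t + b2 * t^2 + eps.
subjects, times = 100, np.array([0.0, 0.5, 1.0, 2.0, 3.0])
records = []
for i in range(subjects):
    u0, u1 = rng.normal(0, 0.8), rng.normal(0, 0.3)
    for t in times:
        y = (1.0 + u0) + (0.5 + u1) * t - 0.05 * t**2 + rng.normal(0, 0.2)
        records.append({"id": i, "time": t, "y": y})
df = pd.DataFrame(records)

# Quadratic fixed trajectory with a random intercept and slope per subject.
model = smf.mixedlm("y ~ time + I(time**2)", df, groups=df["id"], re_formula="~time")
result = model.fit()
print(result.summary())
```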

Relevance: 100.00%

Abstract:

This is the seventeenth of a series of symposia devoted to talks by students about their biochemical engineering research. The first, third, fifth, ninth, twelfth, and sixteenth were at Kansas State University; the second and fourth were at the University of Nebraska-Lincoln; the sixth was in Kansas City and was hosted by Iowa State University; the seventh, tenth, thirteenth, and seventeenth were at Iowa State University; the eighth and fourteenth were at the University of Missouri–Columbia; and the eleventh and fifteenth were at Colorado State University. Next year's symposium will be at the University of Colorado. Symposium proceedings are edited by faculty of the host institution. Because final publication usually takes place elsewhere, papers here are brief and often cover work in progress.

Contents:
The Effect of Polymer Dosage Conditions on the Properties of Protein-Polyelectrolyte Precipitates, K. H. Clark and C. E. Glatz, Iowa State University
An Immobilized Enzyme Reactor/Separator for the Hydrolysis of Casein by Subtilisin Carlsberg, A. J. Bream, R. A. Yoshisato, and G. R. Carmichael, University of Iowa
Cell Density Measurements in Hollow Fiber Bioreactors, Thomas Blute, Colorado State University
The Hydrodynamics in an Air-Lift Reactor, Peter Sohn, George Y. Preckshot, and Rakesh K. Bajpai, University of Missouri–Columbia
Local Liquid Velocity Measurements in a Split Cylinder Airlift Column, G. Travis Jones, Kansas State University
Fluidized Bed Solid Substrate Trichoderma reesei Fermentation, S. Adisasmito, H. N. Karim, and R. P. Tengerdy, Colorado State University
The Effect of 2,4-D Concentration on the Growth of Streptanthus tortuosis Cells in Shake Flask and Air-Lift Fermenter Culture, I. C. Kong, R. D. Sjolund, and R. A. Yoshisato, University of Iowa
Protein Engineering of Aspergillus niger Glucoamylase, Michael R. Sierks, Iowa State University
Structured Kinetic Modeling of Hybridoma Growth and Monoclonal Antibody Production in Suspension Cultures, Brian C. Batt and Dhinakar S. Kompala, University of Colorado
Modelling and Control of a Zymomonas mobilis Fermentation, John F. Kramer, M. N. Karim, and J. Linden, Colorado State University
Modeling of Brettanomyces clausenii Fermentation on Mixtures of Glucose and Cellobiose, Max T. Bynum and Dhinakar S. Kompala, University of Colorado, and Karel Grohmann and Charles E. Wyman, Solar Energy Research Institute
Master Equation Modeling and Monte Carlo Simulation of Predator-Prey Interactions, R. O. Fox, Y. Y. Huang, and L. T. Fan, Kansas State University
Kinetics and Equilibria of Condensation Reactions Between Two Different Monosaccharides Catalyzed by Aspergillus niger Glucoamylase, Sabine Pestlin, Iowa State University
Biodegradation of Metalworking Fluids, S. M. Lee, Ayush Gupta, L. E. Erickson, and L. T. Fan, Kansas State University
Redox Potential, Toxicity and Oscillations in Solvent Fermentations, Kim Joong, Rakesh Bajpai, and Eugene L. Iannotti, University of Missouri–Columbia
Using Structured Kinetic Models for Analyzing Instability in Recombinant Bacterial Cultures, William E. Bentley and Dhinakar S. Kompala, University of Colorado

Relevance: 100.00%

Abstract:

New trace element and Sr-, Nd-, Pb- and Hf-isotope data provide insights into the evolution of the Tonga-Lau Basin subduction system. The involvement of two separate mantle domains, namely Pacific MORB mantle in the pre-rift and early stages of back-arc basin formation, and Indian MORB mantle in the later stages, is confirmed by these results. Contrary to models proposed in recent studies on the basis of Pb isotope and other compositional data, this change in mantle wedge character best explains the shift in the isotopic composition, particularly 143Nd/144Nd ratios, of modern Tofua Arc magmas relative to all other arc products from this region. Nevertheless, significant changes in the slab-derived flux during the evolution of the arc system are also required to explain second order variations in magma chemistry. In this region, the slab-derived flux is dominated by fluid; however, these fluids carry Pb with sediment-influenced isotopic signatures, indicating that their source is not restricted to the subducting altered mafic oceanic crust. This has been the case from the earliest magmatic activity in the arc (Eocene) until the present time, with the exception of two periods of magmatic activity recorded in samples from the Lau Islands. Both the Lau Volcanic Group and the Korobasaga Volcanic Group lavas preserve trace element and isotope evidence for a contribution from subducted sediment that was not transported as a fluid, but possibly in the form of a melt. This component shares similarities with the one influencing the chemistry of the northern Tofua Arc magmas, suggesting that some caution may be required in adopting constraints for the latter that depend upon the involvement of sediments from the Louisville Ridge. A key outcome of this study is to demonstrate that models proposed to explain subduction zone magmatism cannot afford to ignore the small but important contributions made by the mantle wedge to the incompatible trace element inventory of arc magmas.

Relevance: 100.00%

Abstract:

This paper presents the 2005 Miracle team's approach to the Ad-Hoc Information Retrieval tasks. The goal of this year's experiments was twofold: to continue testing the effect of combination approaches on information retrieval tasks, and to improve our basic processing and indexing tools, adapting them to new languages with unusual encoding schemes. The starting point was a set of basic components: stemming, transforming, filtering, proper noun extraction, paragraph extraction, and pseudo-relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. Second-order combinations were also tested, by averaging or selectively combining the documents retrieved by different approaches for a particular query. In the multilingual track, we concentrated our work on the process of merging the results of monolingual runs to obtain the overall multilingual result, relying on available translations. In both cross-lingual tracks, we used the available translation resources, and in some cases we used a combination approach.
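
A minimal sketch of the score-averaging flavour of the second-order combination described above (CombSUM-style fusion of min-max normalized scores); the run dictionaries and document identifiers are hypothetical.

```python
from collections import defaultdict

def fuse_by_average(runs):
    """Average min-max-normalized scores across runs for each document.

    runs : list of dicts mapping doc_id -> retrieval score (one dict per approach)
    Returns a ranked list of (doc_id, fused_score).
    """
    fused = defaultdict(float)
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0
        for doc, score in run.items():
            fused[doc] += (score - lo) / span
    n = len(runs)
    return sorted(((d, s / n) for d, s in fused.items()), key=lambda x: -x[1])

# Hypothetical results of two indexing/query-processing approaches for one query.
run_stem = {"d1": 7.2, "d2": 5.1, "d3": 2.0}
run_pseudo_rf = {"d2": 9.5, "d4": 6.3, "d1": 4.0}
print(fuse_by_average([run_stem, run_pseudo_rf]))
```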

Relevance: 100.00%

Abstract:

The Internal Structure of Hydrogen-Air Diffusion Flames. The purpose of this paper is to study finite rate chemistry effects in diffusion-controlled hydrogen-air flames under conditions appearing in some cases in a supersonic combustor. Since for large reaction rates the flame is close to chemical equilibrium, the reaction takes place in a very thin region, so that a "singular perturbation" treatment of the problem seems appropriate. It has been shown previously that, within the inner or reaction zone, convection effects may be neglected, the temperature is constant across the flame, and the mass fraction distributions are given by ordinary differential equations, where the only independent variable involved is the coordinate normal to the flame surface. The solution of the outer problem, which is a pure mixing problem with the additional condition that fuel and oxidizer do not coexist in any zone, provides the following information: the flame position, rates of fuel consumption, temperature, concentrations of species, fluid velocity outside of the flame, and the boundary conditions required to solve the "inner problem." The main contribution of this paper consists in the introduction of a fairly complicated chemical kinetic scheme representing the hydrogen-oxygen reaction. The nonlinear equations expressing the conservation of chemical species are approximately integrated by means of an integral method. It has been found that, in the case considered of a near-equilibrium diffusion flame, the role played by the dissociation-recombination reactions is purely marginal, and that some of the second order "shuffling" reactions are close to equilibrium. The method shown here may be applied to compute the distance from the injector corresponding to a given separation from equilibrium, say ten to twenty percent. For the cases where this length is a small fraction of the combustion zone length, the equilibrium treatment describes the flame behavior properly.
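
To make the outer ("pure mixing") problem concrete, the standard conserved-scalar way of writing it, under the usual assumptions of equal diffusivities and infinitely fast chemistry; this is the textbook Burke-Schumann formulation rather than the exact notation of the paper (s denotes the mass of oxygen consumed per unit mass of fuel).

```latex
% Coupling function / mixture fraction: the reaction terms cancel in s*Y_F - Y_O2,
% so Z obeys a source-free convection-diffusion equation (the outer mixing problem).
\[
  Z = \frac{s\,Y_F - Y_{O_2} + Y_{O_2,\mathrm{air}}}{s\,Y_{F,\mathrm{fuel}} + Y_{O_2,\mathrm{air}}},
  \qquad
  \rho\,\mathbf{v}\cdot\nabla Z = \nabla\cdot\left(\rho D\,\nabla Z\right).
\]
% Fuel and oxidizer do not coexist, so the flame sheet sits where both vanish,
\[
  Z_{\mathrm{st}} = \frac{Y_{O_2,\mathrm{air}}}{s\,Y_{F,\mathrm{fuel}} + Y_{O_2,\mathrm{air}}},
\]
% which fixes the flame position and, via the gradients of Z there, the fuel
% consumption rate and the boundary conditions for the inner (reaction-zone) problem.
```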

Relevance: 100.00%

Abstract:

Stereo video techniques are effective for estimating the space–time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set to provide a global space–time imaging of the sea surface, viz. the simultaneous reconstruction of several spatial snapshots of the surface, in order to guarantee continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics at a point that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second order nonlinearities, and the observed shape of large waves is fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space–time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space–time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.
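
A minimal sketch of the triangulation step behind the snapshot-by-snapshot epipolar reconstruction: linear (DLT) triangulation of one sea-surface point from a matched pixel pair, given two calibrated camera projection matrices; the camera parameters and the test point are hypothetical, and the actual WASS pipeline adds calibration, rectification and dense matching that are not shown.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3-D point.

    P1, P2 : 3x4 camera projection matrices of the calibrated stereo pair
    x1, x2 : matched pixel coordinates (u, v) in the two views
    Returns the 3-D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated pair: reference camera and a second camera shifted 2.5 m along x.
K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-2.5], [0.0], [0.0]])])

X_true = np.array([3.0, 1.0, 40.0, 1.0])      # a sea-surface point about 40 m away
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))            # recovers approximately (3.0, 1.0, 40.0)
```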

Relevance: 100.00%

Abstract:

A mathematical model for the group combustion of pulverized coal particles was developed in a previous work. It includes the Lagrangian description of the dehumidification, devolatilization and char gasification reactions of the coal particles in the homogenized gaseous environment resulting from the three fuels, CO, H2 and volatiles, supplied by the gasification of the particles and their simultaneous group combustion by the gas phase oxidation reactions, which are considered to be very fast. This model is complemented here with an analysis of the particle dynamics, determined principally by the effects of aerodynamic drag and gravity, and of particle dispersion based on a stochastic model. It is also extended to include two other simpler models for the gasification of the particles: the first for particles small enough to extinguish the surrounding diffusion flames, and the second for particles with small ash content, in which the porous shell of ash remaining after gasification of the char, being structurally unstable, is disrupted. As an example of the applicability of the models, they are used in the numerical simulation of an experiment on a non-swirling pulverized coal jet discharging into nearly stagnant air at ambient temperature, with an initial region of interaction with a small annular methane flame. Computational algorithms for solving the different stages undergone by a coal particle during its combustion are proposed. For the partial differential equations modeling the gas phase, a second order finite element method combined with a semi-Lagrangian characteristics method is used. The results obtained with the three versions of the model are compared among themselves and show that the first of the simpler models better fits the experimental results.
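
A minimal sketch of the Lagrangian particle-dynamics component: Stokes-type aerodynamic drag plus gravity, with the stochastic dispersion represented by a simple discrete-random-walk resampling of the gas velocity seen by the particle; all parameter values and the uniform gas velocity are illustrative and not those of the simulated coal jet.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters for one coal particle in a gas stream.
dp, rho_p = 60e-6, 1300.0            # particle diameter (m) and density (kg/m^3)
mu_g, rho_g = 1.8e-5, 1.2            # gas viscosity (Pa s) and density (kg/m^3)
g = np.array([0.0, -9.81, 0.0])      # gravity (m/s^2)
u_mean = np.array([5.0, 0.0, 0.0])   # mean gas velocity (m/s)
sigma_u = 0.4                        # rms turbulent velocity fluctuation (m/s)
tau_p = rho_p * dp**2 / (18.0 * mu_g)    # Stokes relaxation time (s)

x = np.zeros(3)                      # particle position
v = np.zeros(3)                      # particle velocity
dt, t_eddy = 1e-4, 2e-3              # time step and eddy interaction time (s)
u_seen = u_mean + sigma_u * rng.standard_normal(3)

for step in range(20000):            # 2 s of flight
    if step % int(t_eddy / dt) == 0:
        # new turbulent eddy: resample the fluctuating gas velocity seen by the particle
        u_seen = u_mean + sigma_u * rng.standard_normal(3)
    drag = (u_seen - v) / tau_p                  # Stokes drag acceleration
    buoyant_g = g * (1.0 - rho_g / rho_p)        # gravity with buoyancy correction
    v = v + dt * (drag + buoyant_g)
    x = x + dt * v

print(x, v)                          # final position (m) and velocity (m/s)
```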

Relevance: 100.00%

Abstract:

In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools to generate data on expected yields for risk assessment when no historical records are available. However, selecting a crop model for a specific objective, location and implementation scale is a difficult task. A look inside the different crop and soil modules, to understand how outputs are obtained, might facilitate model choice. The objectives of this paper were (i) to assess the usefulness of crop models within crop insurance analysis and design and (ii) to select the most suitable crop model for drought risk assessment in semi-arid regions of Spain. For that purpose, first, a pre-selection of crop models simulating wheat yield under rainfed growing conditions at the field scale was made; second, four selected models (Aquacrop, CERES-Wheat, CropSyst and WOFOST) were compared in terms of modelling approaches, process descriptions and model outputs. Outputs of the four models for the simulation of winter wheat growth are comparable when water is not limiting, but differences are larger when simulating yields under rainfed conditions. These differences in rainfed yields are mainly related to the dissimilar simulated soil water availability and the assumed linkages with dry matter formation. We concluded that for the simulation of winter wheat growth at field scale in such semi-arid conditions, CERES-Wheat and CropSyst are preferred. WOFOST is a satisfactory compromise between data availability and complexity when detailed data on soil are limited. Aquacrop integrates physiological processes into a few representative parameters, thus diminishing the number of input parameters, which is seen as an advantage when observed data are scarce. However, the high sensitivity of this model to low water availability limits its use in the region considered. Rather than using ensembles of crop models, we endorse concentrating efforts on selecting or rebuilding a model that includes approaches that better describe the agronomic conditions of the regions in which it will be applied. The use of methodologies as complex as crop models is associated with numerous sources of uncertainty, although these models are the best tools available to gain insight into these complex agronomic systems.