Abstract:
When an asphalt mixture is subjected to a destructive compressive load, it passes through three deformation stages: (1) primary, (2) secondary, and (3) tertiary. Most published research focuses on plastic deformation in the primary and secondary stages, such as prediction of the flow number, which is in fact the initiation of the tertiary stage. However, little research effort has been reported on the mechanistic modeling of the damage that occurs in the tertiary stage. The main objective of this paper is to provide a mechanistic characterization method for the damage modeling of asphalt mixtures in the tertiary stage. The writers' preliminary study illustrates that deformation during the tertiary flow of asphalt mixtures is principally caused by the formation and propagation of cracks, as signaled by the increase of the phase angle in the tertiary stage. The strain caused by the growth of cracks is the viscofracture strain, which can be obtained by decomposing the total strain measured in the destructive compressive test. The viscofracture strain is employed in the research reported in this paper to mechanistically characterize the time-dependent fracture (viscofracture) of asphalt mixtures in compression. By using the dissipated pseudostrain energy balance principle, the damage density and true stress are determined, and both are demonstrated to increase with load cycles in the tertiary stage. The increased true stress yields extra viscoplastic strain, which explains why permanent deformation is accelerated by the occurrence of cracks. To characterize the evolution of viscofracture in asphalt mixtures in compression, a pseudo J-integral Paris' law in terms of damage density is proposed and its material constants are determined, which can be employed to predict the fracture of asphalt mixtures in compression. © 2013 American Society of Civil Engineers.
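The abstract does not reproduce the proposed law itself; as a hedged sketch, the generic Paris-law form in terms of damage density ξ and load cycles N would read

```latex
\frac{\mathrm{d}\xi}{\mathrm{d}N} \;=\; A\,\bigl(\Delta J_R\bigr)^{n}
```

where ΔJ_R is the pseudo J-integral per cycle and A, n are the material constants the authors determine; the published formulation may differ in detail.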
Abstract:
A novel mechanistic model for the saccharification of cellulose and hemicellulose is used to predict the products of hydrolysis over a range of enzyme loadings and times. The model considers the morphology of the substrate and the kinetics of the enzymes in order to optimize enzyme concentrations for the simultaneous enzymatic hydrolysis of cellulose and hemicellulose. Substrates are modeled based on their fraction of accessible sites, glucan content, xylan content, and degrees of polymerization. The enzyme optimization model accounts for the kinetics of six core enzymes for lignocellulose hydrolysis: endoglucanase I (EG1), cellobiohydrolase I (CBH1), cellobiohydrolase II (CBH2), and endo-xylanase (EX) from Trichoderma reesei; and β-glucosidase (BG) and β-xylosidase (BX) from Aspergillus niger. The model employs the synergistic action of these enzymes to predict optimum enzyme concentrations for the hydrolysis of Avicel and ammonia fiber explosion (AFEX) pretreated corn stover. Predictions of glucan, glucan + xylan, glucose, and glucose + xylose conversion are given over ranges of enzyme mass fractions and enzyme loadings. Simulation results are compared with optimizations from statistically designed experiments. BG and BX are modeled in solution at later time points to predict their effect on glucose and xylose conversion.
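As an illustration of the optimization step described above, the sketch below searches over the six enzyme mass fractions at a fixed total loading. The surrogate conversion function, its coefficients, and the synergy term are hypothetical placeholders standing in for the paper's mechanistic kinetics:

```python
# Sketch: optimizing enzyme mass fractions at a fixed total loading.
# The conversion surrogate below is a hypothetical stand-in -- the paper's
# mechanistic model (substrate morphology + enzyme kinetics) is far richer.
import numpy as np
from scipy.optimize import minimize

ENZYMES = ["EG1", "CBH1", "CBH2", "EX", "BG", "BX"]

def predicted_conversion(fractions, loading_mg_g=30.0):
    """Hypothetical surrogate: saturating response per enzyme plus a
    simple EG1/CBH synergy term; NOT the published kinetic model."""
    doses = fractions * loading_mg_g
    base = np.sum(1.0 - np.exp(-0.15 * doses)) / len(doses)
    synergy = 0.2 * np.sqrt(doses[0] * (doses[1] + doses[2])) / loading_mg_g
    return base + synergy

res = minimize(
    lambda f: -predicted_conversion(np.asarray(f)),
    x0=np.full(6, 1.0 / 6.0),
    bounds=[(0.0, 1.0)] * 6,
    constraints=[{"type": "eq", "fun": lambda f: np.sum(f) - 1.0}],
    method="SLSQP",
)
print(dict(zip(ENZYMES, np.round(res.x, 3))))
```

The simplex constraint (fractions summing to one) mirrors how mixture designs are typically compared against statistically designed experiments.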
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., "Newtonian cooling") is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres. This nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
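In equation form, the diagnosed relaxation amounts to the following (a minimal sketch consistent with the abstract; overbars denote the climatological mean):

```latex
Q_{\mathrm{LW}} \;\approx\; \overline{Q} \;-\; \alpha(z,\phi,t)\,\bigl(T-\overline{T}\bigr) \;-\; \beta\,\bigl(T-\overline{T}\bigr)^{2}
```

with the damping rate α varying with height, latitude, and season, and the quadratic β term needed only near the polar vortex edges.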
Abstract:
We present a mechanistic modeling methodology to predict both the percolation threshold and the effective conductivity of infiltrated solid oxide fuel cell (SOFC) electrodes. The model has been developed to mirror each step of the experimental fabrication process. The primary model output is the infiltrated electrode's effective conductivity, which is provided over a range of infiltrate loadings and is independent of the chosen electronically conducting material. The percolation threshold, directly related to the effective conductivity, is used as a single output value for comparing a wide range of input choices. The predictive capability of the model is demonstrated by favorable comparison with two separate published experimental studies, one using strontium molybdate and one using La0.8Sr0.2FeO3-δ as the infiltrate material. Effective conductivities and percolation thresholds are shown for varied infiltrate particle size, pore size, and porosity, with infiltrate particle size having the largest impact on the results.
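For orientation, classical percolation theory relates effective conductivity to loading above the threshold by a power law; this generic scaling is textbook background, not the paper's microstructural model:

```latex
\sigma_{\mathrm{eff}} \;\propto\; \bigl(\varphi-\varphi_c\bigr)^{t}, \qquad \varphi>\varphi_c,\quad t\approx 2 \text{ in 3D}
```

where φ is the infiltrate volume fraction and φ_c the percolation threshold.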
Abstract:
RNA viruses are an important cause of global morbidity and mortality. The rapid evolutionary rates of RNA virus pathogens, caused by high replication rates and error-prone polymerases, can make the pathogens difficult to control. RNA viruses can undergo immune escape within their hosts and develop resistance to the treatments and vaccines we design to fight them. Understanding the spread and evolution of RNA pathogens is essential for reducing human suffering. In this dissertation, I make use of the rapid evolutionary rate of viral pathogens to answer several questions about how RNA viruses spread and evolve. To address each of the questions, I link mathematical techniques for modeling viral population dynamics with phylogenetic and coalescent techniques for analyzing and modeling viral genetic sequences and evolution. The first project uses multi-scale mechanistic modeling to show that decreases in viral substitution rates over the course of an acute infection, combined with the timing of transmission from infectious hosts to susceptible individuals, can account for discrepancies in viral substitution rates between different host populations. The second project combines coalescent models with within-host mathematical models to identify driving evolutionary forces in chronic hepatitis C virus infection. The third project compares the effects of intrinsic and extrinsic viral transmission rate variation on viral phylogenies.
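For readers unfamiliar with the within-host side of such multi-scale work, the sketch below integrates the standard target-cell-limited model often coupled to phylodynamic and coalescent analyses. The parameter values are illustrative only, not those fitted in the dissertation:

```python
# Sketch: the standard target-cell-limited within-host model; parameters
# are illustrative (influenza-like magnitudes), not the dissertation's fits.
import numpy as np
from scipy.integrate import solve_ivp

def acute_infection(t, y, beta=2.7e-5, delta=3.0, p=1.2e-2, c=3.0):
    T, I, V = y                        # target cells, infected cells, virions
    return [-beta * T * V,             # dT/dt: infection of target cells
            beta * T * V - delta * I,  # dI/dt: gain minus cell death
            p * I - c * V]             # dV/dt: production minus clearance

sol = solve_ivp(acute_infection, (0.0, 14.0), [4e8, 0.0, 1.0],
                t_eval=np.linspace(0.0, 14.0, 141))
print(f"peak viral load ~ {sol.y[2].max():.3g} "
      f"at day {sol.t[sol.y[2].argmax()]:.1f}")
```

In multi-scale analyses, trajectories like V(t) determine when hosts are infectious and how much within-host diversity is available to be transmitted.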
Abstract:
Cementitious stabilization of aggregates and soils is an effective technique for increasing the stiffness of base and subbase layers. Furthermore, cementitious bases can improve the fatigue behavior of asphalt surface layers and reduce subgrade rutting over the short and long term. However, stabilization can lead to additional distresses, such as shrinkage and fatigue, in the stabilized layers. Extensive research has tested and characterized these materials experimentally; however, very little of it attempts to correlate the mechanical properties of the stabilized layers with their performance. The Mechanistic-Empirical Pavement Design Guide (MEPDG) provides a promising theoretical framework for modeling pavements containing cementitiously stabilized materials (CSMs). However, significant improvements are needed to bring the modeling of semirigid pavements in the MEPDG to the same level as that of flexible and rigid pavements. Furthermore, the MEPDG does not model CSMs in a manner similar to that for hot-mix asphalt or portland cement concrete materials. As a result, performance gains from stabilized layers are difficult to assess using the MEPDG. The current characterization of CSMs was evaluated, and issues with CSM modeling and characterization in the MEPDG were discussed. Addressing these issues will help designers quantify the benefits of stabilization for pavement service life.
Abstract:
The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol, and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were then estimated from the experimental results. The adjusted model was used to analyze the impact of the initial concentrations and feed rates of the reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the cost of treating phenol-contaminated wastewater to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
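To give a feel for what such a kinetic model integrates, the sketch below runs a heavily reduced Fenton scheme in semi-batch operation with continuous H2O2 dosing. The published model has 53 reactions and 26 compounds; the rate constants here are order-of-magnitude literature values, used purely for illustration:

```python
# Sketch: a reduced Fenton scheme in semi-batch operation (continuous H2O2
# feed). Rate constants are order-of-magnitude literature values, not the
# paper's estimated parameters.
import numpy as np
from scipy.integrate import solve_ivp

K1, K2, K3, K4 = 63.0, 0.01, 6.6e9, 2.7e7   # M^-1 s^-1, illustrative
FEED_H2O2 = 1e-5                             # M/s, semi-batch dosing rate

def fenton(t, y):
    fe2, fe3, h2o2, oh, phenol = y
    r1 = K1 * fe2 * h2o2      # Fe2+ + H2O2 -> Fe3+ + OH.
    r2 = K2 * fe3 * h2o2      # Fe3+ + H2O2 -> Fe2+ + HO2.
    r3 = K3 * oh * phenol     # OH. + phenol -> oxidation products
    r4 = K4 * oh * h2o2       # OH. + H2O2  -> HO2. + H2O (scavenging)
    return [-r1 + r2, r1 - r2, -r1 - r2 - r4 + FEED_H2O2, r1 - r3 - r4, -r3]

y0 = [1e-3, 0.0, 0.0, 0.0, 1e-3]             # initial molar concentrations
sol = solve_ivp(fenton, (0.0, 3600.0), y0, method="LSODA")  # stiff system
print(f"phenol remaining after 1 h: {sol.y[4][-1] / y0[4]:.1%}")
```

The competition between phenol oxidation (r3) and radical scavenging by excess H2O2 (r4) is exactly the kind of trade-off that motivates the paper's analysis of reactant concentrations and feed rates.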
Abstract:
Models of codon evolution have attracted particular interest because of their unique ability to detect selective forces and their high fit when applied to sequence evolution. We describe here a novel approach for modeling codon evolution based on the Kronecker product of matrices. The 61 × 61 codon substitution rate matrix is created using the Kronecker product of three 4 × 4 nucleotide substitution matrices, the equilibrium frequencies of the codons, and the selection rate parameter. The entries of the nucleotide substitution matrices and the selection rate are treated as parameters of the model and are optimized by maximum likelihood. Our fully mechanistic model allows the instantaneous substitution matrix between codons to be fully estimated with only 19 parameters instead of 3,721, by exploiting the biological interdependence between positions within a codon. We illustrate the properties of our model using computer simulations and assess its relevance by comparing the AICc measures of our model and other models of codon evolution on simulations and a large range of empirical data sets. We show that our model fits most biological data better than current codon models. Furthermore, the parameters of our model can be interpreted in a similar way to the exchangeability rates found in empirical codon models.
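The sketch below shows one standard way to assemble a 61 × 61 codon rate matrix from 4 × 4 nucleotide matrices. Note the hedge: it uses the Kronecker-sum form (which combines per-position rates and allows only single-nucleotide changes), together with a selection multiplier on nonsynonymous changes; the paper's exact Kronecker-product construction with codon frequencies may differ:

```python
# Sketch: a 61x61 codon rate matrix from 4x4 nucleotide matrices via the
# Kronecker sum -- an assumption, not necessarily the paper's construction.
import numpy as np

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]

def nuc_matrix(kappa=2.0):
    """Simple HKY-like 4x4 rate matrix (T, C, A, G order)."""
    m = np.ones((4, 4))
    for i, j in [(0, 1), (1, 0), (2, 3), (3, 2)]:   # transitions
        m[i, j] = kappa
    np.fill_diagonal(m, 0.0)
    return m

def codon_rate_matrix(omega=0.3, kappa=2.0):
    m, eye = nuc_matrix(kappa), np.eye(4)
    # Kronecker sum: one position changes per event, others held fixed.
    q64 = (np.kron(np.kron(m, eye), eye)
           + np.kron(np.kron(eye, m), eye)
           + np.kron(np.kron(eye, eye), m))
    sense = [i for i, aa in enumerate(AA) if aa != "*"]  # drop 3 stop codons
    q = q64[np.ix_(sense, sense)]
    aa = [AA[i] for i in sense]
    for i in range(61):
        for j in range(61):
            if i != j and aa[i] != aa[j]:
                q[i, j] *= omega            # penalize nonsynonymous changes
    np.fill_diagonal(q, 0.0)
    np.fill_diagonal(q, -q.sum(axis=1))     # rows sum to zero
    return q, [CODONS[i] for i in sense]

Q, sense_codons = codon_rate_matrix()
print(Q.shape)                              # (61, 61)
```

The parameter count stays small for the same reason as in the paper: the codon matrix is generated from nucleotide-level matrices and a selection parameter rather than estimated entry by entry.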
Abstract:
Nuclear receptors are a major component of signal transduction in animals. They mediate the regulatory activities of many hormones, nutrients and metabolites on the homeostasis and physiology of cells and tissues. It is of high interest to model the corresponding regulatory networks. While molecular and cell biology studies of individual promoters have provided important mechanistic insight, a more complex picture is emerging from genome-wide studies. The regulatory circuitry of nuclear receptor regulated gene expression networks, and their response to cellular signaling, appear highly dynamic, and involve long as well as short range chromatin interactions. We review how progress in understanding the kinetics and regulation of cofactor recruitment, and the development of new genomic methods, provide opportunities but also a major challenge for modeling nuclear receptor mediated regulatory networks.
Abstract:
Pluripotency in human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) is regulated by three transcription factors: OCT3/4, SOX2, and NANOG. To fully exploit the therapeutic potential of these cells, it is essential to have a good mechanistic understanding of the maintenance of self-renewal and pluripotency. In this study, we demonstrate a powerful systems biology approach in which we first expand a literature-based network encompassing the core regulators of pluripotency by assessing the behavior of genes targeted in perturbation experiments. We focused our attention on highly regulated genes encoding cell surface and secreted proteins, as these can be more easily manipulated with inhibitors or recombinant proteins. Qualitative modeling, combining Boolean networks and in silico perturbation experiments, was employed to identify novel pluripotency-regulating genes. We validated interleukin-11 (IL-11) and demonstrated that this cytokine is a novel pluripotency-associated factor capable of supporting self-renewal in the absence of exogenously added bFGF in culture. To date, the various protocols for hESC maintenance require supplementation with bFGF to activate the Activin/Nodal branch of the TGFβ signaling pathway. Additional evidence supporting our findings is that IL-11 belongs to the same protein family as LIF, which is known to be necessary for maintaining pluripotency in mouse but not in human ESCs. These cytokines operate through the same gp130 receptor, which interacts with Janus kinases. Our finding might explain why mESCs are in a more naïve cell state than hESCs and how primed hESCs might be converted back to the naïve state. Taken together, our integrative modeling approach has identified novel candidate genes to be incorporated into the expansion of the current gene regulatory network responsible for inducing and maintaining pluripotency.
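To make the qualitative modeling step concrete, here is a toy Boolean network with synchronous updates, an external cytokine-like signal, and an in silico knockout. The update rules and the IL-11-like input are illustrative guesses, not the study's curated network:

```python
# Sketch: a toy Boolean network in the spirit of the qualitative modeling
# described above; rules and the IL-11-like signal are illustrative only.

def step(state, signal=True):
    oct4, sox2, nanog = state
    return (
        sox2 and nanog,                 # OCT4 sustained by SOX2 and NANOG
        oct4 and signal,                # SOX2 requires the external signal
        (oct4 and sox2) or signal,      # NANOG integrates both inputs
    )

def attractor(state, signal=True, knockout=None):
    """Synchronous updates until a repeated state; returns the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, signal=signal)
        if knockout is not None:        # in silico knockout perturbation
            state = tuple(False if i == knockout else s
                          for i, s in enumerate(state))
    return seen[seen.index(state):]

start = (True, True, True)              # OCT4, SOX2, NANOG all on
print("signal on: ", attractor(start, signal=True))
print("signal off:", attractor(start, signal=False))
print("NANOG KO:  ", attractor(start, signal=True, knockout=2))
```

In this toy, withdrawing the signal collapses the network to the all-off attractor, mimicking how in silico perturbations are screened for factors whose removal destabilizes the pluripotent state.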
Abstract:
How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model's prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information.
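The entropy argument can be made concrete under a Gaussian approximation: for N signals with covariance matrix Σ, the differential entropy is

```latex
H \;=\; \tfrac{1}{2}\,\ln\!\bigl[(2\pi e)^{N}\,\det\Sigma\bigr]
```

so decreased variances and increased covariances both shrink det Σ and hence H, consistent with the reduction reported for task-driven activity. (This is the standard differential-entropy formula, offered as a reading aid rather than the paper's exact estimator.)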
Abstract:
This study aimed to verify differences in radiation intensity as a function of distinct relief exposure surfaces and to quantify their effects on the leaf area index (LAI) and other variables expressing eucalyptus forest productivity, for simulations in a process-based growth model. The study was carried out at two edaphoclimatically contrasting locations in the Rio Doce basin in Minas Gerais, Brazil. Two stands with 32-year-old plantations were used, with fixed plots allocated in locations with north- and south-facing exposure surfaces. Meteorological data were obtained from two automated weather stations located near the study sites. Solar radiation was corrected for terrain inclination and exposure surface, since it is measured on a horizontal plane, perpendicular to the local vertical. LAI values collected in the field were used. For the comparative simulations of productivity variation, the mechanistic 3PG model was used, taking the relief exposure surfaces into account. During most of the year, the south-facing surfaces showed lower availability of incident solar radiation, with losses of up to 66% compared with the same surface treated as flat, probably related to their geographical location and steeper slopes. Higher values of LAI, volume, and mean annual wood increment were obtained for the plantings located on the north-facing surface, and this tendency was repeated in the 3PG model simulations.
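The slope-aspect correction mentioned above is commonly computed from the beam incidence angle on the tilted surface; a standard form (the study's exact procedure may differ) is

```latex
\cos\theta_i \;=\; \cos\beta\,\cos\theta_z \;+\; \sin\beta\,\sin\theta_z\,\cos\bigl(\gamma_s-\gamma\bigr)
```

where β is the slope inclination, θ_z the solar zenith angle, γ_s the solar azimuth, and γ the surface aspect; the direct-beam irradiance measured on a horizontal plane is then rescaled by cos θ_i / cos θ_z.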
Abstract:
Knowledge of slug flow characteristics is very important when designing pipelines and process equipment. When the intermittency typical of slug flow occurs, the fluctuations of the flow variables bring additional concern to the designer. Focusing on this subject, the present work discloses experimental data on slug flow characteristics obtained in a large-size, large-scale facility. The results were compared with data provided by mechanistic slug flow models in order to verify their reliability in modeling actual flow conditions. Experiments were done with natural gas and either oil or water as the liquid phase. To compute the frequency and velocity of the slug cell and to calculate the lengths of the elongated bubble and the liquid slug, two pressure transducers measuring the pressure drop across the pipe diameter at different axial locations were used. A third pressure transducer measured the pressure drop between two axial locations 200 m apart. The experimental data were compared with results of Camargo's algorithm (1991, 1993), which builds on Dukler & Hubbard's (1975) slug flow model, and with those calculated by the transient two-phase flow simulator OLGA.
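One common way to exploit axially separated transducers is to cross-correlate the two pressure traces and convert the time lag of the peak into a slug translational velocity. The sketch below uses a synthetic signal and an assumed transducer spacing; the facility's actual processing may differ:

```python
# Sketch: slug translational velocity from two pressure traces by
# cross-correlation; signal and spacing are synthetic/illustrative.
import numpy as np

FS = 100.0          # sampling rate, Hz
SPACING = 2.0       # assumed axial distance between transducers, m

rng = np.random.default_rng(0)
t = np.arange(0, 60.0, 1.0 / FS)
slug = (np.sin(2 * np.pi * 0.4 * t) > 0.7).astype(float)   # passing slugs
p1 = slug + 0.1 * rng.standard_normal(t.size)
delay = int(0.5 * FS)                                       # 0.5 s transit
p2 = np.roll(slug, delay) + 0.1 * rng.standard_normal(t.size)

xcorr = np.correlate(p2 - p2.mean(), p1 - p1.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)                       # samples
velocity = SPACING / (lag / FS)
print(f"estimated slug translational velocity: {velocity:.2f} m/s")
```

The slug frequency follows from counting threshold crossings of one trace, and slug/bubble lengths from multiplying the translational velocity by the corresponding residence times.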
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex-upward form. In the remainder of the published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time-delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and the z value (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates change as a function of the square root of time would be consistent with a diffusion-limited process.
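The stated rate law integrates directly to a square-root-of-time survival curve; a minimal derivation, with k the (pressure-dependent) rate parameter:

```latex
\frac{\mathrm{d}N}{\mathrm{d}t} \;=\; -\frac{k}{\sqrt{t}}\,N
\quad\Longrightarrow\quad
\log_{10}\frac{N(t)}{N_0} \;=\; -\frac{2k}{\ln 10}\,\sqrt{t}
```

which produces the convex-upward curves described above and, by analogy with D in thermal processing, a time to one decimal reduction of t_D = (ln 10 / 2k)².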