853 results for Reduced physical models
Abstract:
The peroxisome proliferator-activated receptors (PPARs) are lipid-sensing transcription factors that have a role in embryonic development, but are primarily known for modulating energy metabolism, lipid storage, and transport, as well as inflammation and wound healing. Currently, there is no consensus as to the overall combined function of PPARs and why they evolved. We hypothesize that the PPARs had to evolve to integrate lipid storage and burning with the ability to reduce oxidative stress, as energy storage is essential for survival and resistance to injury/infection, but the latter increases oxidative stress and may reduce median survival (functional longevity). In a sense, PPARs may be an evolutionary solution to something we call the 'hypoxia-lipid' conundrum, where the ability to store and burn fat is essential for survival, but is a 'double-edged sword', as fats are potentially highly toxic. Ways in which PPARs may reduce oxidative stress involve modulation of mitochondrial uncoupling protein (UCP) expression (thus reducing reactive oxygen species, ROS), optimising forkhead box class O factor (FOXO) activity (by improving whole-body insulin sensitivity) and suppressing NF-κB (at the transcriptional level). In light of this, we therefore postulate that inflammation-induced PPAR downregulation engenders many of the signs and symptoms of the metabolic syndrome, which shares many features with the acute phase response (APR) and is the opposite of the phenotype associated with calorie restriction and high FOXO activity. In genetically susceptible individuals (displaying the naturally mildly insulin-resistant 'thrifty genotype'), suboptimal PPAR activity may follow an exaggerated but natural adipose tissue-related inflammatory signal induced by excessive calories and reduced physical activity, which normally couples energy storage with the ability to mount an immune response. This is further worsened when pancreatic decompensation occurs, resulting in gluco-oxidative stress and lipotoxicity, increased inflammatory insulin resistance and oxidative stress. Reactivating PPARs may restore a metabolic balance and help to adapt the phenotype to a modern lifestyle.
Abstract:
We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel K(m1, m2) = max{m1, m2}^ν, which gels instantaneously when M → ∞ if ν > 1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power-law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν − 1) log M]^(−1/2) when M → ∞. The approach to the stationary state is nontrivial: oscillations about the stationary state, which decay very slowly when M is large, emerge from the interplay between the monomer injection and the cutoff M. A quantitative analysis of these oscillations is provided for the addition model, which describes the situation in which clusters can only grow by absorbing monomers.
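As a rough illustration of the regularized setup described above, the sketch below integrates the discrete Smoluchowski equations with the kernel K(i, j) = max{i, j}^ν, a monomer source, and removal of any cluster heavier than the cutoff M. All parameter values (M, ν, the injection rate J, and the integration horizon) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the paper's values):
M = 100          # cutoff mass: clusters heavier than M are removed from the system
nu = 1.5         # kernel exponent, K(i, j) = max(i, j)**nu (instantaneously gelling for nu > 1)
J = 1.0          # monomer injection rate

masses = np.arange(1, M + 1)
K = np.maximum.outer(masses, masses).astype(float) ** nu

def rhs(t, c):
    """Regularized Smoluchowski equations with a monomer source; c[k-1] is the density of k-clusters."""
    loss = c * (K @ c)                      # every collision removes the colliding clusters
    gain = np.zeros_like(c)
    for k in range(2, M + 1):               # gain of k-clusters only for k <= M (the regularization)
        i = np.arange(1, k)
        gain[k - 1] = 0.5 * np.sum(K[i - 1, k - i - 1] * c[i - 1] * c[k - i - 1])
    dc = gain - loss
    dc[0] += J                              # monomer source
    return dc

c0 = np.zeros(M)                            # start empty; mass enters only through the source
sol = solve_ivp(rhs, (0.0, 40.0), c0, method="LSODA", rtol=1e-6, atol=1e-10)
print("total particle density at final time:", sol.y[:, -1].sum())
```

With the source term, the total density should settle toward a stationary value; for large M the approach is through slowly decaying oscillations, as described in the abstract.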
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
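To make the deterministic-plus-stochastic-plus-memory idea concrete, here is a minimal toy sketch (not taken from any of the models surveyed above): a single resolved variable driven by a double-well drift, with the unresolved tendency closed by an Ornstein-Uhlenbeck (red-noise) term that supplies both stochasticity and memory. Every equation and parameter here is an illustrative assumption.

```python
import numpy as np

# Toy reduced-order model with a stochastic subgrid closure (illustrative only):
# the resolved variable x obeys dx/dt = f(x) + e, where the unresolved tendency e
# is modelled as damping plus red (AR(1)) noise -- a simple instance of the
# deterministic + stochastic + memory decomposition mentioned in the abstract.
rng = np.random.default_rng(0)

dt, n_steps = 0.01, 20000
a, sigma, tau = 1.0, 0.3, 0.5      # assumed drift strength, noise amplitude, memory time scale

x, e = 0.0, 0.0
traj = np.empty(n_steps)
for n in range(n_steps):
    f = a * x - x**3               # double-well drift standing in for the resolved large-scale tendency
    x += dt * (f + e)
    # subgrid term: Ornstein-Uhlenbeck process (discrete AR(1)), providing memory and stochasticity
    e += dt * (-e / tau) + sigma * np.sqrt(dt) * rng.standard_normal()
    traj[n] = x

print("sample mean and variance of the resolved variable:", traj.mean(), traj.var())
```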
Abstract:
The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and IJ Good's logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is not insensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing systems based on similar physical simulation models only with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
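For readers unfamiliar with these scores, the sketch below evaluates a Gaussian probability forecast of a scalar outcome with all three: the closed-form Gaussian CRPS, one common (negatively oriented) convention of the Proper Linear (quadratic) score, and Ignorance. The formulas follow the standard proper-scoring literature (e.g. Gneiting and Raftery, 2007); the forecast and outcome values are made-up illustrations, not the hindcasts analysed in the paper. Note that only Ignorance is local, depending on the forecast density solely at the verifying outcome.

```python
import numpy as np
from scipy.stats import norm

# Three proper scores for a Gaussian forecast N(mu, sigma^2) of an outcome y.
# Negatively oriented convention throughout: smaller is better.

def crps_gaussian(mu, sigma, y):
    """Continuous Ranked Probability Score for a Gaussian forecast (closed form)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def proper_linear(mu, sigma, y):
    """Proper Linear (quadratic) score: integral of the density squared minus twice the density at y."""
    return 1 / (2 * sigma * np.sqrt(np.pi)) - 2 * norm.pdf(y, mu, sigma)

def ignorance(mu, sigma, y):
    """IJ Good's logarithmic score (Ignorance): minus log2 of the forecast density at the outcome."""
    return -np.log2(norm.pdf(y, mu, sigma))

mu, sigma, y = 0.3, 0.15, 0.62   # a forecast that misses the outcome by roughly two standard deviations
for name, score in [("CRPS", crps_gaussian), ("PL", proper_linear), ("IGN", ignorance)]:
    print(name, round(float(score(mu, sigma, y)), 4))
```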
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
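As a small illustration of the third point, the sketch below simulates a bivariate VAR(1) and lets statsmodels report the lag length preferred by each information criterion, including Hannan-Quinn (the rank-selection part of the paper's exercise is not reproduced here). The data-generating process and sample size are arbitrary assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulate a bivariate VAR(1) as toy data (illustrative, not one of the paper's Monte-Carlo designs).
rng = np.random.default_rng(1)
T = 400
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])              # assumed VAR(1) coefficient matrix
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + 0.5 * rng.standard_normal(2)

data = pd.DataFrame(y, columns=["y1", "y2"])
selection = VAR(data).select_order(maxlags=8)   # compares AIC, BIC, FPE and HQIC
print(selection.summary())
print("lag length chosen by Hannan-Quinn:", selection.selected_orders["hqic"])
```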
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte-Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
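As a toy illustration of the permanent-transitory idea (univariate only, not the multivariate decomposition under common-cycle restrictions proposed in the paper), the sketch below performs a Beveridge-Nelson split assuming the first difference of a simulated (log) price follows an AR(1). All data and parameters are made-up assumptions.

```python
import numpy as np

# Minimal univariate Beveridge-Nelson permanent/transitory decomposition.
rng = np.random.default_rng(2)

# Simulated series: a random walk whose growth rate is AR(1) (stand-in data, not the paper's).
T, mu, phi = 500, 0.02, 0.6
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + 0.1 * rng.standard_normal()
y = np.cumsum(dy)

# Estimate the AR(1) coefficient of the growth rate by OLS.
x = dy[:-1] - dy.mean()
phi_hat = (x @ (dy[1:] - dy.mean())) / (x @ x)

# BN permanent component: the current level plus all expected future growth in excess of the drift.
trend = y + (phi_hat / (1 - phi_hat)) * (dy - dy.mean())
cycle = y - trend                    # transitory (cyclical) component
print("std of the transitory component:", cycle.std())
```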
Abstract:
Motivated by the debate involving structural and reduced-form models, in this article we propose an empirical approach whose objective is to see whether the imposition of structural restrictions improves forecasting power vis-à-vis unrestricted or partially restricted models. To answer our question, we produce forecasts using aggregate data on U.S. stock prices and dividends. To that end, we exploit the cointegration restrictions, the weak-form common cycle restrictions, and the restrictions on the VECM parameters imposed by the Present Value model. We use the Giacomini and White (2006) test of equal conditional predictive ability to compare the forecasts produced by this model with those from less restricted models. Overall, we find that the partially restricted models delivered the best results, while the fully restricted PV model did not achieve the same success.
Abstract:
The employment of flexibility in the design of façades makes them adaptable to adverse weather conditions, resulting in both minimization of environmental discomfort and improvement of energy efficiency. The present study highlights the potential of flexible façades as a resource to reduce the rigidity and formal repetition usually found in condominiums of standardized houses; as such, the work presented herein contributes to the study of architectural design strategies for adapting and integrating buildings within the local climate context. Two façade options were designed using bionics and kinetics as references, as well as their applications to architectural construction. This resulted in two lightweight and dynamic structures, which meet comfort requirements through combinations of movements that control the impact of solar radiation and cooling on the environment. The efficacy and technical functionality of the façades were tested with comfort analysis and graphic computation software, as well as with physical models. Thus, the current research contributes to the improvement of architectural solutions aimed at using passive energy strategies in order to offer both better quality for the users and sustainability for the planet.
Abstract:
Geological modeling allows, at laboratory scale, the simulation of the geometric and kinematic evolution of geological structures. The importance of knowledge of these structures grows when we consider their role in the creation of traps or conduits for oil and water. In the present work we simulated the formation of folds and faults in an extensional environment, through physical and numerical modeling, using a sandbox apparatus and the MOVE2010 software. The physical modeling of structures developed in the hangingwall of a listric fault showed the formation of active and inactive axial zones. In consonance with the literature, we verified the formation of a rollover between these two axial zones. The crestal collapse of the anticline formed grabens, limited by secondary faults, perpendicular to the extension, with a curvilinear aspect. Adjacent to these faults we registered the formation of transversal folds, parallel to the extension, characterized by a syncline in the fault hangingwall. We also observed drag folds near the fault surfaces; these folds are parallel to the fault surface and present an anticline in the footwall and a syncline in the hangingwall. To observe the influence of geometrical variations (dip and width) in the flat of a flat-ramp fault, we ran two experimental series, the first with the flat varying in dip and width and the second keeping the variation in width but with a horizontal flat. These experiments developed secondary faults, perpendicular to the extension, that were grouped in three sets: i) antithetic faults with a curvilinear geometry and synthetic faults with a more rectilinear geometry, both nucleated at the base of the sedimentary pile; the normal antithetic faults can rotate during the extension, presenting a pseudo-reverse kinematics; ii) faults nucleated at the top of the sedimentary pile, whose propagation occurs through coalescence of segments, sometimes originating relay ramps; iii) reverse faults nucleated at the flat-ramp interface. Comparing the two models, we verified that the dip of the flat favors a differentiated nucleation of the faults at the two extremities of the master fault. These two flat-ramp models also generated an anticline-syncline pair, drag folds and transversal folds. The anticline was formed above the flat, being sub-parallel to the master fault plane, while the syncline was formed in areas more distal from the fault. Due to the geometric variation of these two folds we can define three structural domains. Using the physical experiments as a template, we also performed numerical modeling experiments with flat-ramp faults presenting variation in the flat. Secondary antithetic, synthetic and reverse faults were generated in both models. The numerical modeling formed two folds, an anticline above the flat and a syncline further away from the master fault. The geometric variation of these two folds allowed the definition of three structural domains parallel to the extension. These data reinforce the physical models. The comparison between natural data of a flat-ramp fault in the Potiguar basin and the data of physical and numerical simulations showed that, in both cases, the variation of the geometry of the flat produces variations in the hangingwall geometry.
Abstract:
The Kaup-Newell (KN) hierarchy contains the derivative nonlinear Schrödinger equation (DNLSE) among other interesting and important nonlinear integrable equations. In this paper, a general higher grading affine algebraic construction of integrable hierarchies is proposed and the KN hierarchy is established in terms of an Ŝℓ2 Kac-Moody algebra and principal gradation. In this form, our spectral problem is linear in the spectral parameter. The positive and negative flows are derived, showing that some interesting physical models arise from the same algebraic structure. For instance, the DNLSE is obtained as the second positive flow, while the Mikhailov model arises as the first negative flow. The equivalence between the latter and the massive Thirring model is also explicitly demonstrated. The algebraic dressing method is employed to construct soliton solutions in a systematic manner for all members of the hierarchy. Finally, the equivalence of the spectral problem introduced in this paper with the usual one, which is quadratic in the spectral parameter, is achieved by a particular automorphism of the affine algebra, which maps the homogeneous into the principal gradation. © 2013 IOP Publishing Ltd.
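For reference, the derivative NLS equation mentioned above is usually quoted in the following Kaup-Newell form; sign and normalization conventions vary between papers, so this is the standard textbook form rather than necessarily the exact normalization used in the paper.

```latex
% Derivative nonlinear Schrodinger equation (Kaup--Newell form), standard convention:
\begin{equation}
  i\,\partial_t q + \partial_x^2 q + i\,\partial_x\!\left(|q|^2 q\right) = 0 .
\end{equation}
```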
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)