906 results for Partition of unity implicits
Abstract:
Throughout the Christian story, Church doctrine and ecclesiology have been shrouded in controversy. From the Council of Nicaea in 325, when the early Church fathers debated the Trinity, all the way to the modern day with Vatican II, theological controversies have been important in molding Christian doctrine on the structure, role, and function of the Church. What makes those controversies different from the ones I treat in my thesis is that they did not lead to schismatic divisions in the Church. The Donatist controversy and Luther's theological battle with Karlstadt were major movements that endangered the unity of the Church. These controversies produced crucial writings and teachings in two major areas. The first is the spiritual power and validity of the sacraments. The second is the role, function, and ecclesiology of the Church, with particular attention to the authority of the ministry. I want to demonstrate that these controversies refined the Church's thinking on sacramental issues such as baptism and the Eucharist, and that they addressed two questions: who holds power in the Church, and to what extent may they press reforms?
Abstract:
After more than forty years of growth studies, two classes of growth models have emerged: exogenous and endogenous growth models. Since both try to mimic the same set of long-run stylized facts, they are observationally equivalent in some respects. Our goals in this paper are twofold. First, we discuss the time-series properties of growth models in a way that is useful for assessing their fit to the data. Second, we investigate whether these two models successfully conform to U.S. post-war data. We use cointegration techniques to estimate and test long-run capital elasticities, exogeneity tests to investigate the exogeneity status of TFP, and Granger-causality tests to examine the temporal precedence of TFP with respect to infrastructure expenditures. The empirical evidence robustly confirms the existence of a unity long-run capital elasticity. The analysis of TFP reveals that it is not weakly exogenous in the exogenous growth model. Granger-causality test results show unequivocally that there is no evidence that TFP, in either model, precedes infrastructure expenditures. On the contrary, we find some evidence that infrastructure investment precedes TFP. Our estimated impact of infrastructure on TFP lies roughly in the interval (0.19, 0.27).
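For readers unfamiliar with the method, here is a minimal sketch of a Granger-causality test of the kind this abstract describes, using statsmodels on simulated series; the names tfp and infra, the AR structure, and the lag order are illustrative assumptions, not the paper's data or specification.

```python
# Hypothetical illustration only: does infrastructure Granger-cause TFP?
# Series names, AR coefficients, and lag order are assumptions for this sketch.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
infra = np.zeros(n)
tfp = np.zeros(n)
for t in range(1, n):
    infra[t] = 0.5 * infra[t - 1] + rng.normal()                   # stationary AR(1)
    tfp[t] = 0.3 * tfp[t - 1] + 0.4 * infra[t - 1] + rng.normal()  # driven by lagged infra

# statsmodels convention: tests whether the SECOND column Granger-causes the first.
data = np.column_stack([tfp, infra])
grangercausalitytests(data, maxlag=4)
```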
Abstract:
Using McKenzie’s taxonomy of optimal accumulation in the long run, we report a “uniform turnpike” theorem of the third kind in a model original to Robinson, Solow and Srinivasan (RSS), and further studied by Stiglitz. Our results are presented in the undiscounted, discrete-time setting emphasized in the recent work of Khan-Mitra, and they rely on the importance of strictly concave felicity functions, or alternatively, on the value of a “marginal rate of transformation”, ξσ, from one period to the next not being unity. Our results, despite their specificity, contribute to the methodology of intertemporal optimization theory, as developed in economics by Ramsey, von Neumann and their followers.
Abstract:
The initial endogenous growth models emphasized the importance of external effects in explaining sustainable growth across time. Empirically, this hypothesis can be confirmed if the coefficient of physical capital per hour is unity in the aggregate production function. Although cross-section results concur with theory, previous estimates using time-series data rejected this hypothesis, showing a small coefficient far from unity. It seems that the problem lies not with the theory but with the techniques employed, which are unable to capture low-frequency movements in high-frequency data. This paper uses cointegration - a technique designed to capture the existence of long-run relationships in multivariate time series - to test the externalities hypothesis of endogenous growth. The results confirm the theory and conform to previous cross-section estimates. We show that there is long-run proportionality between output per hour and a measure of capital per hour. Using this result, we confirm the hypothesis that the implied Solow residual can be explained by government expenditures on infrastructure, which suggests a supply-side role for government in affecting productivity and a decrease in the extent to which the Solow residual explains the variation of output.
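A minimal Engle-Granger-style sketch of the cointegration test of long-run proportionality described in this abstract; the simulated series and variable names are assumptions for illustration, and the paper's actual estimator may differ.

```python
# Hypothetical Engle-Granger-style check of a unit long-run capital elasticity;
# the data are simulated, not the paper's.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
log_k = rng.normal(size=300).cumsum()            # I(1) stand-in for log capital per hour
log_y = log_k + rng.normal(scale=0.5, size=300)  # cointegrated, coefficient of one

# Step 1: cointegrating regression; the externalities hypothesis implies a slope near 1.
fit = sm.OLS(log_y, sm.add_constant(log_k)).fit()
print("estimated long-run elasticity:", fit.params[1])

# Step 2: if the series are cointegrated, the residuals are stationary.
# (Engle-Granger critical values differ from the standard ADF tables.)
adf_stat, pvalue, *_ = adfuller(fit.resid)
print("ADF statistic on residuals:", adf_stat)
```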
Abstract:
The literature on the welfare costs of inflation universally assumes that the many-person household can be treated as a single economic agent. This paper explores what the heterogeneity of the agents in a household might imply for such welfare analyses. First, we show that allowing for a single-unity or a multi-unity transacting technology impacts the money demand function and, therefore, the welfare costs of inflation. Second, we derive sufficient conditions that make the welfare assessments which depart directly from the knowledge of the money demand function (as in Lucas (2000)) robust under this alternative setting. Third, we compare our general-equilibrium measure with Bailey’s (1956) partial-equilibrium one.
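For reference, a standard statement of Bailey's (1956) partial-equilibrium measure cited in this abstract, in the notation popularized by Lucas (2000); m(r) denotes money demand at nominal interest rate r, and this generic form is not necessarily the paper's own.

```latex
% Bailey's (1956) partial-equilibrium welfare cost of inflation: the
% consumer-surplus area under the inverse money demand curve \psi(m),
% with m(r) the money demand at nominal interest rate r.
\[
  B(r) \;=\; \int_{m(r)}^{m(0)} \psi(x)\, dx
       \;=\; \int_{0}^{r} m(s)\, ds \;-\; r\, m(r).
\]
```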
Abstract:
We study constrained efficient aggregate risk sharing and its consequences for the behavior of macro-aggregates in a dynamic Mirrlees (1971) setting. Privately observed idiosyncratic productivity shocks are assumed to be independent of i.i.d. publicly observed aggregate shocks. Yet, private allocations display memory with respect to past aggregate shocks even when idiosyncratic shocks are also i.i.d. Under a mild restriction on the nature of optimal allocations, the result extends to more persistent idiosyncratic shocks, for all but the limit at which idiosyncratic risk disappears and the model collapses to a pure-heterogeneity repeated Mirrlees economy identical to Werning [2007]. When preferences are iso-elastic, we show that an allocation is memoryless only if it displays a strong form of separability with respect to aggregate shocks. Separability characterizes the pure-heterogeneity limit as well as the general case with log preferences. With less than full persistence and risk aversion different from unity, both memory and non-separability characterize optimal allocations. Exploiting the fact that non-separability is associated with state-varying labor wedges, we apply a business-cycle accounting procedure (e.g., Chari et al. [2007]) to the aggregate data generated by the model. We show that, whenever risk aversion is greater than one, our model produces efficient counter-cyclical labor wedges.
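As orientation for the business-cycle accounting step mentioned above, here is a textbook definition of the labor wedge in the spirit of Chari et al. [2007]; the functional notation is generic, not the paper's.

```latex
% Labor wedge \tau_{n,t}: the gap between the marginal rate of substitution of
% consumption for leisure and the marginal product of labor,
\[
  (1 - \tau_{n,t})\, A_t F_n(k_t, n_t)
    \;=\; \frac{-U_n(c_t, n_t)}{U_c(c_t, n_t)},
\]
% so a state-varying \tau_{n,t} is exactly what the accounting procedure recovers
% from aggregate consumption, hours, and output.
```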
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A main purpose of a mathematical nutrition model (a.k.a. feeding system) is to provide a mathematical approach for determining the amount and composition of the diet necessary for a certain level of animal productive performance. Therefore, feeding systems should be able to predict voluntary feed intake and to partition nutrients into different productive functions and performances. In the last decades, several feeding systems for goats have been developed. The objective of this paper is to compare and evaluate the main goat feeding systems (AFRC, CSIRO, NRC, and SRNS), using data on individual growing goat kids from seven studies conducted in Brazil. The feeding systems were evaluated by regressing the residuals (observed minus predicted) on the predicted values centered on their means. The comparisons showed that these systems differ in their approach for estimating dry matter intake (DMI) and energy requirements for growing goats. The AFRC system was the most accurate for predicting DMI (mean bias = 91 g/d, P < 0.001; linear bias = 0.874). The average ADG accounted for a large part of the bias in the prediction of DMI by the CSIRO, NRC, and, mainly, AFRC systems. The CSIRO model gave the most accurate predictions of ADG when observed DMI was used as input in the models (mean bias = 12 g/d, P < 0.001; linear bias = -0.229), while the AFRC was the most accurate when predicted DMI was used (mean bias = 8 g/d, P > 0.1; linear bias = -0.347). (C) 2011 Elsevier B.V. All rights reserved.
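A minimal sketch of the evaluation procedure this abstract describes: residuals regressed on mean-centered predictions, so that the intercept estimates the mean bias and the slope the linear bias. The numbers are made up for illustration and are not the study's data.

```python
# Hypothetical sketch of the residual-on-centered-prediction evaluation:
# intercept ~ mean bias, slope ~ linear bias. Numbers are invented.
import numpy as np
import statsmodels.api as sm

observed = np.array([610.0, 655.0, 700.0, 580.0, 720.0, 690.0])   # e.g. DMI, g/d
predicted = np.array([540.0, 600.0, 630.0, 520.0, 650.0, 610.0])  # model predictions

residuals = observed - predicted
centered = predicted - predicted.mean()

fit = sm.OLS(residuals, sm.add_constant(centered)).fit()
mean_bias, linear_bias = fit.params
print(f"mean bias = {mean_bias:.1f} g/d, linear bias = {linear_bias:.3f}")
```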
Abstract:
The Topliss method was used to guide a synthetic path in support of drug discovery efforts toward the identification of potent antimycobacterial agents. Salicylic acid and its derivatives, p-chloro, p-methoxy, and m-chlorosalicylic acid, exemplify a series of synthetic compounds whose minimum inhibitory concentrations for a strain of Mycobacterium were determined and compared to those of the reference drug, p-aminosalicylic acid. Several physicochemical descriptors (including Hammett's sigma constant, ionization constant, dipole moment, Hansch constant, calculated partition coefficient, Sterimol L and B4, and molecular volume) were considered to elucidate structure-activity relationships. Molecular electrostatic potential and molecular dipole moment maps were also calculated using the AM1 semi-empirical method. Among the new derivatives, m-chlorosalicylic acid showed the lowest minimum inhibitory concentration. The overall results suggest that both physicochemical properties and electronic features may influence the biological activity of this series of antimycobacterial agents and thus should be considered in designing new p-aminosalicylic acid analogs.
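For context, a generic Hansch-type QSAR relationship involving two of the descriptors listed above (the Hansch hydrophobic constant π and Hammett's σ); this textbook form is an illustration only, not the fitted model of the paper.

```latex
% Generic Hansch-type relation between activity (1/C, with C the minimum
% inhibitory concentration) and substituent descriptors: the hydrophobic
% constant \pi and the Hammett constant \sigma, with fitted coefficients a, b, c.
\[
  \log \frac{1}{C} \;=\; a\,\pi \;+\; b\,\sigma \;+\; c.
\]
```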
Abstract:
1. The actions of the alpha(1)-adrenoceptor antagonist indoramin have been examined against the contractions induced by noradrenaline in the rat vas deferens and aorta, taking into account a putative neuronal uptake blocking activity of this antagonist which could result in self-cancelling actions.
2. Indoramin behaved as a simple competitive antagonist of the contractions induced by noradrenaline in the vas deferens and aorta, yielding pA(2) values of 7.38 +/- 0.05 (slope = 0.98 +/- 0.03) and 6.78 +/- 0.14 (slope = 1.08 +/- 0.06), respectively.
3. When the experiments were repeated in the presence of cocaine (6 μM), the potency (pA(2)) of indoramin in antagonizing the contractions of the vas deferens to noradrenaline was increased to 8.72 +/- 0.07 (slope = 1.10 +/- 0.05), while its potency remained unchanged in the aorta (pA(2) = 6.69 +/- 0.12; slope = 1.04 +/- 0.05).
4. In the denervated vas deferens, indoramin antagonized the contractions to noradrenaline with a potency similar to that found in the presence of cocaine (pA(2) = 8.79 +/- 0.07; slope = 1.09 +/- 0.06).
5. It is suggested that indoramin blocks alpha(1)-adrenoceptors and neuronal uptake in the rat vas deferens, resulting in Schild plots with slopes not different from unity even in the absence of selective inhibition of neuronal uptake. As a major consequence of this double mechanism of action, the pA(2) values for this antagonist are underestimated when calculated in situations where the neuronal uptake is active, yielding spurious pK(B) values.
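A reference statement of the Schild relation underlying the pA(2) values and slopes reported in this abstract; the notation (dose ratio r, antagonist concentration [B], dissociation constant K_B) is the textbook one, not necessarily the paper's.

```latex
% Schild regression for a simple competitive antagonist: the agonist dose
% ratio r at antagonist concentration [B] satisfies r = 1 + [B]/K_B, hence
\[
  \log(r - 1) \;=\; \log [B] \;-\; \log K_B ,
\]
% a line of unit slope; pA_2 (the value of -\log[B] giving r = 2) then
% estimates pK_B = -\log K_B.
```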
Abstract:
This study tested the use of ventilatory frequency (VF) as an indicator of stress in the Nile tilapia, Oreochromis niloticus (L.). Firstly, we tested the relationship between VF and plasma cortisol after confinement. Confined fish showed higher VF and plasma cortisol levels, but the latter continued to increase significantly for a longer time than VF. Secondly, we conducted another experiment to test the use of VF as an indicator of fish stress. In four out of six treatments, we confined the fish for different intervals (30 s, 5, 15 or 30 min). The others were used as controls. In one, no handling was imposed. The other control consisted of introducing the partition (the same used to perform the confinement) into the aquarium for less than 4 s, without confinement, and immediately removing it (partition control). Ventilatory frequency increased as much for the partition control as for the longest duration of confinement. This clearly indicates that VF is a very sensitive response to disturbance, but of limited use because this parameter does not reflect the severity of the stimulus. Thus, although VF is a non-invasive technique that does not require sophisticated recording equipment, its usefulness is limited. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Several factors render carotenoid determination inherently difficult. Thus, in spite of advances in analytical instrumentation, discrepancies in quantitative results on carotenoids can be encountered in the international literature. A good part of the errors comes from the pre-chromatographic steps, such as: a sampling scheme that does not yield samples representative of the food lots under investigation; sample preparation that does not maintain representativity and guarantee homogeneity of the analytical sample; incomplete extraction; physical losses of carotenoids during the various steps, especially during partition or washing and by adsorption to the glass walls of containers; and isomerization and oxidation of carotenoids during analysis. On the other hand, although currently considered the method of choice for carotenoids, high performance liquid chromatography (HPLC) is subject to various sources of error, such as: incompatibility of the injection solvent and the mobile phase, resulting in distorted or split peaks; erroneous identification; unavailability, impurity and instability of carotenoid standards; quantification of highly overlapping peaks; low recovery from the HPLC column; errors in the preparation of standard solutions and in the calibration procedure; and calculation errors. Illustrations of the possible errors in the quantification of carotenoids by HPLC are presented.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)