935 results for Leontief Input-Output model
Abstract:
In this article we extend the rational partisan model of Alesina and Gatti (1995) to include a second policy, fiscal policy, besides monetary policy. It is shown that, with this extension, the politically induced variance of output is not always eliminated, or even reduced, by delegating monetary policy to an independent and conservative central bank. Further, inflation and output stabilisation will be affected by the degree of conservativeness of the central bank and by the probability of the less inflation-averse party gaining power. Keywords: rational partisan theory; fiscal policy; independent central bank JEL Classification: E58, E63.
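For orientation, rational partisan models in this tradition typically give each party i a quadratic loss over inflation and output; a generic sketch (not necessarily the paper's exact specification) is

    \[ L_i = \tfrac{1}{2}\,(\pi - \bar{\pi}_i)^2 + \tfrac{b_i}{2}\,(y - \bar{y})^2 , \]

where \(\pi\) is inflation, \(y\) is output, and a more conservative (inflation-averse) policymaker carries a smaller relative weight \(b_i\) on output stabilisation. Electoral uncertainty over which loss function will govern policy is what generates the politically induced output variance that the article studies.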
Abstract:
A parametric procedure for the blind inversion of nonlinear channels is proposed, based on a recent method of blind source separation in nonlinear mixtures. Experiments show that the proposed algorithms perform efficiently, even in the presence of hard distortion. The method, based on the minimization of the output mutual information, requires knowledge of the log-derivative of the input distribution (the so-called score function). Each algorithm consists of three adaptive blocks: one devoted to the adaptive estimation of the score function, and two others estimating the inverses of the linear and nonlinear parts of the channel, (quasi-)optimally adapted using the estimated score functions. This paper is mainly concerned with the nonlinear part, for which we propose two parametric models, the first based on a polynomial model and the second on a neural network, whereas [14, 15] proposed non-parametric approaches.
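The score-function block is the easiest piece to illustrate in isolation. Below is a minimal kernel-based sketch in Python of the estimator psi(y) = -(log p)'(y); the Gaussian kernel and the fixed bandwidth h are our choices, not the paper's adaptive scheme.

    import numpy as np

    def empirical_score(y, h=0.3):
        # Estimate the score function psi(y) = -d/dy log p(y) at each sample
        # via a Gaussian kernel density estimate of p and of its derivative
        # (the kernel normalization constants cancel in the ratio).
        u = (y[:, None] - y[None, :]) / h       # pairwise scaled differences
        k = np.exp(-0.5 * u**2)                 # unnormalized Gaussian kernel
        p = k.mean(axis=1)                      # ~ p(y_i), up to a constant
        dp = (-u / h * k).mean(axis=1)          # ~ p'(y_i), same constant
        return -dp / (p + 1e-12)

    # Sanity check: for a standard normal input, psi(y) = y.
    y = np.random.default_rng(0).standard_normal(1000)
    print(np.corrcoef(empirical_score(y), y)[0, 1])   # close to 1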
Abstract:
Motivation: Hormone pathway interactions are crucial in shaping plant development, such as the synergism between the auxin and brassinosteroid pathways in cell elongation. Both hormone pathways have been characterized in detail, revealing several feedback loops. The complexity of this network, combined with a shortage of kinetic data, renders its quantitative analysis virtually impossible at present. Results: As a first step towards overcoming these obstacles, we analyzed the network using a Boolean logic approach to build models of auxin and brassinosteroid signaling, and of their interaction. To compare these discrete dynamic models across conditions, we transformed them into qualitative continuous systems, which predict network component states more accurately and can accommodate kinetic data as they become available. To this end, we developed an extension for the SQUAD software, allowing semi-quantitative analysis of network states. Contrasting the developmental output depending on cell type-specific modulators enabled us to identify the most parsimonious model, which explains initially paradoxical mutant phenotypes and revealed a novel physiological feature.
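The Boolean-to-continuous transformation can be sketched compactly. In SQUAD-style qualitative continuous systems, each node activity x in [0, 1] relaxes toward a normalized sigmoid of its combined regulatory input; the two-node wiring below is a toy of our own, not the auxin/brassinosteroid network itself.

    import numpy as np

    def squad_step(x, w, dt=0.01, h=10.0, gamma=1.0):
        # One Euler step of a SQUAD-style qualitative continuous system:
        # each node relaxes toward a sigmoid of its input w, normalized so
        # that w = 0 maps to activity 0 and w = 1 maps to activity 1.
        act = (-np.exp(0.5 * h) + np.exp(-h * (w - 0.5))) / \
              ((1 - np.exp(0.5 * h)) * (1 + np.exp(-h * (w - 0.5))))
        return x + dt * (act - gamma * x)

    # Toy wiring: node 0 is held on by a constant input; node 1 is
    # activated by node 0 (hypothetical, for illustration only).
    x = np.array([0.0, 0.0])
    for _ in range(2000):
        x = squad_step(x, np.array([1.0, x[0]]))
    print(x)   # both activities approach their steady qualitative levels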
Abstract:
This paper investigates the asymptotic uniform power allocation capacity of frequency non-selective multiple-input multiple-output channels with fading correlation at either the transmitter or the receiver. We consider the asymptotic situation, where the number of inputs and outputs increase without bound at the same rate. A simple uniparametric model for the fading correlation function is proposed and the asymptotic capacity per antenna is derived in closed form. Although the proposed correlation model is introduced only for mathematical convenience, it is shown that its shape is very close to an exponentially decaying correlation function. The asymptotic expression obtained provides a simple and yet useful way of relating the actual fading correlation to the asymptotic capacity per antenna from a purely analytical point of view. For example, the asymptotic expressions indicate that fading correlation is more harmful when arising at the side with fewer antennas. Moreover, fading correlation does not influence the rate of growth of the asymptotic capacity per receive antenna at high Eb/N0.
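A finite-size Monte Carlo run makes the setting concrete. The sketch below uses an exponentially decaying receive-correlation matrix R_ij = rho^|i-j| as a stand-in for the paper's uniparametric model and estimates the uniform-power capacity per antenna; the SNR and trial count are arbitrary choices.

    import numpy as np

    def capacity_per_antenna(n, snr=10.0, rho=0.5, trials=200, seed=0):
        # Monte Carlo estimate of the uniform-power MIMO capacity per
        # antenna with receive-side correlation R_ij = rho**|i - j|.
        rng = np.random.default_rng(seed)
        idx = np.arange(n)
        R = rho ** np.abs(np.subtract.outer(idx, idx))
        vals, vecs = np.linalg.eigh(R)
        R_half = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T
        total = 0.0
        for _ in range(trials):
            G = (rng.standard_normal((n, n))
                 + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            H = R_half @ G                      # correlated at the receiver
            M = np.eye(n) + (snr / n) * (H @ H.conj().T)
            total += np.linalg.slogdet(M)[1] / np.log(2)
        return total / (trials * n)

    print(capacity_per_antenna(16, rho=0.0))   # i.i.d. baseline
    print(capacity_per_antenna(16, rho=0.9))   # strong correlation: lower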
Abstract:
Output, inflation and interest rates are key macroeconomic variables, in particular for monetary policy. In modern macroeconomic models they are driven by random shocks which feed through the economy in various ways. Models differ in the nature of the shocks and their transmission mechanisms. This is the common theme underlying the three essays of this thesis, each of which takes a different perspective on the subject. First, the thesis shows empirically how different shocks lead to different behavior of interest rates over the business cycle. For commonly analyzed shocks (technology and monetary policy errors), the patterns square with standard models; the big unknown is the source of inflation persistence. The thesis then presents a theory of monetary policy for the case when the central bank can observe structural shocks better than the public. The public will then seek to infer the bank's extra knowledge from its policy actions, and expectation management becomes a key factor of optimal policy. In a simple New Keynesian model, monetary policy becomes more concerned with inflation persistence than it otherwise would be. Finally, the thesis points to the huge uncertainties involved in estimating the responses to structural shocks with permanent effects.
Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and it will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means the transition from the steady initial state towards another steady state that corresponds to the changed process conditions.

The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not a part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid.

A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not occur in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements. Hence the only option was to determine whether the obtained results were intuitively reasonable and whether the results changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very large alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
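The core of the solution method, finding the pressure at which the stored fluid mass matches the mass balance over a time step, can be sketched as a one-dimensional root-finding problem. In the sketch below, mass_in_boiler(p) is a hypothetical property routine (standing in for the model's fast property calculation), and the bracket assumes the stored mass grows monotonically with pressure.

    from scipy.optimize import brentq

    def step_pressure(p_prev, m_prev, mdot_in, mdot_out, dt, mass_in_boiler):
        # One time step of the mass-balance pressure iteration: find the
        # pressure p at which the process-fluid mass held in the boiler
        # equals the mass implied by the in/out flows over dt.
        target = m_prev + (mdot_in - mdot_out) * dt
        return brentq(lambda p: mass_in_boiler(p) - target,
                      0.5 * p_prev, 2.0 * p_prev)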
Abstract:
The Transtheoretical Model (TTM) of behaviour change is currently one of the most promising models for understanding and promoting behaviour change related to the acquisition of healthy living habits. By means of a bibliographic search of papers adopting a TTM approach to obesity, the present bibliometric study evaluates the scientific output in this field. The results reveal a growing interest in applying this model both to the treatment of obesity and to its prevention. Moreover, author and journal outputs fit the models proposed by Lotka and Bradford, respectively.
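For reference, Lotka's law, the author-productivity benchmark invoked here, predicts that the number of authors publishing n papers falls off roughly as the inverse square of n. A minimal sketch (the exponent c = 2 is Lotka's classical value, not one fitted to this study's data):

    import numpy as np

    def lotka_expected(a1, n_max, c=2.0):
        # Expected number of authors contributing n papers: a1 / n**c,
        # where a1 is the number of single-paper authors.
        n = np.arange(1, n_max + 1)
        return a1 / n ** c

    print(lotka_expected(100, 5))   # approx. [100, 25, 11.1, 6.25, 4]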
Abstract:
Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explain this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (an error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and a correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on a single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
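The two-point calibration mentioned at the end is a simple linear map from impedance-change amplitude to millilitres. A minimal sketch, with hypothetical reference pairs supplied by the caller:

    def two_point_calibration(dz1, sv1, dz2, sv2):
        # Fit gain and offset so that the two reference pairs
        # (impedance change dz, stroke volume sv in ml) are reproduced.
        gain = (sv2 - sv1) / (dz2 - dz1)
        offset = sv1 - gain * dz1
        return lambda dz: gain * dz + offset

    to_ml = two_point_calibration(0.8, 60.0, 1.2, 90.0)   # made-up pairs
    print(to_ml(1.0))   # 75.0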
Abstract:
The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood prone basin, where floods are produced by heavy rainfall events with a large convective component. A second objective is to propose a methodology that improves radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of the observed rainfall, by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. Finally, a sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological behaviour of the basin.
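The Z/R step converts radar reflectivity to rain rate through a power law Z = a·R^b. The sketch below inverts it using the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as placeholders; the study fits separate coefficient pairs for convective and stratiform echoes.

    def rain_rate(dbz, a=200.0, b=1.6):
        # Convert reflectivity in dBZ to linear Z, then invert Z = a * R**b
        # to obtain the rain rate R in mm/h.
        z = 10.0 ** (dbz / 10.0)
        return (z / a) ** (1.0 / b)

    print(rain_rate(40.0))   # ~11.5 mm/h for a moderately strong echo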
Abstract:
PURPOSE: Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. METHODS: The input dataset is composed of manually segmented, anonymized patient computed tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which captures the variability of the C2. RESULTS: The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open-source software package for image analysis and scientific visualization), with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. CONCLUSION: The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
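The PCA step at the heart of the SSM admits a compact sketch. Here shapes is an (n_samples, 3m) array of corresponding surface points, already brought into alignment (e.g. by Procrustes); the variable names are ours, not the paper's.

    import numpy as np

    def build_ssm(shapes):
        # PCA-based shape model: mean shape plus principal modes of
        # variation with their per-mode variances.
        mean = shapes.mean(axis=0)
        U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
        variances = s**2 / (len(shapes) - 1)
        return mean, Vt, variances

    def sample_shape(mean, modes, variances, b):
        # Synthesize a shape from mode coefficients b (one per mode,
        # expressed in standard deviations of that mode).
        return mean + (b * np.sqrt(variances)) @ modes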
Abstract:
Nitric oxide (NO) produced by inducible NO synthase (iNOS, NOS-2) is an important component of the macrophage-mediated immune defense toward numerous pathogens. Murine macrophages produce NO after cytokine activation, whereas, under similar conditions, human macrophages produce low levels of NO or none at all. Although human macrophages can express iNOS mRNA and protein on activation, whether they possess the complete machinery necessary for NO synthesis remains controversial. To define the conditions necessary for human monocytes/macrophages to synthesize NO when expressing a functional iNOS, the human monocytic U937 cell line was engineered to synthesize this enzyme, following infection with a retroviral expression vector containing human hepatic iNOS (DFGiNOS). Northern blot and Western blot analysis confirmed the expression of iNOS in transfected U937 cells at both the RNA and protein levels. NOS enzymatic activity was demonstrated in cell lysates by the conversion of L-[3H]arginine into L-[3H]citrulline, and the production of NO by intact cells was measured by nitrite and nitrate accumulation in culture supernatants. When expressing functional iNOS, U937 cells were capable of releasing high levels of NO. NO production was strictly dependent on supplementation of the culture medium with tetrahydrobiopterin (BH4) and was not modified by stimulation of the cells with different cytokines. These observations suggest that (1) human monocytic U937 cells contain all the cofactors necessary for NO synthesis except BH4, and (2) the failure to detect NO in cytokine-stimulated untransfected U937 cells is due neither to the presence of a NO-scavenging molecule within these cells nor to the destabilization of iNOS protein. DFGiNOS U937 cells represent a valuable human model to study the role of NO in immunity toward tumors and pathogens.
Abstract:
The ability to recognize a shape is linked to figure-ground (FG) organization. Cell preferences appear to be correlated across contrast-polarity reversals and mirror reversals of polygon displays, but much less so across FG reversals. Here we present a network structure which explains both shape coding by simulated IT cells and the suppression of responses to FG-reversed stimuli. In our model, FG segregation is achieved before shape discrimination, which is itself evidenced by the difference in spiking onsets of a pair of output cells. The studied example also includes feature extraction and illustrates a classification of binary images depending on the dominance of vertical or horizontal borders.
Abstract:
A Fortran77 program, SSPBE, designed to solve the spherically symmetric Poisson-Boltzmann equation using a cell model for ionic macromolecular aggregates or macroions is presented. The program includes an adsorption model for ions at the aggregate surface. The working algorithm solves the Poisson-Boltzmann equation in the integral representation using the Picard iteration method. Input parameters are introduced via an ASCII file, sspbe.txt. Output files yield the radial distances versus mean-field potentials and average molar ion concentrations, the molar concentration of ions at the cell boundary, the self-consistent degree of ion adsorption at the surface and other related data. Ion binding to ionic, zwitterionic and reverse micelles is presented as a representative example of the applications of the SSPBE program.
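The Picard (successive substitution) iteration that drives the solver can be sketched generically. Here update(phi) stands for a hypothetical routine applying the integral-form Poisson-Boltzmann update to the radial potential profile; the under-relaxation factor is our addition for robustness, not necessarily part of SSPBE.

    import numpy as np

    def picard(update, phi0, tol=1e-8, mix=0.5, max_iter=500):
        # Iterate phi <- update(phi) until the potential profile stops
        # changing, blending old and new profiles to aid convergence.
        phi = phi0
        for _ in range(max_iter):
            phi_new = update(phi)
            if np.max(np.abs(phi_new - phi)) < tol:
                return phi_new
            phi = mix * phi_new + (1.0 - mix) * phi
        raise RuntimeError("Picard iteration did not converge")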
Abstract:
We examine the scale invariants in the preparation of highly concentrated w/o emulsions at different scales and in varying conditions. The emulsions are characterized using rheological parameters, owing to their highly elastic behavior. We first construct and validate empirical models to describe the rheological properties. These models yield a reasonable prediction of experimental data. We then build an empirical scale-up model to predict the preparation and composition conditions that have to be kept constant at each scale to prepare the same emulsion. For this purpose, three preparation scales with geometric similarity are used. The parameter N·D^α, as a function of the stirring rate N, the scale (D, impeller diameter) and the exponent α (calculated empirically from the regression of all the experiments in the three scales), is defined as the scale invariant that needs to be optimized, once the dispersed phase of the emulsion, the surfactant concentration, and the dispersed phase addition time are set. As far as we know, no other study has obtained a scale-invariant factor N·D^α for the preparation of highly concentrated emulsions prepared at three different scales, covering all three scales, different addition times and surfactant concentrations. The power-law exponent obtained seems to indicate that the scale-up criterion for this system is the power input per unit volume (P/V).
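Keeping N·D^α constant across geometrically similar scales fixes the stirring rate at the new scale. A one-line sketch of that scale-up rule:

    def matched_stirring_rate(n1, d1, d2, alpha):
        # From N1 * D1**alpha = N2 * D2**alpha, solve for N2 at scale D2.
        return n1 * (d1 / d2) ** alpha

For example, constant power per unit volume in the turbulent regime (P ∝ N³D⁵, V ∝ D³) implies α = 2/3, so doubling the impeller diameter lowers the required stirring rate by a factor of 2^(2/3) ≈ 1.6.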
Abstract:
We've developed a new ambient occlusion technique based on an information-theoretic framework. Essentially, our method computes a weighted visibility from each object polygon to all viewpoints; we then use these visibility values to obtain the information associated with each polygon. So, just as a viewpoint has information about the model's polygons, the polygons gather information on the viewpoints. We therefore have two measures associated with an information channel defined by the set of viewpoints as input and the object's polygons as output, or vice versa. From this polygonal information, we obtain an occlusion map that can be used as in classic ambient occlusion techniques. Our approach also offers additional applications, including an importance-based viewpoint-selection guide and a means of enhancing object features and producing nonphotorealistic object visualizations.
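The viewpoint-to-polygon channel can be made concrete with a small sketch. Given a matrix of per-viewpoint visible polygon areas, the per-polygon information below follows the usual information-channel construction; this is our formulation of the idea, and the paper's exact definitions may differ.

    import numpy as np

    def polygon_information(A):
        # A[v, z]: visible (projected) area of polygon z from viewpoint v.
        p_zv = A / A.sum(axis=1, keepdims=True)      # channel p(z | v)
        p_v = np.full(A.shape[0], 1.0 / A.shape[0])  # uniform viewpoint prior
        p_z = p_v @ p_zv                             # polygon marginal p(z)
        p_vz = p_zv * p_v[:, None] / p_z             # reversed channel p(v | z)
        ratio = np.where(p_vz > 0, p_vz / p_v[:, None], 1.0)
        return (p_vz * np.log2(ratio)).sum(axis=0)   # bits per polygon

High-information polygons are those seen from only a few viewpoints, which is what lets this measure double as an occlusion map.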