991 results for Macro Modelling
Abstract:
This paper evaluates new evidence on price setting practices and inflation persistence in the euro area with respect to its implications for macro modelling. It argues that several of the most commonly used assumptions in micro-founded macro models are seriously challenged by the new findings.
Numerical Assessment of the out-of-plane response of a brick masonry structure without box behaviour
Abstract:
This paper presents the assessment of the out-of-plane response to seismic loading of a masonry structure without a rigid diaphragm. The structure corresponds to a real-scale brick masonry specimen with a main façade connected to two return walls. Two modelling approaches were defined for this evaluation: the first consisted of macro modelling, whereas the second used simplified micro modelling. As a first step of this study, static nonlinear analyses were conducted on the macro model, aiming at evaluating the out-of-plane response and failure mechanism of the masonry structure. A sensitivity analysis was performed in order to assess the mesh-size and material-model dependency. In addition, the macro models were subjected to dynamic nonlinear analyses with time integration in order to assess the collapse mechanism. Finally, these analyses were also applied to a simplified micro model of the masonry structure. Furthermore, the results were compared with the experimental response from shaking table tests. It was observed that these numerical techniques correctly simulate the in-plane behaviour of masonry structures. However, the
Abstract:
Specific marine macro algae species abundant on the Portuguese coast (Laminaria hyperborea, Bifurcaria bifurcata, Sargassum muticum and Fucus spiralis) were shown to be effective for removing toxic metals (Cd(II), Zn(II) and Pb(II)) from aqueous solutions. The initial metal concentrations in solution were about 75–100 mg L−1. The observed biosorption capacities for cadmium, zinc and lead ions were in the ranges of 23.9–39.5, 18.6–32.0 and 32.3–50.4 mg g−1, respectively. Kinetic studies revealed that the metal uptake rate was rather fast, with 75% of the total uptake occurring in the first 10 min for all algal species. Experimental data were well fitted by a pseudo-second-order rate equation. The contribution of the internal diffusion mechanism was significant only in the initial biosorption stage. The results indicate that all the studied macro algae species can provide an efficient and cost-effective technology for eliminating heavy metals from industrial effluents.
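For reference, the pseudo-second-order kinetic model named above is commonly written in its standard Ho–McKay form; the exact parameterization used in the paper is not given in the abstract:

\[
\frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
\qquad\Longrightarrow\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e},
\]

where \(q_t\) is the metal uptake per gram of alga at time \(t\), \(q_e\) the equilibrium uptake, and \(k_2\) the pseudo-second-order rate constant; plotting \(t/q_t\) against \(t\) gives the linear fit referred to in the abstract.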
Abstract:
The work done in this thesis attempts to demonstrate the importance of using models that can predict and represent the mobility of our society. To answer the proposed challenges, two models were examined. The first corresponds to a macro simulation intended to find a solution for the service frequency of the bus company Horários do Funchal, responsible for transport in the city of Funchal and some surrounding areas; based on a simplified model of the city, it was possible to increase the frequency of journeys while obtaining an overall reduction in costs. The second model concerns the micro simulation of Avenida do Mar, where a new roundabout (Praça da Autonomia), which connects to this avenue, is currently being built. It was therefore proposed to study the impact on local traffic and the implementation of new traffic lights for this purpose. Four scenarios were created, considering increases in the number of lanes on the roundabout or the insertion of a bus lane. The results showed that a roundabout with three running lanes is the best option, because the waiting queues are minimal and, at the environmental level, this configuration produces fewer pollutants. Thus, this thesis presents two possible methods of urban planning. Transport modelling is an area under constant development; the overall goal is to encourage ever wider use of these models, and it is therefore important that more people devote themselves to studying new ways of addressing current problems, so that the models become more accurate and their credibility increases.
Abstract:
Inspired by dynamical systems theory and Brewer's contributions to applying it to economics, this paper establishes a bond graph model. Two main elements, a set of interconnectivities based on nodes and links (bonds) and a fractional-order dynamical perspective, prove to give a good macro-economic representation of countries' potential performance in today's globalized economy. Estimations based on time series for 50 countries throughout the last 50 decades confirm the accuracy of the model and the importance of scale for economic performance.
Abstract:
Transport is an essential sector in modern societies: it connects economic sectors and industries. Alongside its contribution to economic development and social interconnection, it also causes adverse impacts on the environment and results in health hazards. Transport is a major source of ground-level air pollution, especially in urban areas, thereby contributing to health problems such as cardiovascular and respiratory diseases, cancer, and physical injuries. This thesis presents the results of a health risk assessment that quantifies the mortality and disease associated with particulate matter pollution resulting from urban road transport in Hai Phong City, Vietnam. The focus is on the integration of modelling and GIS approaches in the exposure analysis to increase the accuracy of the assessment and to produce timely and consistent assessment results. Modelling was used to estimate traffic conditions and concentrations of particulate matter based on geo-referenced data. A simplified health risk assessment was also carried out for Ha Noi based on monitoring data, which allows a comparison of the results between the two cases. The results of the case studies show that a health risk assessment based on modelled data can provide much more detailed results and allows the health impacts of different mobility development options to be assessed at the micro level. The use of modelling and GIS as a common platform for the integration of different assessments (environmental, health, socio-economic, etc.) provides various strengths, especially in capitalising on the available data stored in different units and forms, and allows large amounts of data to be handled. From a decision-making point of view, the use of models and GIS in a health risk assessment can reduce the processing/waiting time while providing a view at different scales: from the micro scale (sections of a city) to the macro scale. It also helps visualise the links between air quality and health outcomes, which is useful when discussing different development options. However, a number of improvements can be made to further advance the integration. An improved data integration programme will facilitate the application of integrated models in policy-making. Data from mobility surveys, environmental monitoring and measurement must be standardised and legalised. Various traffic models, together with emission and dispersion models, should be tested, and more attention should be given to their uncertainty and sensitivity.
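As background (not stated in the abstract itself), health impact assessments of particulate matter pollution typically estimate attributable cases with a log-linear concentration–response function of roughly the following form; the specific functions and coefficients used in the thesis are not given here:

\[
RR = \exp\!\bigl(\beta\,\Delta C\bigr),
\qquad
\Delta M = M_0 \cdot P \cdot \frac{RR - 1}{RR},
\]

where \(\Delta C\) is the increase in PM concentration above a reference level, \(\beta\) an epidemiological concentration–response coefficient, \(M_0\) the baseline mortality or morbidity rate, \(P\) the exposed population, and \(\Delta M\) the number of attributable cases.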
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
Abstract:
The questions addressed in the first two articles of my thesis seek to understand the economic factors that affect the term structure of interest rates and the risk premium. I build nonlinear general equilibrium models that incorporate bonds of different maturities. Specifically, the first article aims to understand the relationship between macroeconomic factors and the level of the risk premium in a New Keynesian general equilibrium framework with uncertainty. Uncertainty in the model comes from three sources: productivity shocks, monetary shocks and preference shocks. The model features two types of real rigidities, namely habit formation in preferences and capital stock adjustment costs. The model is solved with a second-order perturbation method and calibrated to the US economy. Since the risk premium is by nature a compensation for risk, the second-order approximation implies that the risk premium is a linear combination of the volatilities of the three shocks. The results show that, with the calibrated parameters, real shocks (productivity and preferences) play a more important role than monetary shocks in determining the level of the risk premium. I show that, contrary to previous work (in which productive capital is fixed), the effect of the habit formation parameter on the risk premium depends on the degree of capital adjustment costs. When capital adjustment costs are so high that the capital stock is fixed in equilibrium, an increase in the habit formation parameter raises the risk premium. By contrast, when agents can adjust the capital stock freely and without cost, the effect of the habit formation parameter on the risk premium is negligible. This result is explained by the fact that when the capital stock can be adjusted at no cost, an additional consumption-smoothing channel opens up for agents, so the effect of habit formation on the risk premium is dampened. The results also show that the way the central bank conducts monetary policy affects the risk premium: the more aggressive the central bank is towards inflation, the lower the risk premium, and vice versa. This is because when the central bank fights inflation, the variance of inflation falls, and the premium compensating for inflation risk therefore decreases. In the second article, I extend the first by using Epstein-Zin recursive preferences and by allowing the conditional volatilities of the shocks to vary over time. This framework is motivated by two considerations. First, recent studies (Doh, 2010; Rudebusch and Swanson, 2012) have shown that these preferences are well suited to asset-pricing analysis in general equilibrium models. Second, heteroscedasticity is a common feature of economic and financial data, which implies that, unlike in the first article, uncertainty varies over time. The framework of this article is therefore more general and more realistic than that of the first.
The main objective of this article is to examine the impact of conditional volatility shocks on the level and dynamics of interest rates and the risk premium. Since the risk premium is constant at a second-order approximation, the model is solved with a third-order perturbation method, which yields a time-varying risk premium. The advantage of introducing conditional volatility shocks is that they induce additional state variables that make a further contribution to the dynamics of the risk premium. I show that the third-order approximation implies that risk premia have an ARCH-M (Autoregressive Conditional Heteroscedasticity in Mean) representation like the one introduced by Engle, Lilien and Robins (1987). The difference is that in this model the parameters are structural and the volatilities are the conditional volatilities of the economic shocks rather than of the variables themselves. I estimate the model parameters by the simulated method of moments (SMM) using US data. The estimation results show evidence of stochastic volatility in all three shocks. Moreover, the contribution of the conditional volatilities of the shocks to the level and dynamics of the risk premium is significant; in particular, the effects of the conditional volatilities of the productivity and preference shocks are significant. The conditional volatility of the productivity shock contributes positively to the means and standard deviations of the risk premia, and these contributions vary with bond maturity. The conditional volatility of the preference shock contributes negatively to the means and positively to the variances of the risk premia. As for the monetary policy volatility shock, its impact on the risk premia is negligible. The third article (co-authored with Eric Schaling and Alain Kabundi, revised and resubmitted to the Journal of Economic Modelling) deals with heterogeneity in the formation of inflation expectations across economic groups and its impact on monetary policy in South Africa. The main question is whether different groups of economic agents form their inflation expectations in the same way and whether they perceive the monetary policy of the central bank (the South African Reserve Bank) in the same way. We specify an inflation forecasting model that allows us to test whether inflation expectations are anchored to the central bank's inflation target band (3%-6%). The data used are survey data collected by the central bank from three groups of agents: financial analysts, firms and trade unions. We exploit the panel structure of the data to test for heterogeneity in inflation expectations and to infer the groups' perceptions of monetary policy. The results show evidence of heterogeneity in the way the different groups form their expectations. Financial analysts' expectations are anchored to the inflation target band, whereas those of firms and trade unions are not: firms and trade unions place a significant weight on one-period lagged inflation, and their forecasts vary with realized (lagged) inflation, which indicates that the central bank lacks perfect credibility in the eyes of these agents.
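For context, the ARCH-M specification of Engle, Lilien and Robins (1987) mentioned in the abstract above takes, in one common form, the following shape; the structural version derived in the article replaces these reduced-form parameters with functions of the model's deep parameters and the conditional volatilities of the economic shocks:

\[
y_t = \mu + \delta\,\sqrt{h_t} + \varepsilon_t,
\qquad
\varepsilon_t \mid \mathcal{F}_{t-1} \sim \mathcal{N}(0, h_t),
\qquad
h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i\,\varepsilon_{t-i}^{2},
\]

so that the conditional mean of \(y_t\) (here, a risk premium) moves with its own conditional volatility.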
Abstract:
The current study is aimed at the development of a theoretical simulation tool based on the Discrete Element Method (DEM) to interpret the granular dynamics of the solid bed in the cross section of a horizontal rotating cylinder at the microscopic level, and subsequently to apply this model to establish the transition behaviour, mixing and segregation. The simulation of the granular motion developed in this work is based on solving Newton's equation of motion for each particle in the granular bed subjected to collisional forces, external forces and boundary forces. At every instant of time, the forces are tracked and the positions, velocities and accelerations of each particle are updated. The software code for this simulation is written in Visual Fortran 90. After checking the validity of the code with special tests, it is used to investigate the transition behaviour of granular solids motion in the cross section of a rotating cylinder for various rotational speeds and fill fractions. This work is hence directed towards a theoretical investigation, based on the Discrete Element Method (DEM), of the motion of granular solids in the radial direction of the horizontal cylinder, to elucidate the relationship between the operating parameters of the rotating cylinder geometry and the physical properties of the granular solid. The operating parameters of the rotating cylinder include the various rotational velocities of the cylinder and the volumetric fill. The physical properties of the granular solids include particle sizes, densities, stiffness coefficients, and coefficients of friction. Further, the work highlights the fundamental basis for the important phenomena of the system, namely: (i) the different modes of solids motion observed in a transverse cross section of the rotating cylinder for various rotational speeds, (ii) the radial mixing of the granular solid in terms of active layer depth, (iii) the rate coefficient of mixing as well as the transition behaviour in terms of the bed turnover time and rotational speed, and (iv) the segregation mechanisms resulting from differences in the size and density of particles. The transition behaviour involving the six different modes of motion of the granular solid bed is quantified in terms of the Froude number, and the results obtained are validated against experimental and theoretical results reported in the literature. The transition from slumping to rolling mode is quantified using the bed turnover time, and a linear relationship is established between the bed turnover time and the inverse of the rotational speed of the cylinder, as predicted by Davidson et al. [2000]. The effects of the rotational speed, fill fraction and coefficient of friction on the dynamic angle of repose are presented and discussed. The variation of active layer depth with respect to fill fraction and rotational speed has been investigated. The results obtained through simulation are compared with the experimental results reported by Van Puyvelde et al. [2000] and Ding et al. [2002]. The theoretical model has been further extended to study the mixing and segregation in the transverse direction for different particle sizes and their size ratios. The effect of fill fraction and rotational speed on the transverse mixing behaviour is presented in the form of a mixing index and a mixing kinetics curve. The segregation pattern obtained by the simulation of the granular solid bed with respect to the rotational speed of the cylinder is presented in both graphical and numerical forms.
The segregation behaviour of the granular solid bed with respect to particle size, density and volume fraction of particle size has been investigated. Several important macro parameters characterising segregation such as mixing index, percolation index and segregation index have been derived from the simulation tool based on first principles developed in this work.
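To make the DEM approach described in the abstract above concrete, the following is a minimal, illustrative Python sketch of one soft-sphere DEM time step with a linear spring-dashpot normal contact force. The force law, parameter values and function names are placeholders for illustration only; they are not taken from the thesis, whose actual code was written in Visual Fortran 90.

import numpy as np

# Illustrative soft-sphere DEM step: linear spring-dashpot normal contact
# plus gravity, advanced with an explicit Euler update. Parameter values
# are placeholders, not those used in the thesis.
K_N = 1.0e4                  # normal spring stiffness (N/m)
C_N = 5.0                    # normal damping coefficient (N s/m)
G = np.array([0.0, -9.81])   # gravity (m/s^2)
DT = 1.0e-5                  # time step (s)

def contact_forces(pos, vel, radius, mass):
    """Pairwise normal contact forces (spring-dashpot) plus gravity."""
    n = len(pos)
    forces = mass[:, None] * G            # gravity on every particle
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:              # particles in contact
                normal = rij / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = K_N * overlap - C_N * rel_vn
                forces[i] -= fn * normal   # equal and opposite forces
                forces[j] += fn * normal
    return forces

def step(pos, vel, radius, mass):
    """Advance all particle positions and velocities by one time step."""
    acc = contact_forces(pos, vel, radius, mass) / mass[:, None]
    vel = vel + acc * DT
    pos = pos + vel * DT
    return pos, vel

A full simulation of the rotating cylinder would add tangential friction, contacts with the rotating wall, and a loop of this step over the simulated time.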
Abstract:
The crisis that broke out in the US mortgage market in 2008 and spread throughout the entire financial system made evident the degree of interconnection that currently exists among the entities of the sector and their relationships with the productive sector, and exposed the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both for individual institutions and for the system as a whole. This paper shows, through a model that combines the informative power of networks with a spatial autoregressive (panel-type) model, the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other entities, thus carrying out a macro-prudential analysis (as proposed in Basel III).
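For reference, a generic spatial autoregressive (spatial lag) panel specification of the kind referred to above can be written as follows; the exact variables, weight matrix and estimator used in the paper are not given in the abstract:

\[
y_{it} = \rho \sum_{j=1}^{N} w_{ij}\, y_{jt} + x_{it}'\beta + \mu_i + \varepsilon_{it},
\]

where \(w_{ij}\) are the entries of a network-based weight matrix \(W\) (here capturing connections between financial entities), \(\rho\) measures the effect of being connected to other entities, \(\mu_i\) is an entity-specific effect and \(\varepsilon_{it}\) an idiosyncratic error.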
Abstract:
Recent coordinated observations of interplanetary scintillation (IPS) from EISCAT, MERLIN, and STELab, and stereoscopic white-light imaging from the two heliospheric imagers (HIs) onboard the twin STEREO spacecraft, make it possible to continuously track the propagation and evolution of solar eruptions throughout interplanetary space. In order to obtain a better understanding of the observational signatures in these two remote-sensing techniques, the magnetohydrodynamics of the macro-scale interplanetary disturbance and the radio-wave scattering of the micro-scale electron-density fluctuation are coupled and investigated using a newly constructed multi-scale numerical model. This model is then applied to a case of an interplanetary shock propagating within the ecliptic plane. The shock could be nearly invisible to an HI once it enters the Thomson-scattering sphere of the HI. The asymmetry in the optical images between the western and eastern HIs suggests shock propagation off the Sun–Earth line. Meanwhile, an IPS signal, strongly dependent on the local electron density, is insensitive to the density cavity far downstream of the shock front. When this cavity (or the shock nose) is cut through by an IPS ray-path, a single speed component at the flank (or the nose) of the shock can be recorded; when an IPS ray-path penetrates the sheath between the shock nose and this cavity, two speed components, at the sheath and at the flank, can be detected. Moreover, once a shock front touches an IPS ray-path, the derived position and speed at the irregularity source of this IPS signal, together with an assumption of radial propagation of the shock at constant speed, can be used to estimate the later appearance of the shock front in the elongation of the HI field of view. The results of synthetic measurements from forward modelling are helpful in inferring the in-situ properties of coronal mass ejections from real observational data via an inverse approach.
Abstract:
Government targets for CO2 reductions are being progressively tightened; the Climate Change Act set the UK target as an 80% reduction by 2050 on 1990 figures. The residential sector accounts for about 30% of emissions. This paper discusses current modelling techniques in the residential sector, principally top-down and bottom-up. Top-down models work on a macro-economic basis and can be used to consider large-scale economic changes; bottom-up models are detail-rich and can model technological changes. Bottom-up models demonstrate what is technically possible. However, there are differences between the technical potential and what is likely, given the limited economic rationality of the typical householder. This paper recommends research to better understand individuals' behaviour. Such research needs to include actual choices, stated preferences and opinion research to allow a detailed understanding of the individual end user. This increased understanding can then be used in an agent-based model (ABM). In an ABM, agents are used to model real-world actors and can be given a rule set intended to emulate the actions and behaviours of real people. This can help in understanding how new technologies diffuse. In this way a degree of micro-economic realism can be added to domestic carbon modelling. Such a model should then be of use both for forward projections of CO2 and for analysing the cost-effectiveness of various policy measures.
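As an illustration of the agent-based approach described above (not taken from the paper), a minimal threshold-adoption model can be sketched as follows: each hypothetical household agent adopts a low-carbon technology once enough other households have adopted it and its cost falls below the agent's own willingness to pay. All class names, thresholds and the cost trajectory are invented for illustration.

import random

# Minimal illustrative ABM of technology diffusion among households.
class Household:
    def __init__(self, rng):
        self.adopted = False
        # A small share of "innovators" adopt regardless of peers.
        self.social_threshold = 0.0 if rng.random() < 0.05 else rng.uniform(0.1, 0.6)
        self.max_cost = rng.uniform(2000, 8000)   # willingness to pay (GBP)

def run(n_agents=1000, n_years=20, start_cost=10000.0, cost_decline=0.93, seed=1):
    rng = random.Random(seed)
    agents = [Household(rng) for _ in range(n_agents)]
    cost = start_cost
    for year in range(n_years):
        share = sum(a.adopted for a in agents) / n_agents
        for a in agents:
            # Rule set: adopt if enough peers have adopted and cost is affordable.
            if not a.adopted and share >= a.social_threshold and cost <= a.max_cost:
                a.adopted = True
        cost *= cost_decline                      # technology gets cheaper each year
        print(f"year {year}: adopted share = {sum(a.adopted for a in agents) / n_agents:.3f}")
    return agents

if __name__ == "__main__":
    run()

Even this toy version reproduces the S-shaped diffusion curve the paper alludes to; a realistic model would calibrate the agents' rules to the stated-preference and opinion research the paper recommends.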
Abstract:
The Arctic Snow Microstructure Experiment (ASMEx) took place in Sodankylä, Finland, in the winters of 2013-2014 and 2014-2015. Radiometric, macro- and microstructure measurements were made under different experimental conditions on homogeneous snow slabs extracted from the natural seasonal taiga snowpack. Traditional and modern measurement techniques were used for snow macro- and microstructure observations. Radiometric measurements of the microwave emission of snow on reflector and absorber bases were made at frequencies of 18.7, 21.0, 36.5, 89.0 and 150.0 GHz, for both horizontal and vertical polarizations. Two measurement configurations were used for the radiometric measurements: a reflecting surface and an absorbing base beneath the snow slabs. Simulations of brightness temperatures using two microwave emission models, the Helsinki University of Technology (HUT) snow emission model and the Microwave Emission Model of Layered Snowpacks (MEMLS), were compared to the observed brightness temperatures. RMSE and bias were calculated; the RMSE and bias values were smallest for the absorbing base at vertical polarization. The simulations overestimated the brightness temperatures for the absorbing-base cases at horizontal polarization. For the other experimental conditions, the biases were small, with the exception of the HUT model 36.5 GHz simulation, which produced an underestimation for the reflector-base cases. This experiment provides a solid framework for future research on the extinction of microwave radiation inside snow.
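For clarity, the two error metrics quoted above are the standard ones rather than anything specific to this experiment:

\[
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(T_{B,i}^{\mathrm{sim}} - T_{B,i}^{\mathrm{obs}}\bigr)^{2}},
\qquad
\mathrm{bias} = \frac{1}{N}\sum_{i=1}^{N}\bigl(T_{B,i}^{\mathrm{sim}} - T_{B,i}^{\mathrm{obs}}\bigr),
\]

where \(T_{B,i}^{\mathrm{sim}}\) and \(T_{B,i}^{\mathrm{obs}}\) are the simulated and observed brightness temperatures over the \(N\) measurement cases.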
Abstract:
Investigating preferential flow, including macropore flow, is crucial to predicting and preventing point sources of contamination in soil, for example in the vicinity of pumping wells. With a view to advancing groundwater protection, this study aimed (i) to quantify the strength of macropore flow in four representative natural grassland soils on the Swiss plateau, and (ii) to define the parameters that significantly control macropore flow in grassland soil. For each soil type we selected three measurement points on which three successive irrigation experiments were carried out, resulting in a total of 36 irrigations. The strength of macropore flow, parameterized as the cumulative water volume flowing from macropores at a depth of 1 m in response to an irrigation of 60 mm h−1 intensity and 1 h duration, was simulated using the dual-permeability MACRO model. The model calibration was based on the key soil parameters and fine measurements of water content at different depths. Modelling results indicate strong macropore flow in all investigated soil types except gleysols. The volume of water that flowed from macropores, and was hence expected to reach groundwater, varied between 81% and 94% in brown soils, 59% and 67% in para-brown soils, 43% and 56% in acid brown soils, and 22% and 35% in gleysols. These results show that spreading pesticides and herbicides in pumping well protection zones poses a high risk of contamination and must be strictly prohibited. We also found that organic carbon content was not correlated with the strength of macropore flow, probably due to its very weak variation in our study, while saturated water content showed a negative correlation with macropore flow. The correlation between saturated hydraulic conductivity (Ks) and macropore flow was negative as well, but weak. Macropore flow appears to be controlled by the interaction between the bulk density of the uppermost topsoil layer (0–0.10 m) and the macroporosity of the soil below. This interaction also affects the variations in Ks and saturated water content. Further investigations are needed to better understand the combined effect of all these processes, including the exchange between micropore and macropore domains.
Abstract:
This paper analyses the most important mechanisms of the Hungarian economy using a medium-sized quarterly macroeconomic model developed jointly by the Economic Policy Department of the Ministry of Finance and the Institute of Economics of the Hungarian Academy of Sciences. After introducing the fundamental principles of the modelling and the building blocks of the model, the authors present, within a scenario analysis, the effects of the main factors behind the macroeconomic and budgetary processes. The sources of uncertainty, defined in a broad sense, are categorized into three groups: changes in the external environment (e.g. the exchange rate), uncertainties in the behaviour of economic agents (e.g. in the speed of wage adjustment or the extent of consumption smoothing), and economic policy decisions (e.g. an increase in public sector wages). The macroeconomic consequences of these uncertainties are shown not to be independent of each other; for instance, the effects of an exchange rate shock are influenced by the speed of wage adjustment.