947 results for modeling and model calibration
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
"Series: Solid mechanics and its applications, vol. 226"
Abstract:
The objective of this paper is to estimate a petrol consumption function for Spain and to evaluate the redistributive effects of petrol taxation. We use micro data from the Spanish Household Budget Survey of 1990/91 and model petrol consumption taking into account the effect that income changes may have on car ownership levels, as well as the differences that exist between expenditure and consumption. Our results show the importance of household structure, place of residence and income for petrol consumption. We are able to compute income elasticities of petrol expenditure, both conditional and unconditional on the level of car ownership. Unconditional elasticities, while always very close to unity, are lower for higher-income households and for those living in rural areas or small cities. When car ownership levels are taken into account, the conditional elasticities obtained are around one half the value of the unconditional ones, and are fairly stable across income categories and city sizes. As regards the redistributive effects of petrol taxation, we observe that for the lowest income deciles the share of petrol expenditure increases with income, and thus the tax can be regarded as progressive. However, after a certain income level the tax proves to be regressive.
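A compact way to read the conditional/unconditional distinction is the stylized two-part decomposition below; the notation is ours, not the paper's, and abstracts from multiple car ownership levels. Writing g for petrol expenditure, c for car ownership and y for income:

```latex
% Stylized two-part decomposition (our notation, not the paper's)
E[g \mid y] = \Pr(c>0 \mid y)\, E[g \mid c>0, y]
\;\Longrightarrow\;
\underbrace{\frac{\partial \ln E[g\mid y]}{\partial \ln y}}_{\text{unconditional}}
=
\underbrace{\frac{\partial \ln \Pr(c>0\mid y)}{\partial \ln y}}_{\text{car ownership}}
+
\underbrace{\frac{\partial \ln E[g\mid c>0, y]}{\partial \ln y}}_{\text{conditional}}
```

Under this reading, near-unit unconditional elasticities split roughly evenly between the response of car ownership to income and the conditional response of petrol spending, which is consistent with the conditional elasticities being about one half of the unconditional ones.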
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These extract the information in these predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows for different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
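As a rough illustration of how the model weights evolve, the sketch below implements the standard forgetting-factor recursion commonly used for dynamic model averaging; the forgetting factor and the predictive likelihoods are placeholder values, not numbers from the paper.

```python
import numpy as np

def dma_update(weights, pred_likelihoods, alpha=0.99):
    """One dynamic-model-averaging step (forgetting-factor recursion).

    weights          -- current model probabilities pi_{t-1|t-1} (sum to 1)
    pred_likelihoods -- predictive density p_k(y_t | y^{t-1}) for each model k
    alpha            -- forgetting factor; alpha = 1 recovers standard Bayesian model averaging
    """
    # Prediction step: flatten yesterday's weights with the forgetting factor
    pred = weights ** alpha
    pred /= pred.sum()
    # Update step: reweight each model by how well it predicted the new observation
    post = pred * pred_likelihoods
    return post / post.sum()

# Toy usage: three candidate forecasting models with equal starting weights
w = np.ones(3) / 3
w = dma_update(w, np.array([0.8, 0.1, 0.3]))  # model 1 forecast best this period
print(w)  # weight shifts toward model 1; dynamic model selection would keep argmax(w)
```

The point of the recursion is simply that weights drift toward whichever parsimonious model has forecast well in the recent past, rather than being fixed once and for all.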
Abstract:
The stylized facts suggest a negative relationship between tax progressivity and the skill premium from the early 1960s until the early 1990s, and a positive one thereafter. They also generally imply rising tax progressivity, except for the 1980s. In this paper, we ask whether optimal tax policy is consistent with these observations, taking into account the demographic and technological factors that have also affected the skill premium. To this end, we construct a dynamic general equilibrium model in which the skill premium and the progressivity of the tax system are endogenously determined, with the latter being optimally chosen by a benevolent government. We find that optimal policy delivers both a progressive tax system and model predictions which are generally consistent, except for the 1980s, with the stylized facts relating to the skill premium and progressivity. To capture the patterns in the data over the 1980s requires that we adopt a government policy which is biased towards the interests of skilled agents. Thus, in addition to demographic and technological factors, changes in the preferences of policy-makers appear to be a potentially important factor in determining the evolution of the observed skill premium.
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise because macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
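For reference, a TVP-VAR of the kind referred to above is usually written in the state-space form below; the notation is ours, and the particular shrinkage and forgetting choices vary across papers.

```latex
% Measurement and state equations of a TVP-VAR (our notation)
y_t = Z_t \beta_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t), \\
\beta_t = \beta_{t-1} + \eta_t, \qquad \eta_t \sim N(0, Q_t),
```

where Z_t stacks an intercept and lags of y_t, and dynamic model selection chooses at each date among models built from different subsets of variables or different restrictions on the time-varying coefficients.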
Abstract:
Fuel cells allow the efficient transformation of the chemical energy of certain fuels into electrical energy through an electrochemical process. Among the different fuel cell technologies, PEM fuel cells are the most competitive and have a wide variety of applications. However, they must be fed exclusively with hydrogen. Ethanol, an attractive fuel in the context of renewable fuels, is a possible source of hydrogen. This work studies ethanol reforming for hydrogen production to feed PEM fuel cells. Only a few publications deal with hydrogen production from ethanol, and they do not include a dynamic study of the system. The objectives of this work are the modeling and dynamic study of low-temperature ethanol reformers. Specifically, it proposes a dynamic model of a catalytic ethanol steam reformer based on a cobalt catalyst. This reforming route achieves high efficiency and carbon monoxide levels low enough to avoid poisoning a PEM fuel cell. The nonlinear model is based on kinetics obtained from laboratory experiments. The modeled reformer operates in three stages: dehydrogenation of ethanol to acetaldehyde and hydrogen, steam reforming of acetaldehyde, and the water gas shift (WGS) reaction. The work also studies the sensitivity and controllability of the system, thus characterizing the system to be controlled. The controllability analysis is performed on the fast dynamic response obtained from the reformer mass balance. The nonlinear model is linearized in order to apply analysis tools such as RGA, CN and MRI. The work provides the information needed to evaluate a possible laboratory implementation of PEM fuel cells fed with hydrogen produced by an ethanol reformer.
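The three stages named in the abstract correspond to the following overall reactions, given here with standard stoichiometry; the kinetic rate expressions fitted in the thesis are not reproduced.

```latex
\mathrm{C_2H_5OH \longrightarrow CH_3CHO + H_2} \quad \text{(ethanol dehydrogenation)} \\
\mathrm{CH_3CHO + H_2O \longrightarrow 2\,CO + 3\,H_2} \quad \text{(acetaldehyde steam reforming)} \\
\mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2} \quad \text{(water gas shift)}
```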
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model where one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer is yes to both questions. We then propose a two-step estimation procedure corresponding to the two questions. The first step estimates the correlated measurement error from a reverse regression; the second step estimates the usual coefficients of interest using adjusted instruments.
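A minimal version of the setup described above, written in our own notation rather than the authors', is:

```latex
% One mismeasured regressor with error correlated with the model error (our notation)
y_{it} = \beta x^{*}_{it} + u_{it}, \qquad
x_{it} = x^{*}_{it} + e_{it}, \qquad
\operatorname{Cov}(e_{it}, u_{it}) = \sigma_{eu},
```

where x*_{it} is the true regressor, x_{it} its mismeasured observation, and the classical errors-in-variables case corresponds to the special case σ_{eu} = 0. In this reading, the first step recovers the correlation parameter from a reverse regression, and the second step uses instruments adjusted for it.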
Abstract:
Organ localization is an important topic in medical imaging as an aid to cancer treatment and diagnosis. One example is found in the calibration of pharmacokinetic models. This calibration can be performed using a reference tissue; for instance, in breast magnetic resonance images, a correct segmentation of the pectoral muscle is needed to detect signs of malignancy. Atlas-based segmentation methods have been extensively evaluated in brain magnetic resonance imaging, with satisfactory results. In this project, carried out in collaboration with the Diagnostic Image Analysis Group of the Radboud University Nijmegen Medical Centre under the supervision of Dr. N. Karssemeijer, we present a first approach to an atlas-based method for segmenting the different tissues visible in T1 magnetic resonance images of the female breast. The atlas consists of 5 structures (fatty tissue, dense tissue, heart, lungs and pectoral muscle) and has been used within a Bayesian segmentation algorithm to delineate these structures. In addition, a comparison has been made between a global and a local registration method, used both in atlas construction and in the segmentation phase, with the former giving better results in terms of efficiency and accuracy. For the evaluation, the obtained segmentations were compared both visually and numerically with segmentations performed manually by the collaborating experts. For the numerical comparison, the Dice similarity coefficient was used (a measure taking values between 0 and 1, where 0 means no similarity and 1 maximum similarity), obtaining an overall mean of 0.8. This result confirms the validity of the presented method for the segmentation of breast magnetic resonance images.
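For completeness, the Dice similarity coefficient used for the numerical evaluation is, for an automatic segmentation A and a manual reference B:

```latex
\mathrm{DSC}(A,B) = \frac{2\,|A \cap B|}{|A| + |B|} \in [0,1],
```

so the reported mean of 0.8 indicates substantial overlap between the automatic and manual masks.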
Abstract:
This paper is concerned with the modeling and analysis of quantum dissipation phenomena in the Schrödinger picture. More precisely, we investigate in detail a dissipative, nonlinear Schrödinger equation accounting for quantum Fokker–Planck effects, and show how it is drastically reduced to a simpler logarithmic equation via a nonlinear gauge transformation in such a way that the physics underlying both problems remains unaltered. From a mathematical viewpoint, this allows for a more tractable analysis of the local wellposedness of the initial–boundary value problem. This simplification requires performing the polar (modulus–argument) decomposition of the wavefunction, which is rigorously attained (for the first time, to the best of our knowledge) under quite reasonable assumptions.
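The polar (modulus–argument) decomposition mentioned above is the usual Madelung-type factorization, written here in our notation:

```latex
\psi(x,t) = \sqrt{n(x,t)}\, e^{\,i S(x,t)/\hbar},
```

with n = |ψ|² the position density and S the real phase. A gauge transformation of this kind acts on the phase S only, so the modulus, and with it the position density, is preserved, which is the sense in which the underlying physics is left unaltered.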
Abstract:
Very large subsidence, with up to 20 km thick sediment layers, is observed in the East Barents Sea basin. Subsidence started in the early Paleozoic, accelerated in Permo-Triassic times, ended by the middle Cretaceous, and was followed by moderate uplift in Cenozoic times. The observed gravity signal suggests that the East Barents Sea is at present in isostatic balance and indicates that a mass excess is required in the lithosphere to produce the observed large subsidence. Several origins have been proposed for the mass excess. We use 1-D thermokinematic modeling and 2-D isostatic density models of continental lithosphere to evaluate these competing hypotheses. The crustal density in 2-D thermokinematic models resulting from pressure-, temperature-, and composition-dependent phase change models is computed along transects crossing the East Barents Sea. The results indicate the following. (1) Extension can only explain the observed subsidence provided that a 10 km thick serpentinized mantle lens is present beneath the basin center. We conclude that this is unlikely, given that this highly serpentinized layer would have to form below a sedimentary basin with more than 10 km of sediments and a crust at least 10 km thick. (2) Phase changes in a compositionally homogeneous crust do not provide enough mass excess to explain the present-day basin geometry. (3) Phase-change-induced densification of a preexisting lower crustal gabbroic body, interpreted as a mafic magmatic underplate, can explain the basin geometry and observed gravity anomalies. The following model is proposed for the formation of the East Barents Sea basin: (1) Devonian rifting and extension-related magmatism resulted in moderate thinning of the crust and a mafic underplate below the central basin area, explaining initial late Paleozoic subsidence. (2) East-west shortening during the Permian and Triassic resulted in densification of the previously emplaced mafic underplated body and enhanced subsidence dramatically, explaining the present-day deep basin geometry.
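The isostasy argument can be stated compactly; the relation below is the generic local (Airy-type) balance condition in our notation, not a formula quoted from the paper. Columns beneath the basin and beneath a reference area must carry equal mass down to a compensation depth z_c:

```latex
\int_{0}^{z_c} \rho_{\mathrm{basin}}(z)\, dz \;=\; \int_{0}^{z_c} \rho_{\mathrm{ref}}(z)\, dz,
```

so a column topped by up to 20 km of relatively low-density sediments can only be balanced, without a large gravity anomaly, if denser material (such as a densified gabbroic body) is present deeper in the lithospheric column.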
Abstract:
Toxicity of chemical pollutants in aquatic environments is often addressed by assays that measure reproductive inhibition of test microorganisms, such as algae or bacteria. Those tests, however, assess growth of populations as a whole via macroscopic methods such as culture turbidity or colony-forming units. Here we use flow cytometry to interrogate the fate of individual cells in low-density populations of the bacterium Pseudomonas fluorescens SV3 exposed or not, under oligotrophic conditions, to a number of common pollutants, some of which derive from oil contamination. Cells were stained at regular time intervals during the exposure assay with fluorescent dyes that detect membrane injury (i.e., a live-dead assay). Reduction of population growth rates was observed upon toxicant insult and depended on the type of toxicant. Modeling and cell staining indicate that the decrease in population growth rate is a combined effect of an increased number of injured cells, which may or may not multiply, and live cells dividing at normal growth rates. The oligotrophic assay concept presented here could be a useful complement to existing biomarker assays in compliance with new regulations on chemical effect studies or, more specifically, for judging recovery after exposure to fluctuating toxicant conditions.
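A minimal sketch of the kind of two-compartment interpretation described above, in our notation and under the simplifying assumption that injured cells stop dividing (the abstract allows that they may or may not multiply): live cells L divide at the normal rate μ and become injured at a toxicant-dependent rate k, while injured cells I accumulate.

```latex
\frac{dL}{dt} = (\mu - k)\,L, \qquad \frac{dI}{dt} = k\,L,
```

so the total population N = L + I still grows, but at an apparent rate μ L / (L + I) below μ, and the apparent growth rate falls as the injury rate k increases with toxicant dose.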