906 results for R15 - Econometric and Input Output Models
Abstract:
The aim of this study is to analyse the impact, in terms of CO2 emissions, of Catalonia's final demand in relation to its interregional trade links with the rest of Spain and the rest of the world. This involves analysing Catalonia's embodied-CO2 balance, which makes it possible to assess the Catalan economy's responsibility for these emissions. For this purpose, an environmentally extended Multi-Regional Input-Output (MRIO) model with vertically integrated sectors is built for this regional disaggregation. Incorporating the vertical-integration technique provides an alternative approach to the Net Balance and a more detailed analysis of the interregional links between productive sectors, focused on the ultimate responsibility of each sector's final demand in each region. To date, studies of the environmental impacts embodied in Spanish trade have focused mainly on the national level. However, interregional trade with the rest of Spain accounts, in monetary terms, for about half of Catalonia's external trade, and the different energy metabolisms of the two economies result in a substantial difference in the emission intensity of the production of goods and services. As a consequence, Catalonia shows a deficit in the estimated Net Balance with the rest of Spain despite running a sizeable monetary surplus. This underlines the importance of integrating the interregional level into studies of the environmental impacts embodied in trade and, consequently, into the design and formulation of national economic and environmental policy.
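The accounting at the core of an environmentally extended MRIO model can be sketched numerically. The block below is a minimal illustration only, not the study's data: the coefficient matrix, emission intensities, and final-demand vector are invented, and the two-region, two-sector disaggregation is an arbitrary toy layout.

```python
import numpy as np

# Hypothetical 2-region x 2-sector example (all numbers are illustrative).
# A: interregional technical-coefficient matrix; entry (i, j) gives inputs
# from region-sector i used per monetary unit of output of region-sector j.
A = np.array([
    [0.10, 0.05, 0.02, 0.01],
    [0.04, 0.12, 0.01, 0.03],
    [0.03, 0.02, 0.15, 0.06],
    [0.01, 0.04, 0.05, 0.11],
])
# f: direct CO2 emission intensities (emissions per monetary unit of output).
f = np.array([0.30, 0.10, 0.50, 0.20])
# y_cat: final demand of the first region for each region-sector's goods.
y_cat = np.array([100.0, 80.0, 20.0, 10.0])

# Leontief inverse: total output required per unit of final demand.
L = np.linalg.inv(np.eye(4) - A)

# Emissions embodied in the first region's final demand, by producing sector.
embodied = f * (L @ y_cat)
print(embodied.sum())
```

Because the Leontief inverse captures all upstream production, the embodied total always exceeds the direct emissions of the final-demand bundle itself; comparing such totals across trading partners is what yields the Net Balance discussed above.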
Abstract:
Study carried out during a stay at the Stanford University School of Medicine, Division of Radiation Oncology, United States, between 2010 and 2012. During the two years of my postdoctoral fellowship I worked on two different projects. First, continuing the group's previous studies, we wanted to investigate the cause of the differences in hypoxia levels we had observed in lung cancer models. Our hypothesis was that these differences were due to the functionality of the vasculature. We used two preclinical models: one in which tumours formed spontaneously in the lungs and another in which we injected the cells subcutaneously. Both dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and a Hoechst 33342 perfusion assay showed that the vasculature of the spontaneous tumours was far more functional than that of the subcutaneous tumours. From this study we can conclude that the differences in hypoxia levels between lung cancer tumour models may be due to variation in the formation and functionality of the vasculature. The choice of preclinical model is therefore essential, both for studies of hypoxia and angiogenesis and for therapies targeting these phenomena. The second project concerned radiotherapy and its possible role in promoting tumour self-seeding by circulating tumour cells (CTCs), an effect that has been described in some preclinical tumour models. For these studies we used a mouse breast cancer cell line, either stably labelled with the Photinus pyralis gene or unlabelled, and performed in vitro and in vivo experiments. Both showed that tumour irradiation promotes cell invasion and tumour self-seeding by CTCs.
This finding should be considered in the context of clinical radiotherapy in order to achieve the best treatment for patients with high CTC levels.
Abstract:
The recent wave of upheavals and revolts in Northern Africa and the Middle East goes back to an old question often raised by theories of collective action: does repression act as a negative or positive incentive for further mobilization? Through a review of the vast literature devoted to this question, this article aims to go beyond theoretical and methodological dead-ends. The article moves on to non-Western settings in order to better understand, via a macro-sociological and dynamic approach, the causal effects between mobilizations and repression. It pleads for a meso- and micro-level approach to this issue: an approach that puts analytical emphasis both on protest organizations and on individual activists' careers.
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression (an alternative to stepwise selection of predictors) and methods for identifying interactions through a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of GLMs and GAMs to ecological modeling.
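The GLM machinery these papers build on can be illustrated with a small sketch: a binomial GLM (logistic regression of species presence on one predictor) fitted by iteratively reweighted least squares, the standard GLM fitting algorithm. All data below are synthetic, and the "temperature" predictor is a made-up stand-in for an environmental covariate.

```python
import numpy as np

# Synthetic presence/absence data (illustrative values only).
rng = np.random.default_rng(0)
temp = rng.uniform(0, 30, 200)
true_beta = np.array([-4.0, 0.3])            # assumed true intercept and slope
X = np.column_stack([np.ones_like(temp), temp])
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)                   # observed presences (1) / absences (0)

# Iteratively reweighted least squares for the binomial family, logit link.
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))               # mean via inverse link
    W = mu * (1 - mu)                         # binomial variance weights
    z = eta + (y - mu) / W                    # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
print(beta)
```

The fitted coefficients recover the sign of the assumed relationship (presence more likely at higher temperature); a GAM would replace the linear term `X @ beta` with a smooth function of the predictor.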
Abstract:
Two different approaches currently prevail for predicting spatial patterns of species assemblages. The first approach (macroecological modelling, MEM) focuses directly on realised properties of species assemblages, whereas the second approach (stacked species distribution modelling, S-SDM) starts with constituent species to approximate assemblage properties. Here, we propose to unify the two approaches in a single 'spatially-explicit species assemblage modelling' (SESAM) framework. This framework uses relevant species source pool designations, macroecological factors, and ecological assembly rules to constrain predictions of the richness and composition of species assemblages obtained by stacking predictions of individual species distributions. We believe that such a framework could prove useful in many theoretical and applied disciplines of ecology and evolution, both for improving our basic understanding of species assembly across spatio-temporal scales and for anticipating expected consequences of local, regional or global environmental changes. In this paper, we propose such a framework and call for further developments and testing across a broad range of community types in a variety of environments.
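One simple way to realise the SESAM idea of constraining stacked SDM predictions with a macroecological richness estimate is a probability-ranking rule. The sketch below uses invented site names, species names, and probabilities; it illustrates the stacking-plus-constraint logic, not the authors' implementation.

```python
# probs[site][species]: hypothetical SDM occurrence probabilities per site.
probs = {
    "site_A": {"sp1": 0.9, "sp2": 0.7, "sp3": 0.2, "sp4": 0.1},
    "site_B": {"sp1": 0.3, "sp2": 0.8, "sp3": 0.6, "sp4": 0.5},
}

assemblages = {}
for site, p in probs.items():
    # Macroecological constraint: expected richness = sum of probabilities.
    richness = round(sum(p.values()))
    # Assembly rule: admit the highest-ranked species up to the richness cap.
    ranked = sorted(p, key=p.get, reverse=True)
    assemblages[site] = ranked[:richness]

print(assemblages)
```

Naively thresholding each species at 0.5 would predict three species at site_B; the richness constraint trims this to the two most probable, which is the kind of over-prediction correction the SESAM framework formalises.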
Abstract:
This paper studies the short-run correlation of inflation and money growth. We study whether a model of learning can do better than a model of rational expectations, focusing on countries with high inflation. We take the money process as an exogenous variable, estimated from the data through a switching-regime process. We find that the rational expectations model and the model of learning both offer very good explanations for the joint behavior of money and prices.
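The paper's ingredients can be caricatured in a toy simulation: money growth follows a two-state switching regime, and agents form inflation expectations by constant-gain learning. All parameter values below are invented for illustration and are not taken from the paper.

```python
import random

random.seed(1)

mu = {0: 0.02, 1: 0.15}        # hypothetical low/high money-growth regimes
p_stay = 0.9                   # probability of remaining in the current regime
gain = 0.1                     # constant learning gain

state, belief = 0, 0.02        # initial regime and inflation belief
beliefs = []
for _ in range(200):
    if random.random() > p_stay:
        state = 1 - state      # regime switch
    observed = mu[state] + random.gauss(0, 0.005)   # noisy money growth
    belief += gain * (observed - belief)            # constant-gain update
    beliefs.append(belief)
```

Under learning, the belief drifts gradually toward the prevailing regime's mean after each switch, whereas a rational-expectations agent who observes the regime would jump immediately; the short-run money-inflation correlation differs between the two cases.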
Abstract:
In recent years, both homing endonucleases (HEases) and zinc-finger nucleases (ZFNs) have been engineered and selected for the targeting of desired human loci for gene therapy. However, enzyme engineering is lengthy and expensive and the off-target effect of the manufactured endonucleases is difficult to predict. Moreover, enzymes selected to cleave a human DNA locus may not cleave the homologous locus in the genome of animal models because of sequence divergence, thus hampering attempts to assess the in vivo efficacy and safety of any engineered enzyme prior to its application in human trials. Here, we show that naturally occurring HEases can be found that cleave desirable human targets. Some of these enzymes are also shown to cleave the homologous sequence in the genome of animal models. In addition, the distribution of off-target effects may be more predictable for native HEases. Based on our experimental observations, we present the HomeBase algorithm, database and web server that allow a high-throughput computational search and assignment of HEases for the targeting of specific loci in the human and other genomes. We validate experimentally the predicted target specificity of candidate fungal, bacterial and archaeal HEases using cell free, yeast and archaeal assays.
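The kind of genome-wide search HomeBase automates can be caricatured as a mismatch-tolerant motif scan. The function name, recognition sequence, and genome fragment below are all invented; a real HEase search would weight cleavage-critical positions rather than counting raw mismatches.

```python
def find_near_matches(genome, site, max_mismatches=2):
    """Return (position, mismatches) for windows within max_mismatches of site."""
    hits = []
    for i in range(len(genome) - len(site) + 1):
        window = genome[i:i + len(site)]
        mismatches = sum(a != b for a, b in zip(window, site))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

genome = "ACGTTGCATTACGGATCCGTTACGATCC"   # invented genome fragment
site = "TACGGATC"                         # hypothetical HEase recognition site
hits = find_near_matches(genome, site)
print(hits)
```

Reporting near-matches as well as exact sites is what makes the off-target distribution assessable, and running the same scan against an animal model's genome shows whether the homologous locus is also cleavable.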
Abstract:
Summary: The effect of dietary protein and energy content on nitrogen utilization, water consumption, and urine excretion in pigs
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty on geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; then, the uncertainty is estimated from the exact responses that are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results.
The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
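The Local Error Model can be sketched in a few lines: every realization in a cluster is corrected by the error observed at that cluster's medoid, the only member for which the exact model was run. The clusters, responses, and numbers below are invented stand-ins for the DKM output and for the approximate and exact flow solvers.

```python
# Six realizations, pre-grouped into two clusters (hypothetical DKM output);
# the first member of each cluster is its medoid.
clusters = {
    "c1": {"medoid": 0, "members": [0, 1, 2]},
    "c2": {"medoid": 3, "members": [3, 4, 5]},
}
approx = [1.0, 1.2, 0.9, 2.0, 2.3, 1.8]   # cheap-model responses, all realizations
exact_medoid = {0: 1.3, 3: 2.5}           # expensive-model responses, medoids only

# Local error model: shift every member by its own medoid's error.
corrected = list(approx)
for c in clusters.values():
    error = exact_medoid[c["medoid"]] - approx[c["medoid"]]
    for m in c["members"]:
        corrected[m] = approx[m] + error
print(corrected)
```

A Global Error Model would instead fit one interpolant through all medoid errors and apply it to every realization regardless of cluster membership; both variants leave the expensive solver invoked only once per cluster.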
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claims severity between accident years. Strong changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model makes it possible to express the variability of the settlement speed and of the development of the claims severity as a function of two parameters of the above distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters have been estimated by the method of moments and maximum likelihood. The results were tested using simulated data and then real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance.
These data include different developments and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests that the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and few claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of either zero or the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
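The Chain-Ladder method against which the thesis benchmarks its models can be sketched on a toy run-off triangle (the figures below are invented): development factors are estimated from adjacent columns of cumulative payments and used to project each accident year to ultimate.

```python
# Run-off triangle of cumulative paid claims (rows = accident years,
# columns = development years); all figures are illustrative.
triangle = [
    [100, 180, 220, 240],
    [110, 200, 250],
    [120, 210],
    [130],
]

# Volume-weighted development factors from adjacent columns.
n_dev = len(triangle[0])
factors = []
for j in range(n_dev - 1):
    rows = [r for r in triangle if len(r) > j + 1]
    factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))

# Project each accident year to ultimate; reserve = ultimate - paid to date.
reserves = []
for r in triangle:
    ultimate = r[-1]
    for f in factors[len(r) - 1:]:
        ultimate *= f
    reserves.append(ultimate - r[-1])
print(reserves)
```

The method implicitly assumes past settlement speed and severity patterns persist, which is exactly the assumption the thesis's Dirichlet/Gamma indicator is designed to test.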
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.