906 results for R15 - Econometric and Input Output Models
Abstract:
Part of the local economic impact of a major sporting event comes from the associated temporary tourism expenditures. Typically demand-driven Input-Output (IO) methods are used to quantify the impacts of such expenditures. However, IO modelling has specific weaknesses when measuring temporary tourism impacts; particular problems lie in its treatment of factor supplies and its lack of dynamics. Recent work argues that Computable General Equilibrium (CGE) analysis is more appropriate and this has been widely applied. Neglected in this literature however is an understanding of the role that behavioural characteristics and factor supply assumptions play in determining the economic impact of tourist expenditures, particularly where expenditures are temporary (i.e. of limited duration) and anticipated (i.e. known in advance). This paper uses a CGE model for Scotland in which agents can have myopic- or forward-looking behaviours and shows how these alternative specifications affect the timing and scale of the economic impacts from anticipated and temporary tourism expenditure. The tourism shock analysed is of a scale expected for the Commonwealth Games to be held in Glasgow in 2014. The model shows how “pre-shock” and “legacy” effects – impacts before and after the shock – arise and their quantitative importance. Using the forward-looking model the paper calculates the optimal degree of pre-announcement.
Abstract:
In this paper we consider extensions of smooth transition autoregressive (STAR) models to situations where the threshold is a time-varying function of variables that affect the separation of regimes of the time series under consideration. Our specification is motivated by the observation that unusually high/low values for an economic variable may sometimes be best thought of in relative terms. State-dependent logistic STAR and contemporaneous-threshold STAR models are introduced and discussed. These models are also used to investigate the dynamics of U.S. short-term interest rates, where the threshold is allowed to be a function of past output growth and inflation.
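For readers unfamiliar with this model class, a generic two-regime logistic STAR specification with a time-varying threshold can be sketched as follows; the notation is illustrative and not necessarily the authors' exact parameterisation.

\[
\begin{aligned}
y_t &= \phi_0' x_t \,\bigl[1 - G(s_t;\gamma,c_t)\bigr] + \phi_1' x_t \, G(s_t;\gamma,c_t) + \varepsilon_t,\\
G(s_t;\gamma,c_t) &= \bigl[1 + \exp\{-\gamma\,(s_t - c_t)\}\bigr]^{-1},\\
c_t &= \alpha_0 + \alpha_1' z_{t-1},
\end{aligned}
\]

where x_t = (1, y_{t-1}, ..., y_{t-p})', s_t is the transition variable and the threshold c_t is a function of lagged state variables z_{t-1} (e.g., past output growth and inflation), so that an 'unusually high' value of s_t is defined relative to the current state of the economy rather than to a fixed constant.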
Abstract:
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for neuron networks can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critical the balance is between noise and the excitatory/inhibitory interactions encoded in the connectivity parameter.
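For reference, a standard formulation of the NNLIF Fokker-Planck-Kolmogorov equation for the density p(v,t) of neurons at membrane potential v is the following; the notation may differ slightly from the paper's.

\[
\partial_t p(v,t) + \partial_v\bigl[(-v + b\,N(t))\,p(v,t)\bigr] - a(N(t))\,\partial^2_{vv} p(v,t) = \delta(v - V_R)\,N(t), \qquad v \le V_F,
\]
\[
N(t) = -a(N(t))\,\partial_v p(V_F,t), \qquad p(V_F,t)=0, \qquad \int_{-\infty}^{V_F} p(v,t)\,dv = 1,
\]

where N(t) is the firing rate of the network, V_F and V_R are the firing and reset potentials, a(N) is the noise intensity, and the connectivity parameter b is positive for excitatory networks and negative for inhibitory ones; the blow-up result mentioned above concerns the case b > 0.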
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been solved through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
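In standard (unpartitioned) input-output notation the kind of aggregation discussed above can be sketched as follows; the paper works with partitioned SAM multiplier matrices, but the logic is the same and the notation here is illustrative. Let A be the matrix of average expenditure propensities, x = (I - A)^{-1} f the account totals generated by exogenous injections f, and S an n x k zero-one grouping matrix assigning each elementary account to one aggregate account. Then

\[
x^{*} = S'x, \qquad f^{*} = S'f, \qquad A^{*} = S' A \hat{x}\, S \,\bigl[\widehat{S'x}\bigr]^{-1},
\qquad \text{bias} = (I - A^{*})^{-1} f^{*} - S'(I - A)^{-1} f,
\]

and aggregation is consistent (zero bias for every injection vector f) if and only if S'A = A^{*}S'.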
Abstract:
Network airlines have been increasingly focusing their operations on hub airports through the exploitation of connecting traffic, allowing them to take advantage of economies of traffic density, which are unequivocal in the airline industry. Less attention has been devoted to airlines' decisions on point-to-point thin routes, which could be served using different aircraft technologies and different business models. This paper examines, both theoretically and empirically, the impact on airlines' networks of the two major innovations in the airline industry in the last two decades: the regional jet technology and the low-cost business model. We show that, under certain circumstances, direct services on point-to-point thin routes can be viable and thus airlines may be interested in diverting passengers away from the hub. Keywords: regional jet technology; low-cost business model; point-to-point network; hub-and-spoke network. JEL Classification Numbers: L13; L2; L93.
Abstract:
In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimension, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement an input-output mapping, but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is here encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
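A minimal sketch of the general idea, not the authors' implementation: in a margin-rescaled multiclass hinge loss, a label-loss matrix Delta derived from output (class) similarity penalises confusions between similar classes (e.g., similar land covers) less than confusions between dissimilar ones. The code below assumes a simple linear scorer; names and shapes are illustrative.

import numpy as np

def structured_hinge(W, x, y, Delta):
    # W: (n_classes, n_features) weights of a linear scorer
    # x: (n_features,) sample, y: true label index
    # Delta: (n_classes, n_classes) label loss with Delta[y, y] = 0,
    # e.g. Delta = 1 - class_similarity
    scores = W @ x                           # score of every candidate label
    margins = Delta[y] + scores - scores[y]  # loss-augmented margins
    y_hat = int(np.argmax(margins))          # most violating label
    return max(margins[y_hat], 0.0), y_hat

def sgd_step(W, x, y, Delta, lr=0.01, reg=1e-3):
    # One subgradient step on the L2-regularised structured hinge.
    loss, y_hat = structured_hinge(W, x, y, Delta)
    W *= (1.0 - lr * reg)
    if loss > 0.0 and y_hat != y:
        W[y_hat] -= lr * x                   # push down the violating label
        W[y] += lr * x                       # push up the true label
    return W

In the kernelised case the same loss can be plugged into an SVM dual, and the output similarity can equally be injected through a joint input-output kernel, which is the second route the abstract mentions.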
Abstract:
The main aim of this work is to define an environmental tax on products and services based on their carbon footprint. We examine the relevance of conventional life cycle analysis (LCA) and environmentally extended input-output analysis (EIO) as methodological tools to identify the emission intensities of products and services on which the tax is based. The short-term price effects of the tax and the policy implications of considering non-CO2 greenhouse gases are also analyzed. The results from the specific case study on pulp production show that the environmental tax rate based on the LCA approach (1.8%) is higher than under both EIO approaches (0.8% for the product approach and 1.4% for the industry approach), but they are comparable. Even though LCA is more product-specific and provides a more detailed analysis, EIO would be the more relevant approach for applying an economy-wide environmental tax. When the environmental tax considers non-CO2 GHG emissions instead of only CO2, sectors such as agriculture, mining of coal and extraction of peat, and food exhibit higher environmental tax rates and price effects. Therefore, it is worthwhile for policy makers to pay attention to the implications of taxing only CO2 rather than all GHG emissions in order for such a policy measure to be effective and meaningful. Keywords: Environmental tax; Life cycle analysis; Environmental input-output analysis.
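As a rough illustration of the EIO side of the calculation (toy numbers, not the paper's data), embodied emission intensities and the short-run cost-push price effect of an emissions tax can be computed from the Leontief model as follows.

import numpy as np

# A: technical coefficient matrix; e: direct GHG emissions per unit of output
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
e = np.array([0.30, 0.80, 0.15])        # tCO2e per monetary unit of output

L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse (I - A)^-1
m = e @ L                               # total (direct + indirect) intensities

# Cost-push price effect of a tax of t per tCO2e levied on direct emissions,
# fully passed forward through the input-output price model.
t = 0.02
dp = t * m                              # proportional price increase by sector
print(np.round(m, 3), np.round(dp, 4))

An LCA-based tax would replace m with process-level emission factors for the specific product, which is why the two approaches yield comparable but not identical tax rates.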
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of the internal effects. These internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal is also of interest in terms of policy implications. Indeed, the hybrid approach may provide useful information for the design of "second best" stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
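A compact sketch of the two measures being compared (illustrative, using a generic coefficient matrix A and final demand vector f rather than the paper's data):

import numpy as np

def classical_backward_multipliers(A):
    # CMM backward measures: column sums of the Leontief inverse.
    L = np.linalg.inv(np.eye(A.shape[0]) - A)
    return L.sum(axis=0)

def hypothetical_extraction(A, f, j):
    # HEM: total output lost when sector j's intermediate sales and
    # purchases are hypothetically removed from the economy.
    n = A.shape[0]
    x_full = np.linalg.solve(np.eye(n) - A, f)
    A_ext = A.copy()
    A_ext[j, :] = 0.0   # sector j sells no intermediate inputs
    A_ext[:, j] = 0.0   # ... and buys none
    x_ext = np.linalg.solve(np.eye(n) - A_ext, f)
    return x_full.sum() - x_ext.sum()

The CMM column sums include a sector's own within-block (internal) effects, whereas the extraction loss is driven by the external effects on the rest of the economy, which is exactly the distinction the hybrid proposal exploits.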
Abstract:
BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
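A minimal sketch of the kind of model comparison described, on simulated data standing in for the anthropometric variables (the study itself used WHO-MONICA survey data and four model families, with external validation across regions):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical predictors: waist-to-hip ratio, BMI, age, sex, smoking,
# hypertension; outcome y = 1 if the TC/HDL ratio exceeds a cutoff.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("CART tree", DecisionTreeClassifier(max_depth=4))]:
    rate = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {rate:.2f} correct classification rate")

External validation in the study corresponds to fitting on one region and scoring on the other, rather than cross-validation within a single sample.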
Abstract:
Machado-Joseph disease, or spinocerebellar ataxia type 3, the most common dominantly-inherited spinocerebellar ataxia, results from translation of the polyglutamine-expanded and aggregation-prone ataxin 3 protein. Clinical manifestations include cerebellar ataxia and pyramidal signs, and there is no therapy to delay disease progression. Beclin 1, an autophagy-related protein and essential gene for cell survival, is decreased in several neurodegenerative disorders. This study aimed at evaluating whether lentiviral-mediated beclin 1 overexpression would rescue motor and neuropathological impairments when administered to pre- and post-symptomatic lentiviral-based and transgenic mouse models of Machado-Joseph disease. Beclin 1 mediated significant improvements in motor coordination, balance and gait, with beclin 1-treated mice balancing for longer periods on the Rotarod and presenting longer and narrower footprints. Furthermore, in agreement with the improvements observed in motor function, beclin 1 overexpression prevented neuronal dysfunction and neurodegeneration, decreasing the formation of polyglutamine-expanded aggregates and preserving Purkinje cell arborization and immunoreactivity for neuronal markers. These data show that overexpression of beclin 1 in the mouse cerebellum is able to rescue motor deficits and hinder their progression when administered at pre- and post-symptomatic stages of the disease.
Abstract:
This study aims to investigate teacher-learner(s) verbal exchanges in two different instructional contexts: on the one hand, classes following a CLIL approach (Content and Language Integrated Learning), where non-linguistic content is learned through English, and on the other hand, 'traditional' English-as-a-foreign-language (EFL) classes, where English is both the object of study and the vehicle of communication. More specifically, the questions asked by the teacher, the learners' oral production and the teacher's feedback in focus-on-form episodes have been studied in the light of the main theories from the field of Second Language Acquisition (SLA) in order to demonstrate their role in the learning of English. The data corpus comes from audio and video recordings of 7 CLIL sessions and 11 EFL sessions in two state primary schools in Catalonia. In each school, the same teacher is in charge of both types of instruction with the same group of learners (10-11 years old), which makes it possible to rule out individual variables such as learner aptitude or teacher style. The results show a number of discursive similarities between CLIL and EFL, given that both approaches take place in the classroom context, which has well-defined characteristics. As research in this field suggests, CLIL instruction offers a set of ideal conditions for developing English proficiency further than 'traditional' English classes do. Nevertheless, this study seems to indicate that the potential of CLIL to provide rich exposure to English and meaningful oral production is not being fully exploited. In this respect, the results of this study may contribute to the training of future CLIL teachers if the aim is to achieve complementarity between the two contexts, with the ultimate goal of improving levels of English proficiency.
Abstract:
Quantitative or algorithmic trading is the automatization of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at minutely frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigour but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration of the strategies' parameters to adapt them to the given market conditions. We find that the parameters of the technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
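As an illustration of the mean-reversion building block discussed above (the thesis itself used MATLAB; this is a sketch in Python on synthetic data, not the thesis code), an Ornstein-Uhlenbeck spread can be calibrated through its AR(1) discretisation and turned into a simple band signal:

import numpy as np

rng = np.random.default_rng(1)
dt, theta, mu, sigma = 1 / 390, 5.0, 0.0, 0.3        # minutely bars (assumed)
s = np.zeros(2000)
for t in range(1, s.size):                           # simulate an OU spread
    s[t] = s[t - 1] + theta * (mu - s[t - 1]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Calibration: the exact discretisation of an OU process is AR(1),
# s_t = a + b * s_{t-1} + eps, so OLS recovers the parameters.
b, a = np.polyfit(s[:-1], s[1:], 1)
theta_hat = -np.log(b) / dt
mu_hat = a / (1.0 - b)
resid = s[1:] - (a + b * s[:-1])
sigma_hat = resid.std() * np.sqrt(2.0 * theta_hat / (1.0 - b**2))

# Band signal: short the spread when it is far above its long-run mean,
# long when far below; the band is the stationary standard deviation.
band = sigma_hat / np.sqrt(2.0 * theta_hat)
signal = np.where(s > mu_hat + band, -1, np.where(s < mu_hat - band, 1, 0))
print(theta_hat, mu_hat, sigma_hat, signal[-5:])

Re-running the calibration on rolling windows is what the text refers to as tracking the current market conditions at high frequency.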
Abstract:
This paper identifies the key sectors in greenhouse gas emissions of the Uruguayan economy through input-output analysis. This makes it possible to determine precisely the role played by the different productive sectors and their relationship with other sectors in the link between the Uruguayan productive structure and atmospheric pollution. In order to guide policy design for GHG reduction, we decompose each sector's liability between the pollution generated through its own production processes and the pollution indirectly generated in the production processes of other sectors. The results show that all the key polluting sectors for the different contaminants considered are relevant because of their own emissions, except for the Motor vehicles and oil retail trade sector, which is relevant in CO2 emissions because of its pure linkages, both backward and forward. Finally, the best policy channels for controlling and reducing GHG emissions are identified and compared with the lines of action of the National Climate Change Response Plan (NCCRP).
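The own-versus-indirect decomposition described above can be written, in generic input-output notation (illustrative, not necessarily the paper's exact formulation), as

\[
E_j = \sum_i e_i\,\ell_{ij}\,f_j
    = \underbrace{e_j\,\ell_{jj}\,f_j}_{\text{own production process}}
    + \underbrace{\sum_{i\neq j} e_i\,\ell_{ij}\,f_j}_{\text{generated in other sectors}},
\]

where e_i is sector i's direct emission coefficient, \ell_{ij} is an element of the Leontief inverse L = (I - A)^{-1}, and f_j is the final demand for sector j; a key polluting sector is one with a large E_j, and the relative size of the two terms indicates whether mitigation policy should target the sector's own processes or its supply chain.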
Abstract:
Population viability analyses (PVA) are increasingly used in metapopulation conservation plans. Two major types of models are commonly used to assess vulnerability and to rank management options: population-based stochastic simulation models (PSM, such as RAMAS or VORTEX) and stochastic patch occupancy models (SPOM). While the first set of models relies on explicit intrapatch dynamics and interpatch dispersal to predict population levels in space and time, the latter is based on spatially explicit metapopulation theory, where the probability of patch occupancy is predicted given the patch area and isolation (patch topology). We applied both approaches to a European tree frog (Hyla arborea) metapopulation in western Switzerland in order to evaluate the concordance of the two models and their applications to conservation. Although some quantitative discrepancies appeared in terms of network occupancy and equilibrium population size, the two approaches were largely concordant regarding the ranking of patch values and sensitivities to parameters, which is encouraging given the differences in the underlying paradigms and input data.
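For concreteness, a minimal Hanski-type stochastic patch occupancy model of the kind the abstract refers to can be sketched as follows; parameters and patch geometry are illustrative, not calibrated to the Hyla arborea network studied here.

import numpy as np

rng = np.random.default_rng(2)
n = 20
area = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # patch areas
xy = rng.uniform(0, 10, size=(n, 2))                # patch coordinates
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
occ = rng.random(n) < 0.5                           # initial occupancy

alpha, y, e, x = 1.0, 1.0, 0.2, 0.5                 # dispersal, colonisation
                                                    # and extinction parameters
for _ in range(100):                                # yearly time steps
    # Connectivity: occupied patches contribute with distance decay.
    S = (np.exp(-alpha * d) * (area * occ)).sum(axis=1) - area * occ
    C = S**2 / (S**2 + y**2)                        # colonisation probability
    E = np.minimum(1.0, e / area**x)                # extinction probability
    occ = np.where(occ, rng.random(n) > E, rng.random(n) < C)

print(occ.mean())                                   # equilibrium occupancy

Only patch area and isolation enter the model, which is the key contrast with population-based simulators such as RAMAS or VORTEX that track within-patch dynamics explicitly.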