1000 results for multiplier model
Abstract:
In this paper we analyse the economic impact of a new museum (the Gaudí Centre) on the local economy of Reus, a city in the province of Tarragona (southern Catalonia). We use a Keynesian income multiplier model to evaluate the effects of this new cultural venue on local income. In our calculation of the economic impact we distinguish between two phases: the construction phase and the exploitation phase. Our results show the important income impact of this cultural investment on the local economy.
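As a point of reference (the abstract does not reproduce the model, so the notation below is the generic textbook form rather than the paper's own specification), a Keynesian local income multiplier relates the change in local income to an initial injection of expenditure:

\Delta Y = k \, \Delta E, \qquad k = \frac{1}{1 - c(1 - t) + m},

where \Delta E is the injection (construction spending or visitor expenditure in each phase), c is the marginal propensity to consume, t the tax rate, and m the marginal propensity to spend outside the local economy.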
Abstract:
For accounting firms providing traditional financial administration services, pricing their own services has been one of the most challenging administrative tasks. Pricing practices in the industry vary, and it has also been difficult for clients to understand the principles behind price formation. Strategically misguided pricing may also have led to unprofitable business. The aim of this study is to develop, for an accounting firm providing financial administration services, an understandable and efficiently functioning pricing model for pricing bookkeeping services. The pricing model is intended to serve as a tool for cost accounting, service pricing, resource management, and budgeting in the production of bookkeeping services and the firm's other service products. The study was carried out by examining the bookkeeping invoicing of a case company over one financial year. By analysing the data, a multiplier model was created that can be used to price bookkeeping services. When analysing the data, strong dispersion was observed in the multipliers present in the data. The invoicing data for the financial year also revealed a clear factor affecting profitability, whose impact in euros could be estimated with the multiplier model.
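Since the thesis does not publish the model itself, the following Python sketch only illustrates the general idea of pricing bookkeeping work with a multiplier derived from historical billing data; the function names and figures are hypothetical.

```python
# Illustrative sketch only: the thesis does not publish its model, so the
# structure and numbers below are hypothetical, not the firm's actual rules.
from statistics import median

def estimate_multiplier(billed_amounts, booking_costs):
    """Derive a pricing multiplier from historical (billed, cost) pairs."""
    ratios = [b / c for b, c in zip(billed_amounts, booking_costs) if c > 0]
    return median(ratios)  # median is robust to the strong dispersion noted in the data

def price_service(hours, hourly_cost, multiplier):
    """Price a bookkeeping engagement as internal cost times the derived multiplier."""
    return hours * hourly_cost * multiplier

# Example: historical invoices (EUR) and the corresponding internal costs (EUR)
m = estimate_multiplier([1200, 950, 1800], [800, 700, 1000])
print(price_service(hours=10, hourly_cost=45, multiplier=m))
```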
Abstract:
In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year (1990–2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates than either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other.
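As an illustration of the two approaches being compared (the study's actual Value Line variable definitions, discount rates, and industry groupings are not reproduced here, and the names below are hypothetical), a minimal Python sketch:

```python
# Illustrative sketch of the valuation approaches compared in the paper.

def multiplier_value(forecast_eps, peer_prices, peer_eps):
    """Forward price-to-forecasted-earnings multiple applied to the firm's forecast EPS."""
    industry_multiple = sum(peer_prices) / sum(peer_eps)
    return industry_multiple * forecast_eps

def direct_value(forecast_payoffs, terminal_value, r):
    """Direct (present-value) valuation of forecast payoffs at discount rate r."""
    pv = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(forecast_payoffs))
    return pv + terminal_value / (1 + r) ** len(forecast_payoffs)

def hybrid_value(v_direct, v_multiple, w=0.5):
    """Simple hybrid: a weighted combination of the two intrinsic value estimates."""
    return w * v_direct + (1 - w) * v_multiple

# Hypothetical example values
print(hybrid_value(direct_value([2.0, 2.2, 2.4], 60.0, 0.09),
                   multiplier_value(2.5, [30, 45, 25], [2.1, 3.0, 1.8])))
```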
Abstract:
This chapter explores the trade-off between competing objectives of employment creation and climate policy commitments in Irish agriculture. A social accounting matrix (SAM) multiplier model is linked with a partial equilibrium agricultural sector model to simulate the impact of a number of GHG emission reduction scenarios, assuming these are achieved through a constraint on beef production. Limiting the size of the beef sector helps to reduce GHG emissions with a very limited impact on the value of agricultural income at the farm level. However, the SAM multiplier analysis shows that there would be significant employment losses in the wider economy. From a policy perspective, a pragmatic approach to GHG emissions reductions in the agriculture sector, which balances opportunities for economic growth in the sector with opportunities to reduce associated GHG emissions, may be required.
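For reference, the accounting logic behind a SAM multiplier exercise of this kind (standard notation, not necessarily the chapter's own symbols) is

x = (I - A)^{-1} f = M f,

where A collects the average expenditure propensities of the endogenous SAM accounts, f is the vector of exogenous injections (here, the shock transmitted through the constrained beef sector), and M is the accounting multiplier matrix; the induced changes in activity output are then translated into employment effects through employment-to-output coefficients.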
Abstract:
Paper delivered at the Western Regional Science Association Annual Conference, Sedona, Arizona, February 2010.
Abstract:
Multiplier analysis based upon the information contained in Leontief's inverse is undoubtedly part of the core of the input-output methodology, and numerous applications and extensions have been developed that exploit its informational content. Nonetheless, there are some implicit theoretical assumptions whose implications have perhaps not been fully assessed. This is the case of the 'excess capacity' assumption. Because of this assumption, resources are available as needed to adjust production to new equilibrium states. In real-world applications, however, new resources are scarce and costly. Supply constraints kick in, and resource allocation needs to take them into account to properly assess the effect of government policies. Using a closed general equilibrium model that incorporates supply constraints, we perform some simple numerical exercises and derive a 'constrained' multiplier matrix that can be compared with the standard 'unrestricted' multiplier matrix. Results show that the effectiveness of expenditure policies hinges critically on whether or not supply constraints are considered.
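A minimal Python sketch of the standard 'unrestricted' multiplier matrix obtained from Leontief's inverse is given below; the paper's 'constrained' counterpart comes from a general equilibrium model with supply constraints and is not reproduced, and the coefficients used are made up for illustration.

```python
# Standard multiplier matrix from Leontief's inverse (illustrative numbers only).
import numpy as np

A = np.array([[0.20, 0.10],       # technical coefficients a_ij: input from i per unit output of j
              [0.30, 0.25]])
L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse (I - A)^-1

f = np.array([100.0, 50.0])       # exogenous final demand injection
x = L @ f                         # gross output needed to meet f, assuming excess capacity

output_multipliers = L.sum(axis=0)  # column sums: total output per unit of final demand in each sector
print(L, x, output_multipliers)
```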
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of internal effects. These internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison, we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal also has additional interest in terms of policy implications. Indeed, the hybrid approach may provide useful information for the design of "second best" stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
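As a point of reference for the hypothetical extraction logic (in its simplest, complete-extraction form; the paper's own hybrid indices are not reproduced here), the importance of sector j is measured by the output loss when its transactions are removed:

T_j = \iota' (I - A)^{-1} f \; - \; \iota' (I - A^{(-j)})^{-1} f^{(-j)},

where \iota is a summation vector, A^{(-j)} is the coefficient matrix with row and column j set to zero, and f^{(-j)} is final demand with element j removed.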
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (number of model parameters) remains a major concern in relation to overfitting and, hence, transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time, using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6,000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
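The following Python sketch only illustrates the AICc-based complexity selection that the study applies across its 110 Maxent settings; it does not call Maxent itself, and the candidate models and sample size are hypothetical.

```python
# AICc-based selection among candidate models of different complexity
# (log-likelihoods and parameter counts below are invented placeholders).
import math

def aicc(log_lik, k, n):
    """Corrected Akaike Information Criterion for k parameters and n occurrence records."""
    if n - k - 1 <= 0:
        return math.inf                      # AICc is undefined when n <= k + 1
    return 2 * k - 2 * log_lik + (2 * k * (k + 1)) / (n - k - 1)

def select_model(candidates, n):
    """Return the candidate (label, log_lik, k) with the lowest AICc."""
    return min(candidates, key=lambda c: aicc(c[1], c[2], n))

models = [("simple", -520.0, 8), ("intermediate", -498.0, 15), ("complex", -495.0, 40)]
print(select_model(models, n=120))
```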
Abstract:
The European Organization for Nuclear Research (CERN) operates the largest particle collider in the world, the Large Hadron Collider (LHC), which will undergo a maintenance break sometime in 2017 or 2018. During the break, the particle detectors that operate around the collider will be serviced and upgraded. Following the improvement in the collider's performance, the requirements for the detector electronics will be more demanding. In particular, the high level of radiation during collider operation sets requirements for the electronics that are uncommon in commercial electronics. Electronics built to function in this challenging environment have been designed at CERN. In order to meet the future challenges of data transmission, a GigaBit Transceiver data transmission module and an E-Link data bus have been developed, and the next generation of readout electronics is designed to benefit from these technologies. However, the current readout chips are not compatible with them. As a result, alongside new Gas Electron Multiplier (GEM) detectors and other technology, a new compatible chip is being developed for the GEMs of the Compact Muon Solenoid (CMS) project. The objective of this thesis was to study a data transmission interface located on the readout chip between the E-Link bus and the control logic of the chip; the function of this module is to handle data transmission between the chip and the E-Link. In the study, a model of the interface was implemented in the Verilog hardware description language and simulated using chip design software from Cadence. State machines and operating principles, together with alternative implementation options, are introduced in the description of the E-Link interface design procedure. The functionality of the designed logic is demonstrated in simulation results, in which the implemented model is shown to be suitable for its task. Finally, suggestions for improving the design are presented.
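As a purely conceptual illustration of the kind of state-machine logic such a receive interface involves (the thesis design is in Verilog and is not reproduced here; the states and field widths below are hypothetical), a minimal Python sketch:

```python
# Conceptual serial-receive finite-state machine; not the thesis's E-Link design.
IDLE, HEADER, PAYLOAD = "IDLE", "HEADER", "PAYLOAD"

def receive(bits, header_len=4, payload_len=8):
    """Collect frames of header_len + payload_len bits from a serial bit stream."""
    state, buf, frames = IDLE, [], []
    for bit in bits:
        if state == IDLE:
            if bit == 1:                      # a start bit moves the FSM out of idle
                state, buf = HEADER, []
        elif state == HEADER:
            buf.append(bit)
            if len(buf) == header_len:
                state = PAYLOAD
        elif state == PAYLOAD:
            buf.append(bit)
            if len(buf) == header_len + payload_len:
                frames.append(buf)            # complete frame captured
                state, buf = IDLE, []
    return frames
```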
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will have a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance work and an opportunity to install new detectors. After the shutdown, the LHC will run at higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown, these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are now being developed at CERN, along with a readout ASIC chip for the detector. In this thesis, a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was verified with simulations, which are also presented in the thesis.
Abstract:
In this paper, a fuzzy Markov random field (FMRF) model is used to segment land objects into tree, grass, building, and road regions by fusing remotely sensed LIDAR data with co-registered color bands, i.e. a scanned aerial color (RGB) photo and a near-infrared (NIR) photo. An FMRF model is defined as a Markov random field (MRF) model in a fuzzy domain. Three optimization algorithms for the FMRF model, i.e. Lagrange multiplier (LM), iterated conditional modes (ICM), and simulated annealing (SA), are compared with respect to computational cost and segmentation accuracy. The results show that the FMRF model-based ICM algorithm balances computational cost and segmentation accuracy in land-cover segmentation from LIDAR data and co-registered bands.
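A minimal Python sketch of the iterated conditional modes idea, assuming a generic Potts smoothness term; the paper's fuzzy membership formulation and its LIDAR/RGB/NIR data terms are not reproduced, and unary_cost is a placeholder.

```python
# Generic ICM labeling sketch (Potts prior); not the paper's FMRF formulation.
import numpy as np

def icm(unary_cost, init_labels, beta=1.0, n_iter=5):
    """unary_cost: (H, W, K) data costs per label; init_labels: (H, W) initial labeling."""
    labels = init_labels.copy()
    H, W, K = unary_cost.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(K):
                    e = unary_cost[i, j, k]
                    # Potts smoothness term: penalize neighbours carrying a different label
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k          # greedily take the locally optimal label
    return labels
```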
Abstract:
We develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noisy component is no longer restricted to be zero. The resultant measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resultant seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
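For reference (the essays' exact auxiliary regression is not given in the abstract), a Lagrange multiplier test of this kind is typically computed as

LM = T R^2 \;\xrightarrow{d}\; \chi^2_q \quad \text{under } H_0,

where R^2 comes from regressing the squared filtered residuals on the q regressors that capture the stochastic seasonal variance component, and T is the number of intraday observations.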
Abstract:
Model misspecification affects the classical test statistics used to assess the fit of Item Response Theory (IRT) models. Robust tests, such as the Generalized Lagrange Multiplier and Hausman tests, have been derived under model misspecification, but their use has not been widely explored in the IRT framework. In the first part of the thesis, we introduce the Generalized Lagrange Multiplier test to detect differential item functioning in IRT models for binary data under model misspecification. By means of a simulation study and a real data analysis, we compare its performance with the classical Lagrange Multiplier test, computed using the Hessian and the cross-product matrix, and the Generalized Jackknife Score test. The power of these tests is computed empirically and asymptotically. The misspecifications considered are local dependence among items and a non-normal distribution of the latent variable. The results highlight that, under mild model misspecification, all tests perform well, while under strong model misspecification their performance deteriorates. None of the tests considered shows overall superior performance to the others. In the second part of the thesis, we extend the Generalized Hausman test to detect non-normality of the latent variable distribution. To build the test, we consider a semi-nonparametric IRT model, which assumes a more flexible latent variable distribution. By means of a simulation study and two real applications, we compare the performance of the Generalized Hausman test with the M2 limited-information goodness-of-fit test and the Likelihood-Ratio test. Additionally, the information criteria are computed. The Generalized Hausman test performs better than the Likelihood-Ratio test in terms of Type I error rates and better than the M2 test in terms of power. The performance of the Generalized Hausman test and the information criteria deteriorates when the sample size is small and there are few items.
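In its classical form (the generalized version studied in the thesis replaces the covariance term with a robust estimate), the Hausman statistic compares two estimators of the same parameter vector, one efficient under the null and one that remains consistent under misspecification:

H = (\hat{\theta}_1 - \hat{\theta}_0)' \left[ \widehat{V}(\hat{\theta}_1) - \widehat{V}(\hat{\theta}_0) \right]^{-1} (\hat{\theta}_1 - \hat{\theta}_0) \;\xrightarrow{d}\; \chi^2_k \quad \text{under } H_0,

where \hat{\theta}_0 is the efficient estimator under the null (here, the parametric IRT model with a normal latent variable), \hat{\theta}_1 the robust estimator (here, from the semi-nonparametric IRT model), and k the number of compared parameters.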
Abstract:
Understanding the molecular mechanisms of oral carcinogenesis will yield important advances in diagnostics, prognostics, effective treatment, and outcome of oral cancer. Hence, in this study we have investigated the proteomic and peptidomic profiles by combining an orthotopic murine model of oral squamous cell carcinoma (OSCC), mass spectrometry-based proteomics and biological network analysis. Our results indicated the up-regulation of proteins involved in actin cytoskeleton organization and cell-cell junction assembly events and their expression was validated in human OSCC tissues. In addition, the functional relevance of talin-1 in OSCC adhesion, migration and invasion was demonstrated. Taken together, this study identified specific processes deregulated in oral cancer and provided novel refined OSCC-targeting molecules.