897 results for Election forecasting
Abstract:
Purpose: The purpose of the study is to review recent studies published from 2007 to 2015 on tourism and hotel demand modeling and forecasting, with a view to identifying the emerging topics and methods studied and to pointing out future research directions in the field.
Design/methodology/approach: Articles on tourism and hotel demand modeling and forecasting published in both Science Citation Index (SCI) and Social Science Citation Index (SSCI) journals were identified and analyzed.
Findings: This review found that fewer studies focus on hotel demand than on tourism demand. It is also observed that more and more studies have moved away from aggregate tourism demand analysis, while disaggregate markets and niche products have attracted increasing attention. Some studies have gone beyond neoclassical economic theory to seek additional explanations of the dynamics of tourism and hotel demand, such as environmental factors, tourist online behavior and consumer confidence indicators, among others. More sophisticated techniques such as nonlinear smooth transition regression, mixed-frequency modeling and nonparametric singular spectrum analysis have also been introduced to this research area.
Research limitations/implications: The main limitation of this review is that the articles included cover only the English-language literature; future reviews of this kind should also include articles published in other languages. The review provides a useful guide for researchers interested in future research on tourism and hotel demand modeling and forecasting.
Practical implications: This review provides suggestions and recommendations for improving the efficiency of tourism and hospitality management practices.
Originality/value: The value of this review is that it identifies the current trends in tourism and hotel demand modeling and forecasting research and points out future research directions.
Abstract:
Gender, Power and Political Speech explores the influence of gender on political speech by analyzing the performances of three female party leaders who took part in televised debates during the 2015 UK General Election campaign. The analysis considers similarities and differences between the women and their male colleagues, as well as between the women themselves; it also discusses the way gender - and its relationship to language - was taken up as an issue in media coverage of the campaign.
Abstract:
The 2016 Austrian presidential election saw a run-off between the Green party candidate Alexander Van der Bellen and the Freedom Party of Austria's (FPÖ) far-right candidate Norbert Hofer. This paper asks: how did Hofer's voters express their support on Facebook? It presents the results of a qualitative ideology analysis of 6755 comments about the presidential election posted on the Facebook pages of FPÖ leader Heinz-Christian Strache and FPÖ candidate Hofer. The results reveal insights into the contemporary political role of the online leadership ideology, online nationalism, new racism online, the friend/enemy scheme online, and online militancy. Right-wing extremism 2.0 is a complex problem that stands in the context of contemporary crises and demagoguery.
Abstract:
Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Because wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. This new method is shown to be capable of reducing computational complexity and increasing prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. Furthermore, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that from a single wind farm to show the effectiveness of the proposed method.
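As a point of reference for the approach described above, here is a minimal Python sketch of Gaussian Process regression applied to one-step-ahead forecasting of a lagged series. It uses a standard scikit-learn regressor on simulated data; it is not the variant GP or the TLBO training procedure proposed in the study.

```python
# Illustrative sketch only: baseline GP regression on lagged values of a
# simulated wind-power series (standard GP training is O(n^3), which is the
# computational burden the abstract refers to).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
power = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)  # stand-in series

lags = 3
X = np.column_stack([power[i:len(power) - lags + i] for i in range(lags)])  # lagged inputs
y = power[lags:]                                                            # next value

split = 400
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:split], y[:split])

mean, std = gpr.predict(X[split:], return_std=True)   # point forecast + uncertainty band
print("RMSE:", np.sqrt(np.mean((mean - y[split:]) ** 2)))
```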
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Although much is available in research journals, PPC practice lags behind and draws little on published research. PPC practices in Small and Medium Enterprises (SMEs) lag behind for many reasons, which need to be explored. This research work deals with the effect of identified variables such as the forecasting, planning and control methods adopted, demographics of the key person, standardization practices followed, and the effect of training, learning and IT usage on firm performance. A model and framework have been developed based on the literature. The model was tested empirically using data collected through a questionnaire administered to selected respondents from SMEs in India; the final data set included 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formed and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model, and a confirmatory factor analysis was used to study the structural relationship between firm performance and the identified variables.
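For illustration, a minimal Python sketch of the pipeline described above: factor-based reduction of questionnaire items followed by a logistic regression separating high- and low-performing firms. The file name, item columns and the high_performer label are hypothetical placeholders, not the study's actual questionnaire fields.

```python
# Hedged sketch: factor analysis for data reduction, then logistic regression
# classification of firm performance. All names are illustrative placeholders.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("sme_survey.csv")                       # hypothetical survey data
items = [c for c in df.columns if c.startswith("item_")]  # Likert-scale items

# Reduce the questionnaire items to a few latent factors, then classify firms.
factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(df[items])
X_train, X_test, y_train, y_test = train_test_split(
    factors, df["high_performer"], test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```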
Abstract:
The objective of the evaluation of the weather forecasting services used by the Iowa Department of Transportation is to ascertain the accuracy of the forecasts given to maintenance personnel, to determine whether the forecasts are useful in the decision-making process, and to determine whether the forecasts have potential for improving the level of service. The Iowa Department of Transportation has estimated the average cost of fighting a winter storm at about $60,000 to $70,000 per hour. This final report describes the collection of weather data and information associated with the weather forecasting services provided to the Iowa Department of Transportation and its maintenance activities, and determines their impact on winter maintenance decision-making.
Abstract:
The meteorological and chemical transport model WRF-Chem was implemented to forecast PM10 concentrations over Poland. WRF-Chem version 3.5 was configured with three one-way nested domains using the GFS meteorological data and the TNO MACC II emissions. The 48-hour forecasts were run for each day of the winter and summer periods of 2014, and model performance for winter decreases only slightly with forecast lead time. The model in general captures the variability in observed PM10 concentrations for most of the stations. However, for some locations and specific episodes, the model performance is poor and the results cannot yet be used by official authorities. We argue that higher-resolution, sector-based emission data will be helpful for this analysis, in connection with a focus on planetary boundary layer processes in WRF-Chem and their impact on the initial distribution of emissions in both time and space.
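As a side note, the station-level evaluation described above typically reduces to a few standard verification statistics; a minimal sketch with invented numbers (not WRF-Chem output):

```python
# Hedged sketch of common forecast verification metrics for PM10; the arrays
# are made-up placeholders, not observations or model output from the study.
import numpy as np

obs = np.array([42.0, 55.0, 61.0, 38.0, 70.0])    # hypothetical observed PM10 (ug/m3)
fcst = np.array([36.0, 50.0, 68.0, 41.0, 58.0])   # hypothetical 48 h forecast (ug/m3)

bias = np.mean(fcst - obs)                          # mean bias
rmse = np.sqrt(np.mean((fcst - obs) ** 2))          # root-mean-square error
r = np.corrcoef(fcst, obs)[0, 1]                    # correlation: how well variability is captured
print(f"bias={bias:.1f}  rmse={rmse:.1f}  r={r:.2f}")
```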
Abstract:
Yield management helps hotels more profitably manage the capacity of their rooms. Hotels tend to have two types of business: transient and group. Yield management research and systems have been designed for transient business in which the group forecast is taken as a given. In this research, forecast data from approximately 90 hotels of a large North American hotel chain were used to determine the accuracy of group forecasts and to identify factors associated with accurate forecasts. Forecasts showed a positive bias and had a mean absolute percentage error (MAPE) of 40% at two months before arrival; 30% at one month before arrival; and 10-15% on the day of arrival. Larger hotels, hotels with a higher dependence on group business, and hotels that updated their forecasts frequently during the month before arrival had more accurate forecasts.
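For reference, the accuracy measures quoted above can be computed as follows; the figures are invented for illustration, not the hotel chain's data.

```python
# Sketch of MAPE and mean bias for group-business forecasts, using made-up numbers.
import numpy as np

forecast = np.array([120, 80, 150, 60])   # forecast group room-nights per group block
actual = np.array([100, 75, 110, 55])     # realized group room-nights

mape = np.mean(np.abs(forecast - actual) / actual) * 100
bias = np.mean(forecast - actual)          # positive => forecasts tend to run high
print(f"MAPE = {mape:.1f}%, mean bias = {bias:.1f} room-nights")
```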
Abstract:
This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies.

In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.

In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set.

Chapter 3 is a joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics.

Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
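As background for the realized measures these chapters build on, a minimal Python sketch of realized variance and realized correlation computed from simulated intraday returns; it is not any of the estimators proposed in the thesis (FloGARCH, realized kernels, disentangled estimators, Realized Beta GARCH).

```python
# Hedged sketch: plain realized variance/covariance from simulated 5-minute
# returns for two assets; real data would require cleaning, synchronization
# and noise-robust estimators, which is what the dissertation addresses.
import numpy as np

rng = np.random.default_rng(1)
n_intraday = 78                                    # e.g. 5-minute returns in one trading day
returns = rng.multivariate_normal(
    mean=[0.0, 0.0],
    cov=[[1e-6, 4e-7], [4e-7, 2e-6]],
    size=n_intraday)                               # two assets, columns = assets

realized_var = np.sum(returns ** 2, axis=0)        # per-asset realized variance for the day
realized_cov = returns.T @ returns                 # realized covariance matrix
realized_corr = realized_cov[0, 1] / np.sqrt(realized_cov[0, 0] * realized_cov[1, 1])
print("Realized variances:", realized_var, "realized correlation:", round(realized_corr, 3))
```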
Abstract:
The strategy in the mobile networks market (the focus of this work) is based on consolidating the installed infrastructure and optimizing existing resources. The increasing competitiveness and aggressiveness of this market requires mobile operators to continuously maintain and update their networks in order to minimize failures and provide the best experience for their subscribers. In this context, this dissertation presents a study aiming to assist mobile operators in improving future network modifications. In overview, the dissertation compares several forecasting methods (mostly based on time series analysis) capable of supporting mobile operators in their network planning. Moreover, it presents several network indicators related to the most common bottlenecks.
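For illustration, a minimal Python sketch of the kind of comparison described: fitting two simple time-series models to a network indicator and comparing one-step errors on a hold-out window. The KPI series is simulated and the model choices are illustrative, not the specific methods evaluated in the dissertation.

```python
# Hedged sketch: compare two baseline forecasting models on a simulated
# network KPI (e.g. traffic load) using mean absolute error on a hold-out set.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
kpi = 50 + np.cumsum(rng.normal(0, 1, 200))        # stand-in traffic indicator
train, test = kpi[:180], kpi[180:]

hw = ExponentialSmoothing(train, trend="add").fit()
arima = ARIMA(train, order=(1, 1, 1)).fit()

for name, model in [("Holt", hw), ("ARIMA(1,1,1)", arima)]:
    fcst = model.forecast(len(test))
    print(name, "MAE:", np.mean(np.abs(fcst - test)))
```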
Abstract:
This empirical study compares the ability of unrestricted vector autoregressive (VAR) models to forecast the term structure of interest rates in Colombia. Simple VAR models are compared with VAR models augmented with Colombian and U.S. macroeconomic and financial factors. We find that including information on oil prices, Colombia's credit risk and an international indicator of risk aversion improves the out-of-sample forecasting ability of unrestricted VAR models for short-term maturities at a monthly frequency. For medium- and long-term maturities, the models without macroeconomic variables produce better forecasts, suggesting that medium- and long-term yield curves already incorporate all of the information relevant to forecasting them. This finding has important implications for portfolio managers, market participants and policy makers.
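As an illustration of the exercise described above, a minimal Python sketch of an unrestricted VAR on yields plus macro-financial factors, evaluated out of sample. The file and column names are placeholders, not the Colombian data set.

```python
# Hedged sketch: unrestricted VAR with macro-financial augmentation and a
# 12-month out-of-sample evaluation. All data/column names are hypothetical.
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("yields_macro.csv", index_col=0, parse_dates=True)
cols = ["yield_1y", "yield_5y", "oil_price", "cds_colombia", "vix"]

train, test = data[cols][:-12], data[cols][-12:]    # hold out the last 12 months
model = VAR(train).fit(maxlags=6, ic="aic")         # lag order chosen by AIC

forecast = model.forecast(train.values[-model.k_ar:], steps=12)
rmse = ((forecast[:, 0] - test["yield_1y"].values) ** 2).mean() ** 0.5
print("Out-of-sample RMSE, 1-year yield:", rmse)
```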
Abstract:
unpublished
Abstract:
Three types of forecasts of the total Australian production of macadamia nuts (t nut-in-shell) have been produced early each year since 2001. The first is a long-term forecast, based on the expected production from the tree census data held by the Australian Macadamia Society, suitably scaled up for missing data and assumed new plantings each year. These long-term forecasts range out to 10 years in the future, and form a basis for industry and market planning. Secondly, a statistical adjustment (termed the climate-adjusted forecast) is made annually for the coming crop. As the name suggests, climatic influences are the dominant factors in this adjustment process; however, other terms such as bienniality of bearing, prices and orchard aging are also incorporated. Thirdly, industry personnel are surveyed early each year, with their estimates integrated into a growers' and pest-scouts' forecast. Initially conducted on a whole-country basis, these models are now constructed separately for the six main production regions of Australia, with the results combined for national totals. Ensembles or suites of step-forward regression models using biologically relevant variables have been the major statistical method adopted; however, developing methodologies such as nearest-neighbour techniques, generalized additive models and random forests are continually being evaluated in parallel. The overall error rates average 14% for the climate forecasts and 12% for the growers' forecasts. These compare with 7.8% for USDA almond forecasts (based on extensive early-crop sampling) and 6.8% for coconut forecasts in Sri Lanka. However, our somewhat disappointing results were mainly due to a series of poor crops attributed to human factors, which have now been factored into the models. Notably, the 2012 and 2013 forecast errors averaged 7.8% and 4.9%, respectively. Future models should also show continuing improvement as more data-years become available.
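For reference, the error rates quoted above correspond to the average absolute percentage difference between forecast and realized production; a minimal sketch with invented figures:

```python
# Sketch of the forecast error-rate measure, using made-up production figures
# (kt nut-in-shell), not the Australian Macadamia Society data.
import numpy as np

forecast_t = np.array([43.0, 41.5, 46.0, 39.0])   # forecast production per year
actual_t = np.array([38.5, 44.0, 44.0, 35.5])     # realized production per year

error_pct = np.abs(forecast_t - actual_t) / actual_t * 100
print("Per-year errors (%):", np.round(error_pct, 1))
print("Average error rate: {:.1f}%".format(error_pct.mean()))
```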