876 results for patronage forecasting
Abstract:
The aim of this work was to assess the structure and use of the conceptual model of occlusion in operational weather forecasting. First, a survey was made of the conceptual model of occlusion as introduced to operational forecasters at the Finnish Meteorological Institute (FMI). In the same context, the use of the conceptual model in modern operational weather forecasting was reviewed, especially in connection with the widespread use of numerical forecasts. To evaluate the features of occlusions in operational weather forecasting, all occlusion processes occurring during 2003 over Europe and the North Atlantic were investigated using the conceptual model of occlusion and the methods suggested at the FMI. The investigation yielded a classification of the occluded cyclones based on how well the conceptual model fitted the observed thermal structure. The seasonal and geographical distribution of the classes was examined. Representative cases from the different classes were collected and analyzed in detail; this deeper investigation employed tools and techniques that are not routinely used in operational weather forecasting. Both the statistical investigation of the occluded cyclones during 2003 and the case studies revealed that the traditional classification of occlusion types on the basis of thermal structure does not capture the wider variety of occlusion structures that can be observed. Moreover, the conceptual model of occlusion often turned out to be inadequate for describing well-developed cyclones. A thorough and constructive revision of the conceptual model of occlusion is therefore suggested in light of the results obtained in this work. The revision should take into account both the progress being made in building a theoretical footing for the occlusion process and the tools and meteorological quantities that are now available.
Abstract:
This doctoral dissertation examines Taiwanese politics after the first electoral transfer of power (2000) from the perspective of the structural politicization of society. Because Taiwan moved bloodlessly from an authoritarian one-party system to a multi-party system, it has been regarded as a model case of political transformation. Earlier optimism about Taiwan's democratization has since given way to pessimism, largely owing to the strong politicization of society. This study seeks an explanation for that politicization. The structural politicization of society refers to a situation in which the sphere of "the political" grows beyond the actual political institutions. Structural politicization easily becomes a societal problem, because it often leads to the paralysis of normal political activity (e.g. legislation), a sharp polarization of society, a low threshold for political conflict, and a decline in general social trust. Unlike in Eastern Europe, for example, the former ruling party in Taiwan did not collapse with political liberalization but retained its strong structural position. When power changed hands for the first time through elections, the old ruling party was not ready to hand over the reins of the political system. A years-long struggle over control of the system began between the old and the new ruling party, during which society became strongly politicized. The study explains the politicization of Taiwanese society as the combined effect of several structural features. Such structural features promoting politicization include slow political change, which preserved the old political cleavages and the strong interests attached to them; an unsuitable constitution; Taiwan's unclear international status and divided identity; and a social structure that facilitates the rapid mobilization of people into political demonstrations. The study draws attention to a so far little-studied political phenomenon, the strong structural politicization of some democratizing societies. Its main finding is that the democratization of a one-party system carries within it the seed of structural politicization if the former ruling party does not collapse in the course of democratization.
Abstract:
Ensuring adequate water supply to urban areas is a challenging task due to factors such as rapid urban growth, increasing water demand and climate change. In developing a sustainable water supply system, it is important to identify the dominant water demand factors for any given water supply scheme. This paper applies principal components analysis to identify the factors that dominate residential water demand, using the Blue Mountains Water Supply System in Australia as a case study. The results show that the influence of community intervention factors (e.g. use of water-efficient appliances and rainwater tanks) on water demand is among the most significant. The results also confirm that community intervention programmes and water pricing policy together can play a noticeable role in reducing overall water demand. On the other hand, the influence of rainfall on water demand is found to be very limited, while temperature shows some degree of correlation with water demand. The results of this study would help water authorities to plan effective water demand management strategies and to develop a water demand forecasting model with appropriate climatic factors to achieve sustainable water resources management. The methodology developed in this paper can be adapted to other water supply systems to identify the influential factors in water demand modelling and to devise an effective demand management strategy.
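The factor-identification step described above can be illustrated with a minimal principal components sketch. The column names and random data below are purely hypothetical stand-ins for the study's demand drivers; this is not the paper's actual dataset or code.

```python
# Minimal PCA sketch for ranking candidate water-demand factors.
# The factor names and data are hypothetical, not the study's actual inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
factors = ["efficient_appliances", "rainwater_tanks", "water_price",
           "rainfall", "temperature"]
X = rng.normal(size=(120, len(factors)))  # e.g. 120 monthly observations

X_std = StandardScaler().fit_transform(X)  # PCA needs comparable scales
pca = PCA().fit(X_std)

# Loadings on the leading components indicate which factors dominate demand.
for i, ratio in enumerate(pca.explained_variance_ratio_[:2]):
    print(f"PC{i + 1} explains {ratio:.1%} of variance")
    for name, loading in zip(factors, pca.components_[i]):
        print(f"  {name:>22s}: {loading:+.2f}")
```

In practice the input matrix would hold observed demand drivers per billing period, and the loadings of the first few components would be inspected to rank the factors, as done in the paper.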
Abstract:
Urban sprawl is outgrowth along the periphery of cities and along highways. Although a precise definition of urban sprawl may be debated, there is consensus that it is characterized by an unplanned and uneven pattern of growth, driven by a multitude of processes and leading to inefficient resource utilization. Urbanization in India has never been as rapid as in recent times. As one of the fastest growing economies in the world, India faces stiff challenges in managing urban sprawl while ensuring the effective delivery of basic services in urban areas. Urban areas contribute significantly to the national economy (more than 50% of GDP) while facing critical challenges in accessing basic services and necessary infrastructure, both social and economic. The overall rise in the population of the urban poor and the increase in travel times due to congestion along road networks are indicators of how effectively planning and governance assess and cater for this demand. Agencies of governance at all levels (local bodies, state government and federal government) are facing the brunt of this rapid urban growth. It is imperative for planning and governance to facilitate, augment and service the requisite infrastructure systematically over time. Provision of infrastructure and assured delivery of basic services cannot happen overnight, and hence planning has to facilitate forecasting and service provision with appropriate financial mechanisms.
Abstract:
The ongoing rapid fragmentation of tropical forests is a major threat to global biodiversity, because many tropical forests are so-called biodiversity 'hotspots': areas that host exceptional species richness and concentrations of endemic species. Forest fragmentation has negative ecological and genetic consequences for plant survival. Proposed reasons for the loss of plant species in forest fragments include abiotic edge effects, altered species interactions, increased genetic drift, and inbreeding depression. To conserve plants in forest fragments, the ecological and genetic processes that threaten the species have to be understood, which is possible only with adequate information on their biology, including taxonomy, life history, reproduction, and the spatial and genetic structure of the populations. In this research, I focused on the African violet (genus Saintpaulia), a little-studied conservation flagship from the Eastern Arc Mountains and Coastal Forests hotspot of Tanzania and Kenya. The main objective of the research was to increase understanding of the life history, ecology and population genetics of Saintpaulia, as needed for the design of appropriate conservation measures. A further aim was to provide population-level insights into the difficult taxonomy of Saintpaulia. Ecological field work was conducted in a relatively little fragmented protected forest in the Amani Nature Reserve in the East Usambara Mountains, in northeastern Tanzania, complemented by population genetic laboratory work and ecological experiments in Helsinki, Finland. All components of the research were conducted with Saintpaulia ionantha ssp. grotei, which forms a taxonomically controversial population complex in the study area. My results suggest that Saintpaulia has good reproductive performance in forests with low disturbance levels in the East Usambara Mountains. Another important finding was that seed production depends on sufficient pollinator service; the availability of pollinators should thus be considered in the in situ management of threatened populations. Dynamic population stage structures were observed, suggesting that the studied populations are demographically viable. High mortality of seedlings and juveniles was observed during the dry season, but this was compensated by ample recruitment of new seedlings after the rainy season. Reduced tree canopy closure and substrate quality are likely to exacerbate seedling and juvenile mortality, and forest fragmentation and disturbance are therefore serious threats to the regeneration of Saintpaulia. Restoration of sufficient shade to enhance seedling establishment is an important conservation measure in populations located in disturbed habitats. Long-term demographic monitoring, which enables forecasting of a population's future, is also recommended in disturbed habitats. High genetic diversities were observed in the populations, which suggests that they possess the variation needed for evolutionary responses in a changing environment. Thus, genetic management of the studied populations does not seem necessary as long as the habitats remain favourable for Saintpaulia. The high levels of inbreeding observed in some of the populations, and the reduced fitness of inbred progeny compared to outbred progeny revealed by the hand-pollination experiment, indicate that inbreeding and inbreeding depression are potential mechanisms contributing to the extinction of Saintpaulia populations.
The relatively weak genetic divergence of the three morphotypes of Saintpaulia ionantha ssp. grotei lends support to the hypothesis that the populations in the Usambara/lowlands region represent a segregating metapopulation (or metapopulations), where subpopulations are adapting to their particular environments. The partial genetic and phenological integrity, and the distinct trailing habit, of the morphotype 'grotei' would, however, justify its placement in a taxonomic rank of its own, perhaps at subspecific rank.
Abstract:
Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow the data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled within observation errors, which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling. The impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction; a bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, also enables the exploitation of radial wind observations for NWP model validation. One-month experiments performed with HIRLAM model versions differing only in a surface stress parameterization detail indicate that the use of radar wind observations in NWP model validation is very beneficial.
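The heart of the radial wind observation operator sketched above is the projection of the interpolated model wind onto the radar beam. The following is a minimal sketch of that standard geometric projection only; the pulse-path bending and pulse-volume broadening corrections described in the thesis are deliberately omitted, and the function name is mine.

```python
# Sketch of a radial-wind forward operator: project a model wind vector onto
# the radar pulse path. Beam bending and pulse-volume broadening, which the
# thesis accounts for, are omitted here for brevity.
import math

def radial_wind(u: float, v: float, w: float,
                azimuth_deg: float, elevation_deg: float) -> float:
    """Radial velocity along the beam (positive away from the radar).

    u, v, w       -- model wind components (east, north, up), m/s
    azimuth_deg   -- beam azimuth, clockwise from north
    elevation_deg -- beam elevation above the horizon
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (u * math.sin(az) + v * math.cos(az)) * math.cos(el) + w * math.sin(el)

# Example: a 10 m/s westerly observed by a beam pointing east at 1 degree
# elevation appears almost entirely as radial wind.
print(radial_wind(10.0, 0.0, 0.0, azimuth_deg=90.0, elevation_deg=1.0))
```

An OmB statistic is then simply the observed radial wind minus this model-derived value, accumulated over many observations.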
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range of about 40 km to 10 km, and recently the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models give much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested: for longwave radiation the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west; the LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations relative to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic exhaust particles at the local street scale, studied by combining a dispersion model and an aerosol process model; and 3) analysis of the meteorological origins of selected high particulate matter concentration situations, especially temperature inversions, in the HMA and three other European cities. The prediction of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed a clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate total number concentrations; condensation and coagulation had only a minimal effect on the Aitken mode number concentrations. The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
The hype cycle model traces the evolution of technological innovations as they pass through successive stages marked by the peak, disappointment, and recovery of expectations. Since its introduction by Gartner nearly two decades ago, the model has received growing interest from practitioners and, more recently, from scholars. Given the model's proclaimed capacity to forecast technological development, an important consideration for organizations formulating marketing strategies, this paper provides a critical review of the hype cycle model by seeking evidence for the manifestation of hype cycles in Gartner's own technology databases. The results of our empirical work show incongruities with Gartner's reports, which motivates us to consider possible future directions whereby the notion of hype or hyped dynamics (though not necessarily the hype cycle model itself) can be captured in existing life cycle models through the identification of peak, disappointment, and recovery patterns.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictors. The empirical results suggest that the recession periods are predictable, and that dynamic probit models, especially those with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples; in small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the sign of the stock return. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
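For orientation, the autoregressive probit structure referred to above can be sketched in generic notation (the thesis's exact specification may differ):

```latex
% Generic dynamic autoregressive probit; notation is illustrative.
P(y_t = 1 \mid \Omega_{t-1}) = \Phi(\pi_t), \qquad
\pi_t = \omega + \alpha\,\pi_{t-1} + \delta\,y_{t-1} + \mathbf{x}_{t-1}'\boldsymbol{\beta}
```

Here y_t is the binary indicator (e.g. recession), Omega_{t-1} the information set, Phi the standard normal distribution function, and x_{t-1} the financial predictors; the static probit is recovered when alpha = delta = 0.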
Abstract:
The research topic is the formation of the nuclear family understanding and the politicization of the nuclear family. The question is thus: how did the family historically come to be understood specifically as the nuclear family, and why did it become central in political and social terms? The research participates in discussions on the concept and phenomenon of family. A central theme of the analysis is the question: what is family? Family is seen as historically contingent, and the discussions on the concept and phenomenon proceed via historical analysis. The centre of attention is the nuclear family; a distinction between the concepts of family and nuclear family is therefore made in order to focus on the historically specific phenomenon of the nuclear family. Family, in contrast to the concept of nuclear family, is in general seen as able to refer to families in all times and all cultures, as well as to all types of families in our time and culture. The nuclear family understanding is examined through two separate themes: parent-child relationships and marital relations. Two simultaneous processes give nuclear family relations their current form: on the one hand, the marital couple as the basis of family is eroding and losing its capacity to hold the family together; on the other, in Finland at least from the 1950s on, the normal development of the child has come to be seen as ontologically bound to the (biological) mother and, via her, to the father. In the nucleus of the family is the child: the biological, psychological and social processes of normal development are seen as ontologically bound to nuclear family relations. Thus marriages can collapse, but the nuclear family is unbreakable. What is interesting is the historical timing: just as nuclear family relations had been born, marriage dived into crisis. The concept and phenomenon of the nuclear family is analyzed in the context of the social and the political (in Finnish the two collapse into the concept of yhteiskunnallinen, which refers both to society as natural processes and to the state in terms of politics). Family is political and social in two senses. First, it is understood as the natural origin of the social and of society: the human being is by definition a social being, and the origin of the social, in turn, is seen to lie in the family. Family is seen as natural to the species, and disturbances in family life lead to un-social behaviour. Second, family is also seen as a political actor with rights and obligations: the family is obligated to control the life of its members. State patronage is seen as at once inevitable (family life is far too precious to be left alone) and problematic, as it seems to disturb the natural processes of the family or to erode its autonomy. The force of the nuclear family lies in the role it seems to hold in the normal development of the child and the future of society: disturbances in families affect first the child, then society. As regards the possibility of re-thinking the family, the natural and the political collide: the nuclear family appears natural, unchangeable, non-negotiable. The nuclear family is historically ontologised; the biological, psychological and social facts of family seem contrary to the idea of negotiation and politics, and the natural facts of family problematise the politics of family. The research material consists of administrative documents, memoranda, consultation documents, seminar reports, educational writings, guidebooks and newspaper articles on family politics between the 1950s and 1990s.
Abstract:
In Somalia the central government collapsed in 1991, and state failure has since become a widespread phenomenon and one of the greatest political and humanitarian problems facing the world in this century. The main objective of this research is thus to answer the following question: what went wrong? Most of the existing literature on the political economy of conflict starts from the assumption that the state in Africa is predatory by nature. Unlike these studies, the present research, although it uses predation theory, starts from the social contract approach to defining the state. Rather than contemplating the actions and policies of the rulers alone, this approach allows us to deliberately bring the role of society (as citizens) and other players into the analysis. In Chapter 1, after introducing the study, a simple principal-agent model is developed to check the logical consistency of the argument and to make the identification of the causal mechanism easier. I also identify three main actors in the process of state failure in Somalia: the Somali state, Somali society and the superpowers. In Chapter 2, in order to understand the incentives, preferences and constraints of each player in the state failure game, I analyse in some depth the evolution and structure of three central informal institutions: the identity-based patronage system of leadership, political tribalism, and the Cold War. These three institutions are considered the rules of the game in the Somali state failure. Chapter 3 summarises the successive civilian governments' achievements and failures (1960-69) concerning the main national goals, national unification and socio-economic development. Chapter 4 shows that the military regime, although it assumed power through extralegal means, served to some extent the developmental interests of the citizens in the first five years of its rule. Chapter 5 traces the process, and the factors involved, of the military regime's self-transformation from an agent of the developmental interests of the society into a predatory state that not only undermines the interests of the society but also destroys the state itself. Chapter 6 addresses the process of disintegration of the post-colonial state of Somalia, showing how the regime's merciless reactions to political ventures by power-seeking opposition leaders shattered the entire country and wrecked the state institutions. Chapter 7 concludes the study by summarising the main findings: owing to the incentive structures generated by the informal institutions, the formal state institutions fell apart.
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling, and from it derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. Serial dependency behaves differently in bear and bull markets, however: it is strongly positive in rising markets, whereas in bear markets returns are closer to a random walk. Realized volatility, a technique for estimating volatility from high frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we conclude that volatility is not easily estimated, even from high frequency data: it is well behaved neither in terms of stability nor in terms of dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
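As a point of reference for essays two and three, realized variance is estimated by summing squared intraday returns. A minimal sketch, with a simulated price path standing in for actual high-frequency data:

```python
# Realized-variance sketch: sum of squared intraday log returns.
# The simulated random-walk price path is a stand-in for real tick data.
import numpy as np

rng = np.random.default_rng(1)
n_ticks = 390                    # e.g. one-minute returns over a trading day
true_daily_vol = 0.01            # volatility of the simulated process

steps = rng.normal(0.0, true_daily_vol / np.sqrt(n_ticks), size=n_ticks)
prices = 100.0 * np.exp(np.cumsum(steps))

log_returns = np.diff(np.log(prices))
realized_variance = np.sum(log_returns ** 2)
print(f"realized vol {np.sqrt(realized_variance):.4f} vs true {true_daily_vol}")
```

The third essay's point is that once returns are autocorrelated (a market microstructure effect), this simple estimator becomes biased, and sparser sampling or time-varying volatility widens the gap between the estimate and the variance of the underlying process.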
Abstract:
First, in Essay 1, we test whether it is possible to forecast Finnish Options Index return volatility by examining the out-of-sample predictive ability of several common volatility models with alternative well-known methods, and find additional evidence for the predictability of volatility and for the superiority of the more complicated models over the simpler ones. Second, in Essay 2, the aggregate volatility of stocks listed on the Helsinki Stock Exchange is decomposed into market-, industry- and firm-level components, and it is found that firm-level (i.e., idiosyncratic) volatility has increased over time, is more substantial than the other two components, predicts GDP growth, moves countercyclically and, like the other components, is persistent. Third, in Essay 3, we are among the first in the literature to search for firm-specific determinants of idiosyncratic volatility in a multivariate setting. For the cross-section of stocks listed on the Helsinki Stock Exchange we find that industrial focus, trading volume and block ownership are positively associated with idiosyncratic volatility estimates (obtained from both the CAPM and the Fama-French three-factor model with local and international benchmark portfolios), whereas a negative relation holds between firm age as well as firm size and idiosyncratic volatility.
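The decomposition in Essay 2 follows the general logic of splitting total return variance into components; schematically (my notation, a sketch of the general idea rather than the essay's exact derivation):

```latex
% Schematic volatility decomposition; notation is illustrative.
\sigma^2_{\text{total}} \approx \sigma^2_{\text{market}}
  + \sigma^2_{\text{industry}} + \sigma^2_{\text{firm}}
```

where the last, firm-level term is the idiosyncratic component whose behaviour over time the essay documents.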
Abstract:
The low predictive power of implied volatility in forecasting subsequently realized volatility is a well-documented empirical puzzle. As suggested by e.g. Feinstein (1989), Jackwerth and Rubinstein (1996), and Bates (1997), we test whether unrealized expectations of jumps in volatility could explain this phenomenon. Our findings show that expectations of infrequently occurring jumps in volatility are indeed priced in implied volatility. This has two important consequences. First, implied volatility is actually expected to exceed realized volatility over long periods of time, only to fall far below realized volatility during infrequent periods of very high volatility. Second, the slope coefficient in the classic forecasting regression of realized volatility on implied volatility is very sensitive to the discrepancy between ex ante expected and ex post realized jump frequencies. If the in-sample frequency of positive volatility jumps is lower than the market assessed ex ante, the classic regression test tends to reject the hypothesis of informational efficiency even if markets are informationally efficient.
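The "classic forecasting regression" invoked above is, in generic notation, a regression of subsequently realized volatility on implied volatility:

```latex
% Classic volatility-forecasting regression; notation is illustrative.
RV_t = \alpha + \beta\, IV_t + \varepsilon_t
```

with informational efficiency usually tested as alpha = 0, beta = 1. The essay's argument is that beta is pushed away from one whenever the in-sample jump frequency differs from the jump frequency the market priced ex ante.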