Abstract:
The recovery of the Arctic polar vortex following stratospheric sudden warmings is found to take upward of 3 months in a particular subset of cases, termed here polar-night jet oscillation (PJO) events. The anomalous zonal-mean circulation above the pole during this recovery is characterized by a persistently warm lower stratosphere and, above this, a cold midstratosphere and anomalously high stratopause, which descends as the event unfolds. Composites of these events in the Canadian Middle Atmosphere Model show that the persistence of the lower-stratospheric anomaly results from strongly suppressed wave driving and weak radiative cooling at these heights. The upper-stratospheric and lower-mesospheric anomalies are driven immediately following the warming by anomalous planetary-scale eddies; subsequently, anomalous parameterized nonorographic and orographic gravity waves play an important role. These details are found to be robust for PJO events (as opposed to sudden warmings in general) in that many details of individual PJO events match the composite mean. A zonal-mean quasigeostrophic model on the sphere is shown to reproduce the response to the thermal and mechanical forcings produced during a PJO event. The former is well approximated by Newtonian cooling. The response can thus be considered as a transient approach to the steady-state, downward-control limit. In this context, the time scale of the lower-stratospheric anomaly is determined by the transient, radiative response to the extended absence of wave driving. The extent to which the dynamics of the wave-driven descent of the stratopause can be considered analogous to the descending phases of the quasi-biennial oscillation (QBO) is also discussed.
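The Newtonian-cooling approximation invoked above has a standard form; as a reminder, in generic notation (not the paper's own symbols):

```latex
% Newtonian cooling: temperature relaxes toward a radiative
% equilibrium profile T_eq on a radiative time scale tau_r.
\begin{equation}
  \frac{\partial T}{\partial t} = -\frac{T - T_{\mathrm{eq}}}{\tau_r}
  \qquad\Longrightarrow\qquad
  T'(t) = T'(0)\, e^{-t/\tau_r},
\end{equation}
% where T' = T - T_eq. On this reading, the months-long
% lower-stratospheric anomaly reflects the long radiative
% relaxation time tau_r at those heights once wave driving
% is absent.
```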
Abstract:
A novel diagnostic tool is presented, based on polar-cap temperature anomalies, for visualizing daily variability of the Arctic stratospheric polar vortex over multiple decades. This visualization illustrates the ubiquity of extended-time-scale recoveries from stratospheric sudden warmings, termed here polar-night jet oscillation (PJO) events. These are characterized by an anomalously warm polar lower stratosphere that persists for several months. Following the initial warming, a cold anomaly forms in the middle stratosphere, as does an anomalously high stratopause, both of which descend while the lower-stratospheric anomaly persists. These events are characterized in four datasets: Microwave Limb Sounder (MLS) temperature observations; the 40-yr ECMWF Re-Analysis (ERA-40) and Modern Era Retrospective Analysis for Research and Applications (MERRA) reanalyses; and an ensemble of three 150-yr simulations from the Canadian Middle Atmosphere Model. The statistics of PJO events in the model are found to agree very closely with those of the observations and reanalyses. The time scale for the recovery of the polar vortex following sudden warmings correlates strongly with the depth to which the warming initially descends. PJO events occur following roughly half of all major sudden warmings and are associated with an extended period of suppressed wave-activity fluxes entering the polar vortex. They follow vortex splits more frequently than they do vortex displacements. They are also related to weak vortex events as identified by the northern annular mode; in particular, those weak vortex events followed by a PJO event show a stronger tropospheric response. The long time scales, predominantly radiative dynamics, and tropospheric influence of PJO events suggest that they represent an important source of conditional skill in seasonal forecasting.
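The polar-cap temperature diagnostic described above is, in essence, an area-weighted mean over the cap minus a daily climatology. A minimal sketch, assuming daily zonal-mean data and a 60–90°N cap (both illustrative choices, not the paper's exact definition):

```python
import numpy as np

def polar_cap_anomaly(temp, lat, clim, lat_min=60.0):
    """Area-weighted polar-cap mean temperature anomaly.

    temp : array (time, lat)  zonal-mean temperature at one level
    lat  : array (lat,)       latitudes in degrees
    clim : array (time,)      daily climatology of the cap mean
    The names and the 60-90N cap are illustrative assumptions.
    """
    cap = lat >= lat_min
    w = np.cos(np.deg2rad(lat[cap]))                 # area weights
    cap_mean = (temp[:, cap] * w).sum(axis=1) / w.sum()
    return cap_mean - clim                           # anomaly vs climatology
```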
Abstract:
Peatlands are a major terrestrial carbon store and a persistent natural carbon sink during the Holocene, but there is considerable uncertainty over the fate of peatland carbon in a changing climate. It is generally assumed that higher temperatures will increase peat decay, causing a positive feedback to climate warming and contributing to the global positive carbon cycle feedback. Here we use a new extensive database of peat profiles across northern high latitudes to examine spatial and temporal patterns of carbon accumulation over the past millennium. Contrary to expectations, our results indicate a small negative carbon cycle feedback from past changes in the long-term accumulation rates of northern peatlands. Total carbon accumulated over the last 1000 yr is linearly related to contemporary growing season length and photosynthetically active radiation, suggesting that variability in net primary productivity is more important than decomposition in determining long-term carbon accumulation. Furthermore, the northern peatland carbon sequestration rate declined over the climate transition from the Medieval Climate Anomaly (MCA) to the Little Ice Age (LIA), probably because of lower LIA temperatures combined with increased cloudiness suppressing net primary productivity. Other factors including changing moisture status, peatland distribution, fire, nitrogen deposition, permafrost thaw and methane emissions will also influence future peatland carbon cycle feedbacks, but our data suggest that the carbon sequestration rate could increase over many areas of northern peatlands in a warmer future.
Abstract:
Although Richard Hooker’s private attitudes were clericalist and authoritarian, his constitutional theory subordinated clergymen to laymen and monarchy to parliamentary statute. This article explains why his political ideas were nonetheless appropriate to his presumed religious purposes. It notes a very intimate connection between his teleological conception of a law and his hostility towards conventional high Calvinist ideas about predestination. The most significant anomaly within his broadly Aristotelian world-view was his belief that politics is nothing but a means to cope with sin. This too can be linked to his religious ends, but it creates an ambiguity that made his doctrines usable by Locke.
Abstract:
The occurrence of strong and persistent mid-latitude anticyclonic ridges over the Eastern North Atlantic is a major contributor to severe winter droughts over Western Iberia. We analyze the development of strong and persistent ridge episodes within 40–50°N, 40°W–5°E, defined as 300 hPa geopotential height anomalies above 50 gpm that persist for at least 10 consecutive days. Results suggest that the generation and maintenance of these episodes are associated with an intensified polar jet and with positive stratospheric geopotential anomalies over the North American continent and the adjacent North Pacific. Such positive anomalies tend to detach from the main stratospheric anomaly and propagate eastwards and downwards as tropospheric Rossby waves. Furthermore, the Eastern North Atlantic ridge is generated and repeatedly reinforced until the stratospheric anomaly dissipates. Results also show evidence of anticyclonic wave breaking during the episodes, which is dynamically consistent with their persistence and quasi-stationarity.
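The episode definition quoted above (box-averaged 300 hPa height anomalies above 50 gpm for at least 10 consecutive days) translates directly into a run-length scan. A sketch under those stated thresholds; the function and variable names are ours, not the authors':

```python
import numpy as np

def ridge_episodes(z_anom, thresh=50.0, min_days=10):
    """Find persistent ridge episodes in a daily series of 300 hPa
    geopotential-height anomalies (gpm) averaged over the
    40-50N, 40W-5E box. Returns (start, end) index pairs of runs
    that stay above `thresh` for at least `min_days` consecutive
    days. A sketch of the stated criterion, not the authors' code.
    """
    above = np.asarray(z_anom) > thresh
    episodes, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                        # run begins
        elif not flag and start is not None:
            if i - start >= min_days:        # run long enough?
                episodes.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_days:
        episodes.append((start, len(above) - 1))  # run reaches series end
    return episodes
```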
Abstract:
The synoptic evolution and some meteorological impacts of the European winter storm Kyrill, which swept across Western, Central, and Eastern Europe between 17 and 19 January 2007, are investigated. The intensity and large storm damage associated with Kyrill are explained based on synoptic and mesoscale environmental storm features, as well as on comparisons to previous storms. Kyrill appeared on weather maps over the US state of Arkansas about four days before it hit Europe. It underwent an explosive intensification over the Western North Atlantic Ocean while crossing a very intense zonal polar jet stream. A superposition of several favourable meteorological conditions west of the British Isles caused a further deepening of the storm when it started to affect Western Europe. Evidence is provided that a favourable alignment of three polar jet streaks and a dry-air intrusion over the occlusion and cold fronts were causal factors in maintaining Kyrill's low pressure very far into Eastern Europe. Kyrill, like many other strong European winter storms, was embedded in a pre-existing, anomalously wide, north-south mean sea-level pressure (MSLP) gradient field. In addition to the range of gusts that might be expected from the synoptic-scale pressure field, mesoscale features associated with convective overturning at the cold front are suggested as the likely causes for the extremely damaging peak gusts observed at many lowland stations during the passage of Kyrill's cold front. Compared to other storms, Kyrill was far from the most intense system in terms of core pressure and circulation anomaly. However, the system moved into a pre-existing strong MSLP gradient located over Central Europe which extended into Eastern Europe. This fact is considered decisive for the anomalously large area affected by Kyrill. Additionally, considerations of windiness in climate change simulations using two state-of-the-art regional climate models driven by ECHAM5 indicate that not only Central, but also Eastern Central Europe may be affected by higher surface wind speeds at the end of the 21st century. These changes are partially associated with the increased pressure gradient over Europe which is identified in the ECHAM5 simulations. Thus, with respect to the area affected, as well as to the synoptic and mesoscale storm features, it is proposed that Kyrill may serve as an interesting case study to assess future storm impacts.
Abstract:
This article examines the potential to improve numerical weather prediction (NWP) by estimating upper and lower bounds on predictability, revisiting the original study of Lorenz (1982) but applying it to the most recent version of the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast system, for both the deterministic and ensemble prediction systems (EPS). These bounds are contrasted with those from an older version of the same NWP system to see how they have changed with improvements to the NWP system. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011 using the 500-hPa geopotential height field. Results indicate that for this field we may be approaching the limit of deterministic forecasting, so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, which may be due to insufficient internal variability in the earlier model and to recent versions of the model representing the true atmospheric evolution more realistically. The same methodology is applied to the EPS to calculate upper and lower bounds of predictability of the ensemble mean forecast in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is a large potential to improve the ensemble predictions, but for the increased predictability of the ensemble mean, there will be a trade-off in information as the forecasts become increasingly smoothed with time. From around the 10-d forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-d forecast time, the ensemble mean has lost information, with the flow anomaly strongly smoothed out. In contrast, the control forecast is much less consistent from run to run, but provides more detailed (unsmoothed), though less useful, information.
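Lorenz's (1982) construction, as applied here, can be summarized as follows: the forecast-versus-analysis error gives the lower bound on predictability (actual skill), while the divergence between forecasts started a day apart and valid at the same time serves as a proxy for perfect-model error growth, the upper bound. A minimal sketch with hypothetical array layouts:

```python
import numpy as np

def lorenz_error_curves(fc, an):
    """fc : (n_init, n_lead, n_grid) forecasts; lead index j is valid
             j days after initialization (lead 0 = analysis time).
       an : (n_init, n_grid) verifying analyses, one per start date.
    Returns RMS actual error and RMS forecast-forecast divergence per
    lead time. The layout and plain RMS measure are assumptions."""
    rms = lambda d: np.sqrt(np.mean(d ** 2))
    n_init, n_lead, _ = fc.shape
    actual = np.full(n_lead, np.nan)
    growth = np.full(n_lead, np.nan)
    for j in range(n_lead):
        # lower bound: j-day forecast vs the analysis valid at that date
        actual[j] = rms(fc[:n_init - j, j] - an[j:])
        # upper-bound proxy: j-day forecast vs the (j+1)-day forecast
        # valid at the same date (started one day earlier)
        if j + 1 < n_lead:
            growth[j] = rms(fc[1:, j] - fc[:-1, j + 1])
    return actual, growth
```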
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several 1000-yr-long idealized 2 × CO2 and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model climate–carbon feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
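The additivity result for surface air temperature suggests a simple diagnostic: subtract the sum of the single-forcing responses from the all-forcings response and check that the residual is near zero. A sketch, with hypothetical array shapes:

```python
import numpy as np

def additivity_residual(full_run, single_runs):
    """full_run    : (n_time,) global-mean SAT anomaly, all forcings
       single_runs : (n_forcings, n_time) anomalies, one forcing each
    A residual near zero indicates a linear (additive) response;
    the same check on carbon-cycle output would expose the non-linear
    land-use/CO2 interaction noted above. Shapes are assumptions."""
    return full_run - np.sum(single_runs, axis=0)
```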
Abstract:
A new record of sea surface temperature (SST) for climate applications is described. This record provides independent corroboration of global variations estimated from SST measurements made in situ. Infrared imagery from Along-Track Scanning Radiometers (ATSRs) is used to create a 20-year time series of SST at 0.1° latitude-longitude resolution in the ATSR Reprocessing for Climate (ARC) project. A very high degree of independence from in situ measurements is achieved via physics-based techniques. Skin SST and SST estimated for 20 cm depth are provided, with grid cell uncertainty estimates. Comparison with in situ data sets establishes that ARC SSTs generally have biases of order 0.1 K or smaller. The precision of the ARC SSTs is 0.14 K during 2003 to 2009, from three-way error analysis. Over the period 1994 to 2010, ARC SSTs are stable, with better than 95% confidence, to within 0.005 K yr⁻¹ (demonstrated for tropical regions). The data set appears useful for cleanly quantifying interannual variability in SST and major SST anomalies. The ARC SST global anomaly time series is compared to the in situ-based Hadley Centre SST data set version 3 (HadSST3). Within known uncertainties in the bias adjustments applied to in situ measurements, the independent ARC record and HadSST3 present the same variations in global marine temperature since 1996. Since the in situ observing system evolved significantly in its mix of measurement platforms and techniques over this period, ARC SSTs provide an important corroboration that HadSST3 accurately represents recent variability and change in this essential climate variable.
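The three-way error analysis cited for the 0.14 K precision figure is the standard triple-collocation calculation: given three systems observing the same truth with mutually independent errors, each system's error variance follows from the pairwise difference variances. A sketch (variable names are illustrative):

```python
import numpy as np

def three_way_errors(a, b, c):
    """Triple-collocation estimate of per-system error standard
    deviations from collocated SST triples (e.g. ARC, drifting buoys,
    a second satellite). Assumes independent errors and a common
    truth; the inputs are matched 1-D arrays of collocated values."""
    v_ab = np.var(a - b)
    v_ac = np.var(a - c)
    v_bc = np.var(b - c)
    sig_a = np.sqrt(0.5 * (v_ab + v_ac - v_bc))
    sig_b = np.sqrt(0.5 * (v_ab + v_bc - v_ac))
    sig_c = np.sqrt(0.5 * (v_ac + v_bc - v_ab))
    return sig_a, sig_b, sig_c
```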
Abstract:
Sea surface temperature (SST) measurements are required by operational ocean and atmospheric forecasting systems to constrain modeled upper ocean circulation and thermal structure. The Global Ocean Data Assimilation Experiment (GODAE) High Resolution SST Pilot Project (GHRSST-PP) was initiated to address these needs by coordinating the provision of accurate, high-resolution, SST products for the global domain. The pilot project is now complete, but activities continue within the Group for High Resolution SST (GHRSST). The pilot project focused on harmonizing diverse satellite and in situ data streams that were indexed, processed, quality controlled, analyzed, and documented within a Regional/Global Task Sharing (R/GTS) framework implemented in an internationally distributed manner. Data with meaningful error estimates developed within GHRSST are provided by services within R/GTS. Currently, several terabytes of data are processed at international centers daily, creating more than 25 gigabytes of product. Ensemble SST analyses together with anomaly SST outputs are generated each day, providing confidence in SST analyses via diagnostic outputs. Diagnostic data sets are generated and Web interfaces are provided to monitor the quality of observation and analysis products. GHRSST research and development projects continue to tackle problems of instrument calibration, algorithm development, diurnal variability, skin temperature deviation, and validation/verification of GHRSST products. GHRSST also works closely with applications and users, providing a forum for discussion and feedback between SST users and producers on a regular basis. All data within the GHRSST R/GTS framework are freely available. This paper reviews the progress of GHRSST-PP, highlighting achievements that have been fundamental to the success of the pilot project.
Abstract:
To study the transient atmospheric response to midlatitude SST anomalies, a three-layer quasigeostrophic (QG) model coupled to a slab oceanic mixed layer in the North Atlantic is used. As diagnosed from a coupled run in perpetual winter conditions, the first two modes of SST variability are linked to the model North Atlantic Oscillation (NAO) and eastern Atlantic pattern (EAP), respectively, the dominant atmospheric modes in the Atlantic sector. The two SST anomaly patterns are then prescribed as fixed anomalous boundary conditions for the model atmosphere, and its transient responses are established from a large ensemble of simulations. In both cases, the tendency of the air–sea heat fluxes to damp the SST anomalies results in an anomalous diabatic heating of the atmosphere that, in turn, forces a baroclinic response, as predicted by linear theory. This initial response rapidly modifies the transient eddy activity and thus the convergence of eddy momentum and heat fluxes. The latter transforms the baroclinic response into a growing barotropic one that resembles the atmospheric mode that had created the SST anomaly in the coupled run and is thus associated with a positive feedback. The total adjustment time is as long as 3–4 months for the NAO-like response and 1–2 months for the EAP-like one. The positive feedback, in both cases, is dependent on the polarity of the SST anomaly, but is stronger in the NAO case, thereby contributing to its predominance at low frequency in the coupled system. However, the feedback is too weak to lead to an instability of the atmospheric modes and primarily results in an increase of their amplitude and persistence and a weakening of the heat flux damping of the SST anomaly.
Abstract:
The RAPID-MOCHA array has observed the Atlantic meridional overturning circulation (AMOC) at 26.5°N since 2004. During 2009/10, there was a transient 30% weakening of the AMOC, driven by anomalies in geostrophic and Ekman transports. Here, we use simulations based on the Met Office Forecast Ocean Assimilation Model (FOAM) to diagnose the relative importance of atmospheric forcings and internal ocean dynamics in driving the anomalous geostrophic circulation of 2009/10. Data-assimilating experiments with FOAM accurately reproduce the mean strength and depth of the AMOC at 26.5°N. In addition, agreement between simulated and observed stream functions in the deep ocean is improved when we calculate the AMOC using a method that approximates the RAPID observations. The main features of the geostrophic circulation anomaly are captured by an ensemble of simulations without data assimilation. These model results suggest that the atmosphere played a dominant role in driving recent interannual variability of the AMOC.
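For context, the overturning stream function whose maximum defines the AMOC strength is the zonally and vertically integrated meridional transport. A textbook sketch at one latitude; it omits the RAPID-style decomposition into Florida Strait, Ekman, and interior geostrophic components that the paper approximates:

```python
import numpy as np

def overturning_streamfunction(v, dx, dz):
    """Meridional overturning stream function at one latitude.

    v  : array (z, x)  meridional velocity across the section (m/s)
    dx : array (x,)    grid cell widths along the section (m)
    dz : array (z,)    layer thicknesses, surface first (m)
    Returns psi(z) in Sverdrups; the AMOC strength is psi.max().
    Grid layout is an assumption for illustration."""
    transport_per_layer = (v * dx).sum(axis=1) * dz   # m^3/s per layer
    psi = np.cumsum(transport_per_layer)              # integrate downward
    return psi / 1e6                                  # convert to Sv
```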
Abstract:
The global mean temperature in 2008 was slightly cooler than that in 2007; however, it still ranks within the 10 warmest years on record. Annual mean temperatures were generally well above average in South America, northern and southern Africa, Iceland, Europe, Russia, South Asia, and Australia. In contrast, an exceptional cold outbreak occurred during January across Eurasia and over southern European Russia and southern western Siberia. There has been a general increase in land-surface temperatures and in permafrost temperatures during the last several decades throughout the Arctic region, including increases of 1° to 2°C in the last 30 to 35 years in Russia. Record-setting warm summer (JJA) air temperatures were observed throughout Greenland. The year 2008 was also characterized by heavy precipitation in a number of regions of northern South America, Africa, and South Asia. In contrast, a prolonged and intense drought occurred during most of 2008 in northern Argentina, Paraguay, Uruguay, and southern Brazil, causing severe impacts to agriculture and affecting many communities. The year began with a strong La Niña episode that ended in June. Eastward surface current anomalies in the tropical Pacific Ocean in early 2008 played a major role in adjusting the basin from strong La Niña conditions to ENSO-neutral conditions by July–August, followed by a return to La Niña conditions late in December. The La Niña conditions resulted in far-reaching anomalies such as a cooling in the central tropical Pacific, Arctic Ocean, and the regions extending from the Gulf of Alaska to the west coast of North America; changes in the sea surface salinity and heat content anomalies in the tropics; and total column water vapor, cloud cover, tropospheric temperature, and precipitation patterns typical of a La Niña. Anomalously salty surface salinity values in climatologically drier locations and anomalously fresh values in rainier locations observed in recent years generally persisted in 2008, suggesting an intensification of the hydrological cycle. The 2008 Atlantic hurricane season was the 14th busiest on record and the only season ever recorded with major hurricanes in each month from July through November. Conversely, activity in the northwest Pacific was considerably below normal during 2008. While activity in the north Indian Ocean was only slightly above average, the season was punctuated by Cyclone Nargis, which killed over 145,000 people; in addition, it was the seventh-strongest cyclone ever in the basin and the most devastating to hit Asia since 1991. Greenhouse gas concentrations continued to rise, with CO2 increasing by more than expected based on the 1979 to 2007 trend. In the oceans, the global mean CO2 uptake for 2007 is estimated to be 1.67 Pg-C, about 0.07 Pg-C lower than the long-term average, making it the third-largest anomaly determined with this method since 1983, with the largest uptake of carbon over the past decade coming from the eastern Indian Ocean. Global phytoplankton chlorophyll concentrations were slightly elevated in 2008 relative to 2007, but regional changes were substantial (ranging to about 50%) and followed long-term patterns of net decreases in chlorophyll with increasing sea surface temperature. Ozone-depleting gas concentrations continued to fall globally, to about 4% below the peak levels of the 2000–02 period.
Total column ozone concentrations remain well below pre-1980 levels, and the 2008 ozone hole was unusually large (sixth worst on record) and persistent, with low ozone values extending into late December. In fact, the polar vortex in 2008 persisted longer than in any previous year since 1979. Northern Hemisphere snow cover extent for the year was well below average, due in large part to the record-low snow extent in March and despite the record-maximum coverage in January; the North American Arctic saw the shortest snow cover duration on record (which began in 1966). Limited preliminary data imply that glaciers continued to lose mass in 2008, and full data for 2007 show that it was the 17th consecutive year of loss. The northern region of Greenland and adjacent areas of Arctic Canada experienced a particularly intense melt season, even though there was an abnormally cold winter across Greenland's southern half. One of the most dramatic signals of the general warming trend was the continued significant reduction in the extent of the summer sea-ice cover and, importantly, the decrease in the amount of relatively older, thicker ice. The extent of the 2008 summer sea-ice cover was the second-lowest value of the satellite record (which started in 1979) and 36% below the 1979–2000 average. Significant losses in the mass of ice sheets and the area of ice shelves continued, with several fjords on the northern coast of Ellesmere Island being ice free for the first time in 3,000–5,500 years. In Antarctica, the positive phase of the SAM led to record-high total sea-ice extent through much of early 2008 via enhanced equatorward Ekman transport. With colder continental temperatures at this time, the 2007–08 austral summer snowmelt season was dramatically weakened, making it the second-shortest melt season since 1978 (when the record began). There was strong warming and increased precipitation along the Antarctic Peninsula and West Antarctica in 2008, as well as pockets of warming along coastal East Antarctica, in concert with continued declines in sea-ice concentration in the Amundsen/Bellingshausen Seas. One significant event indicative of this warming was the disintegration and retreat of the Wilkins Ice Shelf in the southwest peninsula area of Antarctica.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk-return (a minimal sketch of this decomposition follows this abstract). From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted in PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, "choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms" (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects levels. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking; that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is less clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories like PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
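The two-dimensional decomposition referred to above can be sketched with an off-the-shelf PCA. A minimal example, assuming a subjects-by-panels matrix of chosen-lottery indices (higher = riskier); this is our illustration of the approach, not the study's code:

```python
import numpy as np
from sklearn.decomposition import PCA

def risk_dimensions(choices):
    """choices : (n_subjects, n_panels) array of lottery choices.
    Returns per-subject scores on the first two principal components,
    expected to track average risk taking and sensitivity to
    risk-return variation across panels, plus the variance ratios."""
    pca = PCA(n_components=2)
    scores = pca.fit_transform(choices)   # PCA centers the data itself
    return scores, pca.explained_variance_ratio_
```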
Abstract:
In projections of twenty-first century climate, Arctic sea ice declines and at the same time exhibits strong interannual anomalies. Here, we investigate the potential to predict these strong sea-ice anomalies under a perfect-model assumption, using the Max-Planck-Institute Earth System Model in the same setup as in the Coupled Model Intercomparison Project Phase 5 (CMIP5). We study two cases of strong negative sea-ice anomalies: a 5-year-long anomaly for present-day conditions, and a 10-year-long anomaly for conditions projected for the middle of the twenty-first century. We treat these anomalies in the CMIP5 projections as the truth, and use exactly the same model configuration for predictions of this synthetic truth. We start ensemble predictions at different times during the anomalies, considering lagged-perfect and sea-ice-assimilated initial conditions. We find that the onset and amplitude of the interannual anomalies are not predictable. However, the further deepening of the anomaly can be predicted for typically 1 year lead time if predictions start after the onset but before the maximal amplitude of the anomaly. The magnitude of an extremely low summer sea-ice minimum is hard to predict: the skill of the prediction ensemble is not better than a damped-persistence forecast for lead times of more than a few months, and is not better than a climatology forecast for lead times of two or more years. Predictions of the present-day anomaly are more skillful than predictions of the mid-century anomaly. Predictions using sea-ice-assimilated initial conditions are competitive with those using lagged-perfect initial conditions for lead times of a year or less, but yield degraded skill for longer lead times. The results presented here suggest that there is limited prospect of predicting the large interannual sea-ice anomalies expected to occur throughout the twenty-first century.
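The damped-persistence benchmark used above has a standard definition: the initial anomaly decays toward climatology at the rate set by the lag-1 autocorrelation. A sketch of that baseline (a conventional form, assumed here rather than taken from the paper):

```python
import numpy as np

def damped_persistence(x0, r1, leads):
    """Forecast x_hat(t) = x0 * r1**t for an initial anomaly x0,
    lag-1 autocorrelation r1 (0 < r1 < 1), and lead times t.
    Skill no better than this baseline means the dynamical
    prediction adds nothing beyond simple anomaly decay."""
    return x0 * r1 ** np.asarray(leads, dtype=float)

# e.g. damped_persistence(-1.2, 0.7, [0, 1, 2]) -> [-1.2, -0.84, -0.588]
```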