Abstract:
Instrumental observations, palaeo-proxies, and climate models suggest significant decadal variability within the North Atlantic subpolar gyre (NASPG). However, a poorly sampled observational record and a diversity of model behaviours mean that the precise nature and mechanisms of this variability are unclear. Here, we analyse an exceptionally large multi-model ensemble of 42 present-generation climate models to test whether NASPG mean state biases systematically affect the representation of decadal variability. Temperature and salinity biases in the Labrador Sea co-vary and influence whether density variability is controlled by temperature or salinity variations. Ocean horizontal resolution is a good predictor of the biases and the location of the dominant dynamical feedbacks within the NASPG. However, we find no link to the spectral characteristics of the variability. Our results suggest that the mean state and mechanisms of variability within the NASPG are not independent. This represents an important caveat for decadal predictions using anomaly-assimilation methods.
Abstract:
This study has investigated serial (temporal) clustering of extra-tropical cyclones simulated by 17 climate models that participated in CMIP5. Clustering was estimated by calculating the dispersion (ratio of variance to mean) of 30 December-February counts of Atlantic storm tracks passing near each grid point. Results from single historical simulations of 1975-2005 were compared to those from the ERA40 reanalysis for 1958-2001 and to single future model projections of 2069-2099 under the RCP4.5 climate change scenario. Models were generally able to capture the broad features reported previously in reanalyses: underdispersion/regularity (i.e. variance less than mean) in the western core of the Atlantic storm track surrounded by overdispersion/clustering (i.e. variance greater than mean) to the north and south and over western Europe. Regression of counts onto North Atlantic Oscillation (NAO) indices revealed that much of the overdispersion in the historical reanalyses and model simulations can be accounted for by NAO variability. Future changes in dispersion were generally found to be small and not consistent across models. The overdispersion statistic, for any 30-year sample, is prone to large amounts of sampling uncertainty that obscures the climate change signal. For example, the projected increase in dispersion for storm counts near London in the CNRM-CM5 model is 0.1, compared to a standard deviation of 0.25. Projected changes in the mean and variance of the NAO are insufficient to create changes in overdispersion that are discernible above natural sampling variations.
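As a hedged illustration of the dispersion statistic described above, the minimal sketch below uses made-up sample data; the variable names and the simple linear regression onto the NAO index are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the dispersion diagnostic: variance/mean of 30 winter
# (December-February) storm counts at one grid point, and the residual
# dispersion after regressing the counts onto an NAO index.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=8.0, size=30)   # placeholder 30-winter storm counts
nao = rng.normal(size=30)                # placeholder winter NAO index

# Dispersion > 1 indicates clustering (overdispersion); < 1 regularity.
dispersion = counts.var(ddof=1) / counts.mean()

# Remove the NAO-linear part of the counts and recompute the dispersion to
# see how much of the overdispersion NAO variability accounts for.
slope, intercept = np.polyfit(nao, counts, 1)
residuals = counts - (slope * nao + intercept)
residual_dispersion = residuals.var(ddof=1) / counts.mean()

print(f"raw: {dispersion:.2f}  NAO removed: {residual_dispersion:.2f}")
```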
Abstract:
Debate over the late Quaternary megafaunal extinctions has focussed on whether human colonisation or climatic changes were the more important driver of extinction, with few extinctions being unambiguously attributable to either. Most analyses have been geographically or taxonomically restricted, and the few quantitative global analyses have been limited by coarse temporal resolution or overly simplified climate reconstructions or proxies. We present a global analysis of the causes of these extinctions which uses high-resolution climate reconstructions and explicitly investigates the sensitivity of our results to uncertainty in the palaeological record. Our results show that human colonisation was the dominant driver of megafaunal extinction across the world, but that climatic factors were also important. We identify the geographic regions where future research is likely to have the most impact: our models reliably predict extinctions across most of the world, with the notable exception of mainland Asia, where we fail to explain the apparently low rate of extinction found in the fossil record. Our results are highly robust to uncertainties in the palaeological record, and our main conclusions are unlikely to change qualitatively following minor improvements or changes in the dates of extinctions and human colonisation.
Abstract:
Forensic taphonomy involves the use of decomposition to estimate postmortem interval (PMI) or locate clandestine graves. Yet, cadaver decomposition remains poorly understood, particularly following burial in soil. Presently, we do not know how most edaphic and environmental parameters, including soil moisture, influence the breakdown of cadavers following burial and alter the processes that are used to estimate PMI and locate clandestine graves. To address this, we buried juvenile rat (Rattus rattus) cadavers (∼18 g wet weight) in three contrasting soils from tropical savanna ecosystems located in Pallarenda (sand), Wambiana (medium clay), or Yabulu (loamy sand), Queensland, Australia. These soils were sieved (2 mm), weighed (500 g dry weight), calibrated to a matric potential of -0.01 megapascals (MPa), -0.05 MPa, or -0.3 MPa (wettest to driest) and incubated at 22 °C. Measurements of cadaver decomposition included cadaver mass loss, carbon dioxide-carbon (CO2-C) evolution, microbial biomass carbon (MBC), protease activity, phosphodiesterase activity, ninhydrin-reactive nitrogen (NRN) and soil pH. Cadaver burial resulted in a significant increase in CO2-C evolution, MBC, enzyme activities, NRN and soil pH. Cadaver decomposition in loamy sand and sandy soil was greater at higher matric potentials (wetter soil). However, the optimal matric potential for cadaver decomposition in medium clay was exceeded, which resulted in a slower rate of cadaver decomposition in the wettest soil. Slower cadaver decomposition was also observed at the lowest matric potential (-0.3 MPa, the driest soil). Furthermore, wet sandy soil was associated with greater cadaver decomposition than wet fine-textured soil. We conclude that gravesoil moisture content can modify the relationship between temperature and cadaver decomposition and that soil microorganisms can play a significant role in cadaver breakdown. We also conclude that soil NRN is a more reliable indicator of gravesoil than soil pH.
Abstract:
The present study examines knowledge of the discourse-appropriateness of Clitic Right Dislocation (CLRD) in a population of Heritage Speakers (HSs) and Spanish-dominant native speakers in order to test the predictions of the Interface Hypothesis (IH; Sorace 2011). The IH predicts that speakers in language contact situations will experience difficulties in integrating information involving the interface of the syntax and discourse modules. CLRD relates a dislocated constituent to a discourse antecedent, requiring the integration of syntax and pragmatics. Results from an acceptability judgment task did not support the predictions of the IH. No statistical differences emerged between the HSs’ performance and that of the Spanish-dominant native speakers when participants were presented with an offline task. Thus, our study did not find any evidence of “incomplete acquisition” (Montrul 2008) as it pertains to this specific linguistic structure.
Abstract:
A statistical-dynamical downscaling method is used to estimate future changes in the wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO-CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except for the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decrease for summer, resulting in a strong increase of the intra-annual variability for most of Europe. The latter is particularly probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes in the inter-annual variability of Eout for Central Europe, the future projections vary strongly between individual models and also between future periods and scenarios within single models. This study showed, for an ensemble of 22 CMIP5 models, that changes in the wind energy potential over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi-model ensembles are needed to provide a better quantification and understanding of the future changes.
Abstract:
The regional climate modelling system PRECIS was run at 25 km horizontal resolution for 150 years (1949-2099) using global driving data from a five-member perturbed-physics ensemble (based on the coupled global climate model HadCM3). Output from these simulations was used to investigate projected changes in tropical cyclones (TCs) over Vietnam and the South China Sea due to global warming (under SRES scenario A1B). Thirty-year climatological mean periods were used to look at projected changes in future (2069-2098) TCs compared to a 1961-1990 baseline. Present-day results were compared qualitatively with IBTrACS observations and found to be reasonably realistic. Future projections show a 20-44 % decrease in TC frequency, although the spatial patterns of change differ between the ensemble members, and an increase of 27-53 % in the amount of TC-associated precipitation. No statistically significant changes in TC intensity were found; however, the occurrence of more intense TCs (defined as those with a maximum 10 m wind speed > 35 m/s) was found to increase by 3-9 %. Projected increases in TC-associated precipitation are likely caused by increased evaporation and availability of atmospheric water vapour, due to increased sea surface and atmospheric temperature. The mechanisms behind the projected changes in TC frequency are difficult to link explicitly; the changes are most likely due to the combination of increased static stability, increased vertical wind shear and decreased upward motion, which suggest a decrease in the tropical overturning circulation.
Abstract:
This study has explored the prediction errors of tropical cyclones (TCs) in the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) for the Northern Hemisphere summer period in five recent years. Results for the EPS are contrasted with those for the higher-resolution deterministic forecasts. Various metrics of location and intensity errors are considered and contrasted for verification based on IBTrACS and the numerical weather prediction (NWP) analysis (NWPa). Motivated by the aim of exploring extended TC life cycles, location and intensity measures are introduced based on lower-tropospheric vorticity, which are contrasted with traditional verification metrics. Results show that location errors are almost identical when verified against IBTrACS or the NWPa. However, intensity, in the form of the mean sea level pressure (MSLP) minima and 10-m wind speed maxima, is significantly underpredicted relative to IBTrACS. Using the NWPa for verification results in much better consistency between the different intensity error metrics and indicates that the lower-tropospheric vorticity provides a good indication of vortex strength, with error results showing similar relationships to those based on MSLP and 10-m wind speeds for the different forecast types. The interannual variation in forecast errors is discussed in relation to changes in the forecast and NWPa system, and variations in forecast errors between different ocean basins are discussed in terms of the propagation characteristics of the TCs.
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
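For orientation, the linear update that follows from the Gaussian-prior assumption can be sketched as follows (the notation here is an assumption for illustration, not taken from the paper): with prior model state $\psi^f$, prior error covariance $P^f$, measurement operator $H$, observation-error covariance $R$, and observations $d$, the variance-minimizing analysis is

$$
\psi^a = \psi^f + P^f H^{\top} \left( H P^f H^{\top} + R \right)^{-1} \left( d - H \psi^f \right).
$$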
Abstract:
It is formally proved that the general smoother for nonlinear dynamics can be formulated as a sequential method, that is, observations can be assimilated sequentially during a forward integration. The general filter can be derived from the smoother and it is shown that the general smoother and filter solutions at the final time become identical, as is expected from linear theory. Then, a new smoother algorithm based on ensemble statistics is presented and examined in an example with the Lorenz equations. The new smoother can be computed as a sequential algorithm using only forward-in-time model integrations. It bears a strong resemblance with the ensemble Kalman filter. The difference is that every time a new dataset is available during the forward integration, an analysis is computed for all previous times up to this time. Thus, the first guess for the smoother is the ensemble Kalman filter solution, and the smoother estimate provides an improvement of this, as one would expect a smoother to do. The method is demonstrated in this paper in an intercomparison with the ensemble Kalman filter and the ensemble smoother introduced by van Leeuwen and Evensen, and it is shown to be superior in an application with the Lorenz equations. Finally, a discussion is given regarding the properties of the analysis schemes when strongly non-Gaussian distributions are used. It is shown that in these cases more sophisticated analysis schemes based on Bayesian statistics must be used.
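A minimal sketch of the sequential-smoother idea described above, assuming a generic linear measurement operator and perturbed observations; this illustrates the general technique under those assumptions and is not the paper's implementation.

```python
# Sketch of a sequential ensemble smoother step: when a new observation
# vector arrives, the ensemble-space weights computed for the current time
# are applied to the stored ensembles at all earlier times as well.
import numpy as np

def smoother_update(history, obs, H, R, rng):
    """history: list of (n_state, n_members) ensembles, one per stored time,
    with the current forecast ensemble last; returns the updated list."""
    current = history[-1]
    n_members = current.shape[1]
    A = current - current.mean(axis=1, keepdims=True)   # current anomalies
    S = H @ A                                           # observed anomalies
    C = S @ S.T / (n_members - 1) + R
    # One perturbed observation vector per member (consistent statistics).
    obs_ens = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_members).T
    D = obs_ens - H @ current                           # innovations
    # Ensemble-space weights; the same weights update every stored time.
    W = S.T @ np.linalg.inv(C) @ D / (n_members - 1)
    return [X + (X - X.mean(axis=1, keepdims=True)) @ W for X in history]
```

Because the weights live in ensemble space, only forward model integrations are required, consistent with the sequential formulation described in the abstract.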
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that then is used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
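A minimal sketch of the perturbed-observation analysis step described above; the function name, array shapes, and linear measurement operator are assumptions for illustration, not the paper's code.

```python
# Stochastic EnKF analysis: each member is updated against its own randomly
# perturbed copy of the observations, so the analysed ensemble keeps the
# correct (Kalman filter) variance instead of collapsing.
import numpy as np

def enkf_analysis(ensemble, obs, H, R, rng):
    """ensemble: (n_state, n_members); obs: (n_obs,);
    H: (n_obs, n_state); R: (n_obs, n_obs)."""
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = anomalies @ anomalies.T / (n_members - 1)   # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Draw perturbations from N(0, R): an ensemble of observations.
    obs_ens = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_members).T
    # Update every member towards its own perturbed observation.
    return ensemble + K @ (obs_ens - H @ ensemble)
```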
Abstract:
The ring-shedding process in the Agulhas Current is studied using the ensemble Kalman filter to assimilate Geosat altimeter data into a two-layer quasigeostrophic ocean model. The properties of the ensemble Kalman filter are further explored with focus on the analysis scheme and the use of gridded data. The Geosat data consist of 10 fields of gridded sea-surface height anomalies, separated 10 days apart, that are added to a climatic mean field. This corresponds to a huge number of data values, and a data reduction scheme must be applied to increase the efficiency of the analysis procedure. Further, it is illustrated how one can resolve the rank problem that occurs when too large a dataset or too small an ensemble is used.
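One common way to handle the rank problem mentioned above is a truncated-SVD pseudo-inverse of the (effectively rank-deficient) innovation covariance; the sketch below is a generic illustration of that technique under assumed names, not necessarily the scheme used in the paper.

```python
# Truncated SVD pseudo-inverse: when the number of gridded observations
# exceeds the ensemble size, the matrix C = H Pf H^T + R is effectively
# rank deficient; keeping only the leading singular values stabilises it.
import numpy as np

def truncated_pinv(C, keep=0.99):
    """Pseudo-invert C, retaining the singular values that account for a
    fraction `keep` of the total; the remainder is treated as noise."""
    U, s, Vt = np.linalg.svd(C)
    n = int(np.searchsorted(np.cumsum(s) / s.sum(), keep)) + 1
    return Vt[:n].T @ np.diag(1.0 / s[:n]) @ U[:, :n].T
```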
Abstract:
Quantifying the effect of seawater density changes on sea level variability is of crucial importance for climate change studies, as the cumulative sea level rise can be regarded as both an important climate change indicator and a possible danger for human activities in coastal areas. In this work, as part of the Ocean Reanalysis Intercomparison Project, the global and regional steric sea level changes are estimated and compared from an ensemble of 16 ocean reanalyses and 4 objective analyses. These estimates are initially compared with a satellite-derived (altimetry minus gravimetry) dataset for a short period (2003–2010). The ensemble mean exhibits a high and significant correlation at both global and regional scale, and the ensemble of ocean reanalyses outperforms that of the objective analyses, in particular in the Southern Ocean. The reanalysis ensemble mean thus represents a valuable tool for further analyses, although large uncertainties remain for the inter-annual trends. Within the extended intercomparison period that spans the altimetry era (1993–2010), we find that the ensembles of reanalyses and objective analyses are in good agreement, and both detect a global steric sea level trend of 1.0 and 1.1 ± 0.05 mm/year, respectively. However, the spread among the products in the halosteric component trend exceeds the mean trend itself, calling the reliability of its estimate into question. This is related to the scarcity of salinity observations before the Argo era. Furthermore, the impact of the deep ocean layers on the steric sea level variability is non-negligible (22 and 12 % for the layers below 700 and 1500 m depth, respectively), although the small deep ocean trends are not significant with respect to the product spread.
Abstract:
Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds the ensemble spread, indicating that OHC variations are well-constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well-constrained by observations, with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data-sparse Southern Hemisphere, and a number of ORAs seem to be subject to a large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 and 700 m during the mid-1990s or early 2000s. The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m−2 (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m−2 between 2004 and 2006, in qualitative agreement with recent analyses of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends, especially when considering the deeper ocean, where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both to provide observational constraints for future ocean state estimation efforts and to develop improved models and data assimilation methods.
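For concreteness, the rolling 5-year trend diagnostic reported above could be computed roughly as in the sketch below; the array names and shapes are assumptions for illustration, not the intercomparison's code.

```python
# Rolling 5-year linear trends of annual full-depth OHC for each product,
# summarised as the ensemble mean +/- one standard deviation.
import numpy as np

def rolling_trends(ohc, window=5):
    """ohc: (n_products, n_years) annual OHC anomalies; returns an array of
    shape (n_products, n_years - window + 1) of per-window linear trends."""
    t = np.arange(window)
    return np.array([
        [np.polyfit(t, series[i:i + window], 1)[0]
         for i in range(series.size - window + 1)]
        for series in ohc
    ])

# Example use: mean and spread across products for each 5-year window.
# trends = rolling_trends(ohc)
# ensemble_mean, ensemble_spread = trends.mean(axis=0), trends.std(axis=0)
```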