956 results for Integral turbulent time scales
Abstract:
Salt transport in the Irminger Current and thus the coupling between eastern and western subpolar North Atlantic plays an important role for climate variability across a wide range of time scales. High-resolution ocean modeling and observations indicate that salinities in the eastern subpolar North Atlantic decrease with enhanced circulation of the North Atlantic subpolar gyre (SPG). This has led to the perception that a stronger SPG also transports less salt westward. In this study, we analyze a regional ocean model and a comprehensive global coupled climate model, and show that a stronger SPG transports more salt in the Irminger Current irrespective of lower salinities in its source region. The additional salt converges in the Labrador Sea and the Irminger Basin by eddy transports, increases surface salinity in the western SPG, and favors more intense deep convection. This is part of a positive feedback mechanism with potentially large implications for climate variability and predictability.
Abstract:
Palynology provides the opportunity to make inferences on changes in diversity of terrestrial vegetation over long time scales. The often coarse taxonomic level achievable in pollen analysis, differences in pollen production and dispersal, and the lack of pollen source boundaries hamper the application of diversity indices to palynology. Palynological richness, the number of pollen types at a constant pollen count, is the most robust and widely used diversity indicator for pollen data. However, this index is also influenced by the abundance distribution of pollen types in sediments. In particular, where the index is calculated by rarefaction analysis, information on taxonomic richness at low abundance may be lost. Here we explore information that can be extracted from the accumulation of taxa over consecutive samples. The log-transformed taxa accumulation curve can be broken up into linear sections with different slope and intercept parameters, describing the accumulation of new taxa within each section. The break points may indicate changes in the species pool or in the abundance of high versus low pollen producers. Testing this concept on three pollen diagrams from different landscapes, we find that the break points in the taxa accumulation curves provide convenient zones for identifying changes in richness and evenness. The linear regressions over consecutive samples can be used to inter- and extrapolate to low or extremely high pollen counts, indicating evenness and richness in taxonomic composition within these zones. An evenness indicator based on the rank-order abundance is used to assist in the evaluation of the results and the interpretation of the fossil records. Two central European pollen diagrams show major changes in the taxa accumulation curves for the Lateglacial period and the time of human-induced land-use changes, while they do not indicate strong changes in the species pool with the onset of the Holocene.
In contrast, a central Swedish pollen diagram shows comparatively little change, but high richness during the early Holocene forest establishment. Evenness and palynological richness are related for most periods in the three diagrams; however, sections before forest establishment and after forest clearance show high evenness that is not necessarily accompanied by high palynological richness, encouraging efforts to separate the two.
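The taxa-accumulation idea can be sketched in code. The following Python is an illustrative sketch, not the authors' implementation: it builds the accumulation curve over consecutive pollen samples and locates a single break point in the log-log curve by exhaustive two-segment linear regression (function names and the single-break restriction are my own simplifications).

```python
import numpy as np

def taxa_accumulation(samples):
    """Cumulative count of distinct pollen types over consecutive samples."""
    seen, curve = set(), []
    for taxa in samples:
        seen.update(taxa)
        curve.append(len(seen))
    return np.array(curve)

def best_breakpoint(n_samples, curve):
    """Fit two linear sections to the log-transformed accumulation curve and
    return the split index that minimises total squared residuals."""
    x = np.log(np.asarray(n_samples, float))
    y = np.log(np.asarray(curve, float))
    best_k, best_sse = None, np.inf
    for k in range(2, len(x) - 2):            # at least two points per section
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            slope, intercept = np.polyfit(xs, ys, 1)
            sse += float(np.sum((ys - (slope * xs + intercept)) ** 2))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

In practice one would allow several break points and test their significance; this minimal version only shows how slope and intercept parameters describe the accumulation of new taxa within each section.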
Abstract:
Motivated by the reported dearth of debris discs around M stars, we use survival models to study the occurrence of planetesimal discs around them. These survival models describe a planetesimal disc with a small number of parameters, determine if it may survive a series of dynamical processes and compute the associated infrared excess. For the Wide-field Infrared Survey Explorer (WISE) satellite, we demonstrate that the dearth of debris discs around M stars may be attributed to the small semimajor axes generally probed if either (1) the dust grains behave like blackbodies emitting at a peak wavelength coincident with the observed one, or (2) the grains are hotter than predicted by their blackbody temperatures and emit at peak wavelengths shorter than the observed one. At these small distances from the M star, planetesimals are unlikely to survive or persist for time-scales of 300 Myr or longer if the disc is too massive. Conversely, our survival models allow for the existence of a large population of low-mass debris discs that are too faint to be detected with current instruments. We gain further confidence in our interpretation by demonstrating the ability to compute infrared excesses for Sun-like stars that are broadly consistent with reported values in the literature. However, our interpretation becomes less clear and large infrared excesses are allowed if instead (3) the dust grains are hotter than blackbody and predominantly emit at the observed wavelength, or (4) the grains are blackbody in nature and emit at peak wavelengths longer than the observed one. Both scenarios imply that the parent planetesimals reside at larger distances from the star than inferred if the dust grains behaved like blackbodies. In all scenarios, we show that the infrared excesses detected at 22 μm (via WISE) and 70 μm (via Spitzer) from AU Mic are easily reconciled with its young age (12 Myr).
Conversely, the existence of the old debris disc (2–8 Gyr) around GJ 581 is due to the large semimajor axes probed by the Herschel PACS instrument. We elucidate the conditions under which stellar wind drag may be neglected when considering dust populations around M stars. The WISE satellite should be capable of detecting debris discs around young M stars with ages ∼10 Myr.
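The link between observed wavelength and the semimajor axis probed (scenario 1, blackbody grains) can be illustrated with two textbook relations: Wien's displacement law and the blackbody equilibrium temperature T ≈ 278.3 K (L/L☉)^(1/4) (a/au)^(−1/2). This is a generic sketch of that reasoning, not the paper's survival model:

```python
import numpy as np

WIEN_UM_K = 2897.8   # Wien displacement constant, micron * K
T_BB_1AU = 278.3     # blackbody equilibrium temperature at 1 au around the Sun, K

def blackbody_temp(a_au, lstar_lsun=1.0):
    """Equilibrium temperature of a blackbody grain at a_au from the star."""
    return T_BB_1AU * lstar_lsun ** 0.25 / np.sqrt(a_au)

def semimajor_axis_probed(wavelength_um, lstar_lsun=1.0):
    """Distance whose blackbody grains peak at the observed wavelength (au)."""
    t = WIEN_UM_K / wavelength_um
    return (T_BB_1AU * lstar_lsun ** 0.25 / t) ** 2
```

For a fixed observing wavelength (e.g. WISE 22 μm) a low-luminosity M dwarf yields a much smaller semimajor axis than a Sun-like star, which is the sense in which WISE "probes small semimajor axes" around M stars.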
Abstract:
A multi-model analysis of Atlantic multidecadal variability is performed with the following aims: to investigate the similarities to observations; to assess the strength and relative importance of the different elements of the mechanism proposed by Delworth et al. (J Clim 6:1993–2011, 1993) (hereafter D93) among coupled general circulation models (CGCMs); and to relate model differences to mean systematic error. The analysis is performed with long control simulations from ten CGCMs, with lengths ranging between 500 and 3600 years. In most models the variations of sea surface temperature (SST) averaged over North Atlantic show considerable power on multidecadal time scales, but with different periodicity. The SST variations are largest in the mid-latitude region, consistent with the short instrumental record. Despite large differences in model configurations, we find considerable consistency among the models in terms of processes. In eight of the ten models the mid-latitude SST variations are significantly correlated with fluctuations in the Atlantic meridional overturning circulation (AMOC), suggesting a link to northward heat transport changes. Consistent with this link, the three models with the weakest AMOC have the largest cold SST bias in the North Atlantic. There is no linear relationship on decadal timescales between AMOC and North Atlantic Oscillation in the models. Analysis of the key elements of the D93 mechanism revealed the following: Most models present strong evidence that high-latitude winter mixing precedes AMOC changes. However, the regions of wintertime convection differ among models. In most models salinity-induced density anomalies in the convective region tend to lead AMOC, while temperature-induced density anomalies lead AMOC only in one model. However, analysis shows that salinity may play an overly important role in most models, because of cold temperature biases in their relevant convective regions.
In most models subpolar gyre variations tend to lead AMOC changes, and this relation is strong in more than half of the models.
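Lead-lag relationships of the kind reported here ("winter mixing precedes AMOC", "subpolar gyre variations lead AMOC") are typically diagnosed with lagged correlations between two index time series. A minimal sketch of such a diagnostic (my own helper, not the study's analysis code):

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """Pearson correlation of x[t] with y[t + lag] for each lag in
    [-max_lag, max_lag]; a maximum at a positive lag means x leads y."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag] if lag else x, y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```

Applied to, say, an annual-mean convection-depth index and an AMOC strength index from a control simulation, the lag of the correlation maximum indicates which quantity leads.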
Abstract:
Since multi-site reconstructions are less affected by site-specific climatic effects and artefacts, regional palaeotemperature reconstructions based on a number of sites can provide more robust estimates of centennial- to millennial-scale temperature trends than individual, site-specific records. Furthermore, reconstructions based on multiple records are necessary for developing continuous climate records over time scales longer than covered by individual sequences. Here, we present a procedure for developing such reconstructions based on relatively short (centuries to millennia), discontinuously sampled records as are typically developed when using biotic proxies in lake sediments for temperature reconstruction. The approach includes an altitudinal correction of temperatures, an interpolation of individual records to equal time intervals, a stacking procedure for sections of the interval of interest that have the same records available, as well as a splicing procedure to link the individual stacked records into a continuous reconstruction. Variations in the final, stacked and spliced reconstruction are driven by variations in the individual records, whereas the absolute temperature values are determined by the stacked segment based on the largest number of records. With numerical simulations based on the NGRIP δ18O record, we demonstrate that the interpolation and stacking procedure provides an approximation of a smoothed palaeoclimate record if based on a sufficient number of discontinuously sampled records. Finally, we provide an example of a stacked and spliced palaeotemperature reconstruction 15000–90 calibrated 14C yr BP based on six chironomid records from the northern and central Swiss Alps and eastern France to discuss the potential and limitations of this approach.
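The core steps of the procedure — altitude correction, interpolation to equal time intervals, and stacking of whichever records overlap — can be sketched as below. The lapse-rate value, function names, and the simple per-step averaging are illustrative assumptions, and the splicing step that links stacked segments into one continuous record is omitted:

```python
import numpy as np

LAPSE_RATE = 0.006   # K per metre -- an assumed constant lapse rate

def altitude_correct(temp_c, site_elev_m, ref_elev_m):
    """Project a site temperature record to a common reference elevation."""
    return np.asarray(temp_c, float) + LAPSE_RATE * (site_elev_m - ref_elev_m)

def to_grid(ages, values, grid):
    """Resample one record onto an equally spaced age grid (ages must be
    increasing); grid points outside the record's range become NaN."""
    return np.interp(grid, ages, values, left=np.nan, right=np.nan)

def stack(records, grid):
    """Average whichever records cover each grid point."""
    resampled = np.vstack([to_grid(a, v, grid) for a, v in records])
    return np.nanmean(resampled, axis=0)
```

A real splice would additionally shift each stacked segment so that its mean matches the neighbouring segment over their shared interval, with absolute values anchored to the segment built from the most records.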
Abstract:
Glacier fluctuations are a key indicator of changing climate. Their reconstruction beyond historical times unravels glacier variability and its forcing factors on long time scales, which can considerably improve our understanding of the climate–glacier relationship. Here, we present a 2250-year-long reconstruction of particle-mass accumulation rates recorded in the lacustrine sediments of Lake Trüebsee (Central Swiss Alps) that are directly related to glacier extent, thus reflecting a continuous record of fluctuations of the upstream-located Titlis Glacier. Mass accumulation rate values show strong centennial to multi-centennial fluctuations and reveal 12 well-pronounced periods of enhanced values corresponding to times of maximum extent of the neighboring Lower Grindelwald Glacier. This result supports previous studies of proglacial lake sediments that documented high mass accumulation rate values during glacier advances. The strong variability in the Lake Trüebsee mass accumulation rate record thus represents a highly sensitive paleoclimatic archive, which mirrors rapid and pronounced responses of Titlis Glacier to climatic changes over the past 2250 years. The comparison of our data with independent paleo-temperature reconstructions from tree rings suggests that variations in mean summer temperature were the primary driving factor of fluctuations of Titlis Glacier. Also, advances of Titlis Glacier occurred during the grand solar minima (Dalton, Maunder, Spörer, Wolf) of the last millennium. This relation of glacier extent with summer temperature reveals strong evidence that the mass balance of this Alpine glacier is primarily controlled by the intensity of glacier melting during summer.
Abstract:
Changes in fire occurrence during the last decades in the southern Swiss Alps make knowledge of fire history essential to understand the future evolution of ecosystem composition and functioning. In this context, palaeoecology provides useful insights into processes operating at decadal-to-millennial time scales, such as the response of plant communities to intensified fire disturbances during periods of cultural change. We provide a high-resolution macroscopic charcoal and pollen series from Guèr, a well-dated peat sequence at mid-elevation (832 m.a.s.l.) in southern Switzerland, where the presence of local settlements is documented since the late Bronze Age and the Iron Age. Quantitative fire reconstruction shows that fire activity sharply increased from the Neolithic period (1–3 episodes/1000 year) to the late Bronze and Iron Age (7–9 episodes/1000 year), leading to extensive clearance of the former mixed deciduous forest (Alnus glutinosa, Betula, deciduous Quercus). The increase in anthropogenic pollen indicators (e.g. Cerealia-type, Plantago lanceolata) together with macroscopic charcoal suggests anthropogenic rather than climatic forcing as the main cause of the observed vegetation shift. Fire and controlled burning were extensively used during the late Roman Times and early Middle Ages to promote the introduction and establishment of chestnut (Castanea sativa) stands, which provided an important wood and food supply. Fire occurrence declined markedly (from 9 to 5–6 episodes/1000 year) during the late Middle Ages because of fire suppression, biomass removal by the human population, and landscape fragmentation. Land-abandonment during the last decades allowed forest to partly re-expand (mainly Alnus glutinosa, Betula) and fire frequency to increase.
Abstract:
This study aims at assessing the skill of several climate field reconstruction techniques (CFR) to reconstruct past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlations and reconstructed variance is the drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to reproduce better correlation at the expense of losing variance. While BHM has been shown to perform well for temperatures, it relies heavily on prescribed spatial correlation lengths. While this assumption is valid for temperature, it is hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause a particularly short de-correlation distance around each proxy location.
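The pseudoproxy setup and the variance loss common to regression-based CFR techniques can be illustrated with a single noisy pseudoproxy and ordinary least squares, a deliberately minimal stand-in for CCA or BHM (the white-noise contamination model and all names here are assumptions for illustration):

```python
import numpy as np

def make_pseudoproxy(truth, snr, rng):
    """Contaminate a model 'truth' series with Gaussian white noise so that
    std(signal) / std(noise) = snr."""
    noise = rng.standard_normal(truth.shape) * truth.std(ddof=0) / snr
    return truth + noise

def ols_reconstruction(proxy, target):
    """Calibrate target = a * proxy + b by least squares, then reconstruct."""
    a, b = np.polyfit(proxy, target, 1)
    return a * proxy + b
```

Because the regression reconstruction has variance r² · var(target), any r < 1 shrinks the reconstructed variance — the same trade-off between correlation and variance noted for the full CFR techniques.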
Abstract:
Rapid speciation can occur on ecological time scales and interfere with ecological processes, resulting in species distribution patterns that are difficult to reconcile with ecological theory. The haplochromine cichlids in East African lakes are an extreme example of rapid speciation. We analyse the causes of their high speciation rates. Various studies have identified disruptive sexual selection acting on colour polymorphisms that might cause sympatric speciation. Using data on geographical distribution, colouration and relatedness from 41 species endemic to Lake Victoria, we test predictions from this hypothesis. Plotting numbers of pairs of closely related species against the amount of distributional overlap between the species reveals a bimodal distribution, with modes corresponding to allopatric and sympatric species pairs. The proportion of sister species pairs that are heteromorphic for the traits under disruptive selection is higher in sympatry than in allopatry. These data support the hypothesis that disruptive sexual selection on colour polymorphisms has caused sympatric speciation and help to explain the rapid radiation of haplochromine species flocks.
Abstract:
Today's pulsed THz sources enable us to excite, probe, and coherently control the vibrational or rotational dynamics of organic and inorganic materials on ultrafast time scales. Driven by standard laser sources, THz electric field strengths of up to several MV m−1 have been reported, and in order to reach even higher electric field strengths, the use of dedicated electric field enhancement structures has been proposed. Here, we demonstrate resonant electric field enhancement structures, which concentrate the incident electric field in sub-diffraction size volumes and show an electric field enhancement as high as ~14,000 at 50 GHz. These values have been confirmed through a combination of near-field imaging experiments and electromagnetic simulations.
Abstract:
In any physicochemical process in liquids, the dynamical response of the solvent to the solutes out of equilibrium plays a crucial role in the rates and products: the solvent molecules react to the changes in volume and electron density of the solutes to minimize the free energy of the solution, thus modulating the activation barriers and stabilizing (or destabilizing) intermediate states. In charge transfer (CT) processes in polar solvents, the response of the solvent always assists the formation of charge separation states by stabilizing the energy of the localized charges. A deep understanding of the solvation mechanisms and time scales is therefore essential for a correct description of any photochemical process in the condensed phase and for designing molecular devices based on photosensitizers with CT excited states. In the last two decades, with the advent of ultrafast time-resolved spectroscopies, microscopic models describing the relevant case of polar solvation (where both the solvent and the solute molecules have a permanent electric dipole and the mutual interaction is mainly dipole−dipole) have dramatically progressed. Regardless of the details of each model, they all assume that the effect of the electrostatic fields of the solvent molecules on the internal electronic dynamics of the solute is perturbative and that the solvent−solute coupling is mainly an electrostatic interaction between the constant permanent dipoles of the solute and the solvent molecules. This well-established picture has proven to quantitatively rationalize spectroscopic effects of environmental and electric dynamics (time-resolved Stokes shifts, inhomogeneous broadening, etc.). However, recent computational and experimental studies, including ours, have shown that further improvement is required.
Indeed, in the last years we investigated several molecular complexes exhibiting photoexcited CT states, and we found that the current description of the formation and stabilization of CT states in an important group of molecules such as transition metal complexes is inaccurate. In particular, we proved that the solvent molecules are not just spectators of intramolecular electron density redistribution but significantly modulate it. Our results solicit further development of quantum mechanics computational methods to treat the solute and (at least) the closest solvent molecules including the nonperturbative treatment of the effects of local electrostatics and direct solvent−solute interactions to describe the dynamical changes of the solute excited states during the solvent response.
Abstract:
High-resolution, ground-based and independent observations including co-located wind radiometer, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses including the recent Integrated Forecast System cycles 38r1 and 38r2, the NASA’s Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running climate Max Planck Institute–Earth System Model–Low Resolution (MPI-ESM-LR) are carried out in both temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for the temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter when the variability from large-scale planetary waves dominates. Between lidar data and MPI-ESM-LR, there is an overall agreement in spectral amplitude down to 15–20 days. At shorter time scales, the variability is lacking in the model by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project that integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
Abstract:
The spatial context is critical when assessing present-day climate anomalies, attributing them to potential forcings and making statements regarding their frequency and severity in a long-term perspective. Recent international initiatives have expanded the number of high-quality proxy-records and developed new statistical reconstruction methods. These advances allow more rigorous regional past temperature reconstructions and, in turn, the possibility of evaluating climate models on policy-relevant, spatio-temporal scales. Here we provide a new proxy-based, annually-resolved, spatial reconstruction of the European summer (June–August) temperature fields back to 755 CE based on Bayesian hierarchical modelling (BHM), together with estimates of the European mean temperature variation since 138 BCE based on BHM and composite-plus-scaling (CPS). Our reconstructions compare well with independent instrumental and proxy-based temperature estimates, but suggest a larger amplitude in summer temperature variability than previously reported. Both CPS and BHM reconstructions indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE. The 1st century (in BHM also the 10th century) may even have been slightly warmer than the 20th century, but the difference is not statistically significant. Comparing each 50 yr period with the 1951–2000 period reveals a similar pattern. Recent summers, however, have been unusually warm in the context of the last two millennia and there are no 30 yr periods in either reconstruction that exceed the mean European summer temperature of the last 3 decades (1986–2015 CE). A comparison with an ensemble of climate model simulations suggests that the reconstructed European summer temperature variability over the period 850–2000 CE reflects changes in both internal variability and external forcing on multi-decadal time-scales.
For pan-European temperatures we find slightly better agreement between the reconstruction and the model simulations with high-end estimates for total solar irradiance. Temperature differences between the medieval period, the recent period and the Little Ice Age are larger in the reconstructions than the simulations. This may indicate inflated variability of the reconstructions, a lack of sensitivity and processes to changes in external forcing on the simulated European climate and/or an underestimation of internal variability on centennial and longer time scales.
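Of the two reconstruction methods named above, composite-plus-scaling (CPS) is simple enough to sketch: standardise the proxies, average them into a composite, and rescale the composite to the mean and standard deviation of the instrumental target over a calibration window. A minimal illustration under those assumptions, not the paper's implementation:

```python
import numpy as np

def composite_plus_scale(proxies, instrumental, calib):
    """CPS: standardise each proxy, average into a composite, then rescale
    the composite to the instrumental mean and standard deviation over the
    calibration window `calib` (a slice into the proxy time axis)."""
    z = [(p - p.mean()) / p.std(ddof=0) for p in proxies]
    composite = np.mean(z, axis=0)
    c = composite[calib]
    standardised = (composite - c.mean()) / c.std(ddof=0)
    return standardised * instrumental.std(ddof=0) + instrumental.mean()
```

By construction the reconstruction matches the instrumental mean and variance over the calibration window, while its pre-instrumental variations are driven entirely by the proxy composite.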
Abstract:
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While the potential benefits are large, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.
Abstract:
We resolve the real-time dynamics of a purely dissipative s=1/2 quantum spin or, equivalently, hard-core boson model on a hypercubic d-dimensional lattice. The considered quantum dissipative process drives the system to a totally symmetric macroscopic superposition in each of the S3 sectors. Different characteristic time scales are identified for the dynamics and we determine their finite-size scaling. We introduce the concept of cumulative entanglement distribution to quantify multiparticle entanglement and show that the considered protocol serves as an efficient method to prepare a macroscopically entangled Bose-Einstein condensate.