977 results for CONVECTIVE PARAMETERIZATION
Abstract:
Liquid films, evaporating or non-evaporating, are ubiquitous in nature and technology. The dynamics of evaporating liquid films has applications in several industries, such as water recovery, heat exchangers, crystal growth, and drug design. The theory describing the dynamics of liquid films crosses several fields, including engineering, mathematics, materials science, biophysics, and volcanology. Interfacial instabilities typically manifest as the undulation of an interface from a presumed flat state, as the onset of a secondary flow state from a primary quiescent state, or both. To study the instabilities affecting liquid films, an evaporating or non-evaporating Newtonian liquid film is subjected to a perturbation. Numerical analysis is conducted on configurations of such liquid films heated on solid surfaces in order to examine the various stabilizing and destabilizing mechanisms that can cause the formation of different convective structures. These convective structures have implications for the heat transfer that occurs via this process. Certain aspects of this research topic have not received attention, as will be evident from the literature review. Static, horizontal liquid films on solid surfaces are examined for their resistance to long-wave instabilities via linear stability analysis, the method of normal modes, and finite difference methods. The spatiotemporal evolution equation available in the literature, describing the time evolution of a liquid film heated on a solid surface, is used to analyze the various stabilizing and destabilizing mechanisms affecting evaporating and non-evaporating liquid films. The impact of these mechanisms on film stability and structure for both buoyant and non-buoyant films is examined by varying the mechanical and thermal boundary conditions. Films evaporating in zero gravity are also studied using the evolution equation.
It is found that films that are stable to long-wave instabilities in terrestrial gravity are prone to destabilization via long-wave instabilities in zero gravity.
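The long-wave stability argument above can be illustrated with a minimal normal-mode calculation. The sketch below is generic, not the study's model: a flat film h = 1 is perturbed as h = 1 + eps*exp(ikx + wt), giving a growth rate w(k) = A*k^2 - B*k^4, where A lumps the destabilizing mechanisms (e.g. thermocapillarity or evaporation) and B the stabilizing capillary term; A and B are illustrative placeholders, not the study's dimensionless groups.

```python
import numpy as np

def growth_rate(k, A=1.0, B=1.0):
    """Growth rate omega(k) = A*k**2 - B*k**4 of a normal-mode perturbation
    h = 1 + eps*exp(i*k*x + omega*t) of a flat film. A lumps destabilizing
    effects, B the stabilizing capillary term (illustrative placeholders)."""
    return A * k**2 - B * k**4

k = np.linspace(0.0, 1.5, 1501)
omega = growth_rate(k)

k_max = k[np.argmax(omega)]   # fastest-growing wavenumber, sqrt(A/(2B)) ~ 0.707
k_cut = np.sqrt(1.0)          # neutral wavenumber sqrt(A/B): growth only for k < k_cut
```

Any mechanism that increases A (or a change of gravity that removes a stabilizing contribution from B) widens the band of unstable long wavelengths, which is the sense in which a film stable on Earth can destabilize in zero gravity.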
Abstract:
The motor-science evidence on the so-called "Quiet Eye" indicates that high levels of sport-motor performance, particularly in precision tasks, are accompanied by a long final fixation before movement onset. One mechanism that could explain this relationship from a cognitive-psychological perspective is the optimization of the information-processing involved in movement parameterization. This assumption was examined by experimentally manipulating goal instructions in a ball-throwing task. On the one hand, the results show that the spatial anchoring of the Quiet Eye changes as a function of the varied task goals; on the other hand, the findings indicate that changes in this anchoring are reflected in the movement outcome. This lends plausibility to a cognitive mechanism by which movement accuracy is determined by goal instructions via the spatial anchoring of the Quiet Eye.
Abstract:
Currently, the contributions of Starlette, Stella, and AJISAI are not taken into account when defining the International Terrestrial Reference Frame (ITRF), despite the large amount of data collected over a long time span. Consequently, the SLR-derived parameters and the SLR part of the ITRF are almost exclusively defined by LAGEOS-1 and LAGEOS-2. We investigate the potential of combining observations to several SLR satellites with different orbital characteristics. Ten years of SLR data are homogeneously processed using the development version 5.3 of the Bernese GNSS Software. Special emphasis is put on orbit parameterization and on the impact of LEO data on the estimation of the geocenter coordinates, Earth rotation parameters, Earth gravity field coefficients, and station coordinates in one common adjustment procedure. We find that the parameters derived from the multi-satellite solutions are of better quality than those obtained from single-satellite solutions or solutions based on the two LAGEOS satellites. A spectral analysis of the SLR network scale with respect to SLRF2008 shows that artifacts related to orbit perturbations in the LAGEOS-1/2 solutions, i.e., periods related to the draconitic years of the LAGEOS satellites, are greatly reduced in the combined solutions.
Abstract:
Correct estimation of the firn lock-in depth is essential for correctly linking gas and ice chronologies in ice core studies. Here, two approaches to constraining the firn depth evolution in Antarctica over the last deglaciation are presented: outputs of a firn densification model, and measurements of δ15N of N2 in air trapped in ice cores, assuming that δ15N is only affected by gravitational fractionation in the firn column. Since the firn densification process is largely governed by surface temperature and accumulation rate, we have investigated four ice cores drilled in coastal (Berkner Island, BI, and James Ross Island, JRI) and semi-coastal (TALDICE and EPICA Dronning Maud Land, EDML) Antarctic regions. Combined with available ice core air-δ15N measurements from the EPICA Dome C (EDC) site, the studied regions encompass a large range of surface accumulation rates and temperature conditions. Our δ15N profiles reveal a heterogeneous response of the firn structure to glacial–interglacial climatic changes. While firn densification simulations correctly predict the TALDICE δ15N variations, they systematically fail to capture the large millennial-scale δ15N variations measured at BI and the glacial δ15N levels measured at JRI and EDML – a mismatch previously reported for central East Antarctic ice cores. New constraints on the EDML gas–ice depth offset during the Laschamp event (~41 ka) and the last deglaciation do not favour the hypothesis of a large convective zone within the firn as the explanation for the glacial firn model–δ15N data mismatch at this site. While we could not conduct an in-depth study of the influence of impurities in snow on firnification with the existing datasets, our detailed comparison between the δ15N profiles and firn model simulations under different temperature and accumulation rate scenarios suggests that the role of accumulation rate may have been underestimated in the current description of firnification models.
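The gravitational-fractionation assumption mentioned above has a compact closed form: the barometric enrichment of δ15N of N2 at the bottom of a stagnant (diffusive) firn column. The sketch below states that expression and inverts it for the diffusive column height; the numerical example is illustrative and not a value from the study, and thermal fractionation and any convective zone are neglected by construction.

```python
import math

G = 9.81      # m s-2, gravitational acceleration
R = 8.314     # J mol-1 K-1, gas constant
DM = 1.0e-3   # kg mol-1, mass difference between 15N14N and 14N2

def d15n_grav(z, T):
    """Gravitational enrichment (permil) of d15N of N2 at depth z (m) in a
    purely diffusive firn column at temperature T (K)."""
    return (math.exp(DM * G * z / (R * T)) - 1.0) * 1000.0

def diffusive_column_height(d15n, T):
    """Invert the barometric expression for the diffusive column height (m)."""
    return math.log(1.0 + d15n / 1000.0) * R * T / (DM * G)

z = diffusive_column_height(0.50, 230.0)   # 0.50 permil measured at 230 K
print(round(z, 1))                          # ~97 m
```

A measured δ15N that falls short of (or exceeds) the value implied by an independently modelled firn depth is exactly the kind of model–data mismatch discussed in the abstract.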
Abstract:
The cyclonic circulation of the Atlantic subpolar gyre is a key mechanism for North Atlantic climate variability on a wide range of time scales. It is generally accepted that it is driven by both cyclonic winds and buoyancy forcing, yet the individual importance and dynamical interactions of the two contributions remain unclear. The authors propose a simplified four-box model representing the convective basin of the Labrador Sea and its shallow and deep boundary current system, the western subpolar gyre. Convective heat loss drives a baroclinic flow of relatively light water around the dense center. Eddy salt flux from the boundary current to the center increases with a stronger circulation, favors the formation of dense waters, and thereby sustains a strong baroclinic flow, approximately 10%–25% of the total. In contrast, when the baroclinic flow is not active, surface waters may be too fresh to convect, and a buoyancy-driven circulation cannot develop. This situation corresponds to a second stable circulation mode. A hysteresis is found for variations in surface freshwater flux and the salinity of the near-surface boundary current. An analytical solution is presented and analyzed.
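The bistability described above (a buoyancy-driven mode and a non-convecting mode, with hysteresis in the freshwater forcing) can be caricatured with a classic Stommel-type two-box salinity balance rather than the authors' four-box Labrador Sea model; everything below is a hedged toy with a fixed temperature contrast of 1 and nondimensional units.

```python
def integrate(y0, F, dt=0.01, t_end=50.0):
    """Forward-Euler integration of a nondimensional salinity contrast y:
        dy/dt = F - y * |1 - y|
    Surface freshwater forcing F is balanced by an exchange flow whose
    strength |1 - y| is set by the density contrast (temperature contrast
    fixed at 1). Stommel-type toy, not the paper's four-box model."""
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * (F - y * abs(1.0 - y))
    return y

F = 0.2                      # below the saddle-node bifurcation at F = 0.25
strong = integrate(0.1, F)   # thermally dominated mode, y ~ 0.28
weak = integrate(1.5, F)     # salinity-dominated mode, y ~ 1.17
```

Two different initial states relax to two different stable equilibria under the same forcing, which is the minimal ingredient behind the hysteresis in surface freshwater flux reported in the abstract.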
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
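The core Fourier-domain operation described in METHODS, dividing out one system's transfer function and imposing the virtual system's Hann MTF, can be sketched in one dimension as follows. The Gaussian MTF assumed for "system A" and all parameter values are illustrative, not the paper's measured point spread functions; a real workflow would measure the MTF with the sphere phantom.

```python
import numpy as np

def gaussian_mtf(f, sigma):
    """MTF of an assumed Gaussian PSF with spatial std sigma (illustrative;
    real tomograph PSFs would be measured, e.g. with a 68Ge/68Ga phantom)."""
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

def hann_mtf(f, fc):
    """MTF of the virtual PET: a Hann window, zero above the cutoff fc."""
    return np.where(np.abs(f) <= fc, 0.5 * (1.0 + np.cos(np.pi * f / fc)), 0.0)

def transconvolve(img, dx, sigma_a, fc):
    """Map an image from system A onto the virtual system: divide out A's
    MTF and impose the Hann MTF, all in Fourier space. The Hann cutoff
    suppresses the frequencies where the division would blow up."""
    f = np.fft.fftfreq(img.size, d=dx)
    T = hann_mtf(f, fc) / gaussian_mtf(f, sigma_a)  # transconvolution filter
    return np.real(np.fft.ifft(np.fft.fft(img) * T))

# demo: a point source seen by system A, then mapped onto the virtual system
x = np.zeros(256); x[128] = 1.0
blurred = np.real(np.fft.ifft(np.fft.fft(x) * gaussian_mtf(np.fft.fftfreq(256), 2.0)))
virtual_view = transconvolve(blurred, 1.0, 2.0, 0.2)
```

Because the Hann window vanishes above its cutoff, the filter never divides by a vanishing MTF, which is the role the abstract assigns to the apodization boundary condition.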
Abstract:
The importance of soil moisture anomalies for airmass convection over semiarid regions has been recognized in several studies, but the underlying mechanisms remain partly unclear. An open question is why wetter soils can result in either an increase or a decrease of precipitation (a positive or negative soil moisture–precipitation feedback, respectively). Here an idealized cloud-resolving modeling framework is used to explore the local soil moisture–precipitation feedback. The approach is able to replicate both positive and negative feedback loops, depending on the environmental parameters. The mechanism relies on horizontal soil moisture variations, which may develop and intensify spontaneously. The positive expression of the feedback is associated with the initiation of convection over dry soil patches; the convective cells then propagate over wet patches, where they strengthen and preferentially precipitate. The negative feedback may occur when the wind profile is too weak to support the propagation of convective features from dry to wet areas; precipitation is then generally weaker and falls preferentially over dry patches. The results highlight the role of the midtropospheric flow in determining the sign of the feedback. A key element of the positive feedback is the exploitation of both low convective inhibition (CIN) over dry patches (for the initiation of convection) and high convective available potential energy (CAPE) over wet patches (for the generation of precipitation).
Abstract:
Thermal convection in the Antarctic and Greenland ice sheets has been dismissed on the grounds that radio-echo stratigraphy is undisturbed for long distances. However, the undisturbed stratigraphy lies, for the most part, above the density inversion in polar ice sheets and therefore does not disprove convection. An echo-free zone is widespread below the density inversion, yet nobody has cited this as a strong indication that convection is indeed present at depth. A generalized Rayleigh criterion for thermal convection in elastic-viscoplastic polycrystalline solids heated from below is developed and applied to ice-sheet convection. An infinite Rayleigh number at the onset of primary creep decreases with time and becomes constant when secondary creep dominates, suggesting that any thermal buoyancy stress can initiate convection but convection cannot be sustained below a buoyancy stress of about 3 kPa. An analysis of the temperature profile down the Byrd Station core hole suggests that about 1000 m of ice below the density inversion will sustain convection. Creep along the Byrd Station strain network, radar sounding in East Antarctica, and seismic sounding in West Antarctica are examined for evidence of convective creep superimposed on advective creep. It is concluded that the evidence for convection is there, if we look for it with the intention of finding it.
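The 3 kPa sustainability threshold quoted above invites a back-of-the-envelope check: the thermal buoyancy stress rho*g*alpha*dT*d of a warm basal layer can be compared against it. The parameter values below (volumetric expansion coefficient, a 20 K contrast across a 1000 m layer) are illustrative assumptions, not numbers taken from the study.

```python
RHO_ICE = 917.0    # kg m-3, density of ice
G = 9.81           # m s-2
ALPHA = 1.5e-4     # K-1, volumetric thermal expansion of ice (illustrative)

SIGMA_MIN = 3.0e3  # Pa, minimum buoyancy stress to sustain convection (from the text)

def buoyancy_stress(delta_t, layer_m):
    """Thermal buoyancy stress rho*g*alpha*dT*d of a heated basal ice layer."""
    return RHO_ICE * G * ALPHA * delta_t * layer_m

sigma = buoyancy_stress(20.0, 1000.0)   # ~20 K contrast over ~1000 m
print(f"{sigma / 1e3:.1f} kPa, above threshold: {sigma > SIGMA_MIN}")
```

With these rough numbers the stress comes out well above 3 kPa, consistent with the abstract's suggestion that a roughly 1000 m basal layer could sustain convection.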
Abstract:
A highly resolved Mt. Everest ice core reveals a decrease in marine and an increase in continental air masses related to relatively high summer surface pressure over Mongolia, and a reduction in northward incursions of the summer South Asian monsoon since ~1400 AD. Previously published proxy records from lower sites south of the Himalayas indicate a strengthening of the monsoon since this time. These regional differences are consistent with a south-north seesaw in convective activity in the Asian monsoon region, and reflect a southward shift in the mean summer position of the monsoon trough since ~1400 AD. The change in monsoonal circulation at ~1400 AD is synchronous with a reduction in solar irradiance and the onset of the Little Ice Age (LIA). This demonstrates a hemispheric-scale circulation reorganization at this time, and the potential for future large shifts in monsoonal circulation.
Abstract:
This study examines how different microphysical parameterization schemes influence orographically induced precipitation and the distributions of hydrometeors and water vapour for midlatitude summer conditions in the Weather Research and Forecasting (WRF) model. A high-resolution, two-dimensional idealized simulation is used to assess the differences between the schemes as a moist air flow interacts with a bell-shaped, 2 km high mountain. Periodic lateral boundary conditions are chosen to recirculate atmospheric water in the domain. It is found that the 13 selected microphysical schemes conserve the water in the model domain: the gain or loss of water is less than 0.81% over a simulation time interval of 61 days. The differences between the microphysical schemes in terms of the distributions of water vapour, hydrometeors and accumulated precipitation are presented and discussed. The Kessler scheme, the only scheme without ice-phase processes, shows final values of cloud liquid water 14 times greater than the other schemes. The differences among the other schemes are not as extreme, but they still differ by up to 79% in water vapour, up to a factor of 10 in hydrometeors and up to 64% in accumulated precipitation at the end of the simulation. The microphysical schemes also differ in the surface evaporation rate: the WRF single-moment 3-class scheme has the highest surface evaporation rate, compensated by the highest precipitation rate. The different distributions of hydrometeors and water vapour between the microphysical schemes induce differences of up to 49 W m−2 in the downwelling shortwave radiation and up to 33 W m−2 in the downwelling longwave radiation.
Abstract:
We estimate the effects of climatic changes, as predicted by six climate models, on lake surface temperatures on a global scale, using the lake surface equilibrium temperature as a proxy. We evaluate interactions between different forcing variables, the sensitivity of lake surface temperatures to these variables, as well as differences between climate zones. Lake surface equilibrium temperatures are predicted to increase by 70 to 85 % of the increase in air temperatures. On average, air temperature is the main driver for changes in lake surface temperatures, and its effect is reduced by ~10 % by changes in other meteorological variables. However, the contribution of these other variables to the variance is ~40 % of that of air temperature, and their effects can be important at specific locations. The warming increases the importance of longwave radiation and evaporation for the lake surface heat balance compared to shortwave radiation and convective heat fluxes. We discuss the consequences of our findings for the design and evaluation of different types of studies on climate change effects on lakes.
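The equilibrium-temperature proxy used above is the surface temperature at which the net heat flux (absorbed shortwave plus incoming longwave, minus emitted longwave, sensible and latent fluxes) vanishes. A minimal sketch with made-up bulk transfer coefficients (not the study's model) finds that temperature by bisection; with these illustrative numbers, the sensitivity to air temperature comes out broadly consistent with the quoted 70 to 85% range.

```python
import math

SIGMA = 5.67e-8   # W m-2 K-4, Stefan-Boltzmann constant

def net_heat_flux(tw, ta, sw, rh=0.7, c_sens=10.0, c_lat=15.0, eps=0.97):
    """Net surface heat flux (W m-2) into a lake at surface temperature tw
    (degC), given air temperature ta (degC) and absorbed shortwave sw
    (W m-2). Bulk coefficients (W m-2 K-1 and W m-2 hPa-1) and the clear-sky
    atmospheric emissivity 0.8 are illustrative assumptions."""
    es = lambda t: 6.11 * math.exp(17.27 * t / (t + 237.3))  # Magnus, hPa
    lw_in = 0.8 * SIGMA * (ta + 273.15) ** 4
    lw_out = eps * SIGMA * (tw + 273.15) ** 4
    sens = c_sens * (tw - ta)
    lat = c_lat * (es(tw) - rh * es(ta))
    return sw + lw_in - lw_out - sens - lat

def equilibrium_temperature(ta, sw, lo=-5.0, hi=60.0):
    """Bisect for the surface temperature at which the net flux vanishes
    (the flux is monotonically decreasing in tw)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_heat_flux(mid, ta, sw) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_e = equilibrium_temperature(ta=15.0, sw=200.0)
dtdta = equilibrium_temperature(16.0, 200.0) - t_e   # warming per +1 K of air temperature
```

The damping below 1 K per K arises because the emitted longwave and, especially, the saturation vapour pressure in the latent flux grow with surface temperature, which is the interplay of forcing variables the abstract analyses.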
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
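The claim that the photospheric optical depth is not always 2/3 is made relative to the classic grey, internally heated baseline, where it is exactly 2/3. That baseline, the Milne/Eddington profile, is easy to write down; the sketch below is the textbook formula only, not the paper's generalized solution with irradiation and non-isotropic scattering.

```python
import numpy as np

def milne_temperature(tau, t_int):
    """Classic Milne (Eddington) temperature profile for a grey atmosphere
    heated only from below: T^4 = (3/4) * T_int^4 * (tau + 2/3)."""
    return t_int * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

tau = np.logspace(-3, 2, 200)
t_profile = milne_temperature(tau, 1000.0)

# In this baseline, T equals the internal (effective) temperature exactly at
# tau = 2/3; irradiation and scattering shift the photospheric optical depth.
t_phot = milne_temperature(2.0 / 3.0, 1000.0)
```

The paper's point is that once stellar irradiation, internal heat and scattering properties enter, the depth where T matches the effective temperature moves away from this canonical 2/3.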
Abstract:
A multi-model analysis of Atlantic multidecadal variability is performed with the following aims: to investigate the similarities to observations; to assess the strength and relative importance of the different elements of the mechanism proposed by Delworth et al. (J Clim 6:1993–2011, 1993) (hereafter D93) among coupled general circulation models (CGCMs); and to relate model differences to mean systematic error. The analysis is performed with long control simulations from ten CGCMs, with lengths ranging between 500 and 3600 years. In most models the variations of sea surface temperature (SST) averaged over the North Atlantic show considerable power on multidecadal time scales, but with different periodicity. The SST variations are largest in the mid-latitude region, consistent with the short instrumental record. Despite large differences in model configurations, we find considerable consistency among the models in terms of processes. In eight of the ten models the mid-latitude SST variations are significantly correlated with fluctuations in the Atlantic meridional overturning circulation (AMOC), suggesting a link to changes in northward heat transport. Consistent with this link, the three models with the weakest AMOC have the largest cold SST bias in the North Atlantic. There is no linear relationship on decadal timescales between the AMOC and the North Atlantic Oscillation in the models. Analysis of the key elements of the D93 mechanism revealed the following: most models present strong evidence that high-latitude winter mixing precedes AMOC changes, although the regions of wintertime convection differ among models. In most models salinity-induced density anomalies in the convective region tend to lead the AMOC, while temperature-induced density anomalies lead the AMOC in only one model. However, analysis shows that salinity may play an overly important role in most models, because of cold temperature biases in their relevant convective regions.
In most models subpolar gyre variations tend to lead AMOC changes, and this relation is strong in more than half of the models.
Abstract:
The 146Sm–142Nd system plays a central role in tracing the silicate differentiation of the Earth prior to 4.1 Ga. After this time, given its initial abundance, 146Sm can be considered effectively extinct. Upadhyay et al. (2009) reported unexpected negative 142Nd anomalies in 1.48 Ga rocks of the Khariar nepheline syenite complex (India) and inferred that an early enriched, low-Sm/Nd reservoir must have contributed to the mantle source rocks of the Khariar complex. As 146Sm had been effectively extinct for about 2.6 billion years before the crystallisation of the Khariar samples, this Nd signature should have remained isolated from the convective mantle for at least that long. It was thus suggested that the source rock of the Khariar samples had been sequestered in the lithospheric root of the Indian craton. Using a different chemical separation method and a different Thermal Ionization Mass Spectrometry (TIMS) analysis protocol, the present study attempted to replicate these negative 142Nd anomalies, but none were found. To determine which data set is correct, we investigated three possible sources of bias between them: imperfect cancellation of Faraday collector efficiencies during multidynamic TIMS analysis, rapid sample fractionation between the sequential measurement of 146Nd/144Nd and 142Nd/144Nd, and non-exponential law behaviour resulting from so-called “domain mixing.” Incomplete cancellation of collector efficiencies was found to be unlikely to cause resolvable biases at the estimated level of variation among collector efficiencies. Even in the case of highly variable efficiencies and resolvable biases, there is no reason to suspect that they would reproducibly affect only four of the 10 rocks analysed by Upadhyay et al. (2009). Although domain mixing may explain apparent “reverse” fractionation trends observed in some TIMS analyses, it cannot be the cause of the apparent negative anomalies in the study of Upadhyay et al. (2009).
It was determined that rapid mass fractionation during the course of a multidynamic TIMS analysis can bias all measured Nd ratios. After applying an approximate correction for this effect, only one rock from Upadhyay et al. (2009) retained an apparent negative 142Nd anomaly. This, in conjunction with our new, anomaly-free data set measured at fractionation rates too low to cause bias, leads to the conclusion that the anomalies reported by Upadhyay et al. (2009) are a subtle and reproducible analytical artefact. The absence of negative 142Nd anomalies in these rocks relaxes the need for a mechanism (other than crust formation) that can isolate a Nd reservoir from the convective mantle for billions of years.
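The fractionation-correction step at issue above is internal normalization under the exponential law: infer the fractionation exponent from the measured 146Nd/144Nd against its accepted value, then strip the same fractionation from the measured 142Nd/144Nd. The sketch below shows that standard correction with a round trip on a synthetic, constant fractionation; the study's point is precisely that rapid drift between the sequential ratio measurements violates this constant-exponent assumption. Isotope masses and the 0.7219 normalizing value are standard; the example ratios are synthetic.

```python
import math

M142, M144, M146 = 141.90772, 143.91009, 145.91312  # Nd isotope masses, amu
R146_TRUE = 0.7219   # accepted normalizing 146Nd/144Nd value

def exp_law_correct(r142_meas, r146_meas):
    """Internal normalization with the exponential mass-fractionation law:
    infer beta from measured 146Nd/144Nd, then remove the same
    fractionation from measured 142Nd/144Nd. Assumes beta is constant
    across the two measurements, which rapid drift violates."""
    beta = math.log(r146_meas / R146_TRUE) / math.log(M146 / M144)
    return r142_meas * (M142 / M144) ** (-beta)

# round trip: apply a known, constant fractionation, then correct it away
beta_true = 0.1
r142 = 1.141827 * (M142 / M144) ** beta_true
r146 = R146_TRUE * (M146 / M144) ** beta_true
corrected = exp_law_correct(r142, r146)   # recovers 1.141827
```

When beta changes between the 146/144 and 142/144 measurement cycles, the beta inferred from one ratio is the wrong one to remove from the other, producing exactly the kind of small, reproducible artefact the study identifies.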
Abstract:
This study aims at assessing the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is a drawback of all CFR techniques: CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause a particularly short de-correlation distance around the proxy location.
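The variance underestimation attributed to CCA above is generic to regression-type calibration, and a one-proxy least-squares analogue (not any of the CFR methods in the study) shows the effect directly: with unit signal and unit noise, the calibrated reconstruction retains only about half of the target variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = rng.standard_normal(n)              # "true" climate variable
proxy = signal + rng.standard_normal(n)      # pseudoproxy = signal + unit noise

# least-squares calibration of the proxy against the target
a = np.cov(proxy, signal)[0, 1] / np.var(proxy)
recon = a * proxy

ratio = np.var(recon) / np.var(signal)       # ~0.5: only r^2 of the variance survives
```

The reconstructed variance scales with the squared proxy-target correlation, so noisier proxies trade variance for correlation, the same trade-off the abstract reports for the full CFR techniques.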