968 results for Neutral point potential balancing


Relevance:

30.00%

Publisher:

Abstract:

A fixed dynamical heating model is used to investigate the pattern of zonal-mean stratospheric temperature change resulting from geoengineering with aerosols composed of sulfate, titania, limestone and soot. Aerosol always heats the tropical lower stratosphere, but at the poles the response can be warming, cooling, or neutral. The sign of the change in the stratospheric pole-to-equator temperature difference depends on aerosol type, size and season. This has implications for modelling geoengineering impacts and the response of the stratospheric circulation.


The dispersion of a point-source release of a passive scalar in a regular array of cubical, urban-like, obstacles is investigated by means of direct numerical simulations. The simulations are conducted under conditions of neutral stability and fully rough turbulent flow, at a roughness Reynolds number of Reτ = 500. The Navier–Stokes and scalar equations are integrated assuming a constant rate release from a point source close to the ground within the array. We focus on short-range dispersion, when most of the material is still within the building canopy. Mean and fluctuating concentrations are computed for three different pressure gradient directions (0°, 30°, 45°). The results agree well with available experimental data measured in a water channel for a flow angle of 0°. Profiles of mean concentration and the three-dimensional structure of the dispersion pattern are compared for the different forcing angles. A number of processes affecting the plume structure are identified and discussed, including: (i) advection or channelling of scalar down ‘streets’, (ii) lateral dispersion by turbulent fluctuations and topological dispersion induced by dividing streamlines around buildings, (iii) skewing of the plume due to flow turning with height, (iv) detrainment by turbulent dispersion or mean recirculation, (v) entrainment and release of scalar in building wakes, giving rise to ‘secondary sources’, (vi) plume meandering due to unsteady turbulent fluctuations. Finally, results on relative concentration fluctuations are presented and compared with the literature for point source dispersion over flat terrain and urban arrays. Keywords: Direct numerical simulation; Dispersion modelling; Urban array


Aim: To develop a list of prescribing indicators specific for the hospital setting that would facilitate the prospective collection of high severity and/or high frequency prescribing errors, which are also amenable to electronic clinical decision support (CDS). Method: A three-stage consensus technique (electronic Delphi) was carried out with 20 expert pharmacists and physicians across England. Participants were asked to score prescribing errors using a 5-point Likert scale for their likelihood of occurrence and the severity of the most likely outcome. These were combined to produce risk scores, from which median scores were calculated for each indicator across the participants in the study. The degree of consensus between the participants was defined as the proportion that gave a risk score in the same category as the median. Indicators were included if a consensus of 80% or more was achieved. Results: A total of 80 prescribing errors were identified by consensus as being high or extreme risk. The most common drug classes named within the indicators were antibiotics (n=13), antidepressants (n=8), nonsteroidal anti-inflammatory drugs (n=6), and opioid analgesics (n=6). The error type most frequently identified as high or extreme risk was clinical contraindication (n=29/80). Conclusion: 80 high-risk prescribing errors in the hospital setting have been identified by an expert panel. These indicators can serve as the basis for a standardised, validated tool for the collection of data in both paper-based and electronic prescribing processes, as well as to assess the impact of electronic decision support implementation or development.
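The scoring-and-consensus step described above can be sketched in Python. This is a minimal sketch under stated assumptions: the abstract does not say how likelihood and severity were combined or how risk categories were banded, so risk = likelihood × severity and the four-band categorisation below are hypothetical, as are the indicator names.

```python
from statistics import median

def consensus_indicators(scores, threshold=0.8):
    """Select indicators where >= threshold of panellists' risk scores
    fall in the same category as the panel median.

    scores: dict mapping indicator name -> list of (likelihood, severity)
            pairs, one per panellist, each on a 1-5 Likert scale.
    """
    def category(risk):
        # Hypothetical banding of the 1-25 risk range into four categories.
        if risk >= 15: return "extreme"
        if risk >= 8:  return "high"
        if risk >= 4:  return "moderate"
        return "low"

    selected = {}
    for name, ratings in scores.items():
        risks = [lik * sev for lik, sev in ratings]          # combined risk score
        med_cat = category(median(risks))                    # panel median category
        agree = sum(1 for r in risks if category(r) == med_cat) / len(risks)
        if agree >= threshold:                               # 80% consensus rule
            selected[name] = med_cat
    return selected
```

An indicator scored (4, 5) by 18 of 20 panellists would reach 90% agreement on "extreme" and be retained; a widely split indicator would fall below the 80% threshold and be excluded.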


Glycogen synthase kinase 3 (GSK3, of which there are two isoforms, GSK3α and GSK3β) was originally characterized in the context of regulation of glycogen metabolism, though it is now known to regulate many other cellular processes. Phosphorylation of GSK3α (Ser21) and GSK3β (Ser9) inhibits their activity. In the heart, emphasis has been placed particularly on GSK3β, rather than GSK3α. Importantly, catalytically-active GSK3 generally restrains gene expression and, in the heart, catalytically-active GSK3 has been implicated in anti-hypertrophic signalling. Inhibition of GSK3 results in changes in the activities of transcription and translation factors in the heart and promotes hypertrophic responses, and it is generally assumed that signal transduction from hypertrophic stimuli to GSK3 passes primarily through protein kinase B/Akt (PKB/Akt). However, recent data suggest that the situation is far more complex. We review evidence pertaining to the role of GSK3 in the myocardium and discuss effects of genetic manipulation of GSK3 activity in vivo. We also discuss the signalling pathways potentially regulating GSK3 activity and propose that, depending on the stimulus, phosphorylation of GSK3 is independent of PKB/Akt. Potential GSK3 substrates studied in relation to myocardial hypertrophy include nuclear factors of activated T cells, β-catenin, GATA4, myocardin, CREB, and eukaryotic initiation factor 2Bε. These and other transcription factor substrates putatively important in the heart are considered. We discuss whether cardiac pathologies could be treated by therapeutic intervention at the GSK3 level but conclude that any intervention would be premature without greater understanding of the precise role of GSK3 in cardiac processes.


In this paper, the concept of available potential energy (APE) density is extended to a multicomponent Boussinesq fluid with a nonlinear equation of state. As shown by previous studies, the APE density is naturally interpreted as the work against buoyancy forces that a parcel needs to perform to move from a notional reference position at which its buoyancy vanishes to its actual position; because buoyancy can be defined relative to an arbitrary reference state, so can APE density. The concept of APE density is therefore best viewed as defining a class of locally defined energy quantities, each tied to a different reference state, rather than as a single energy variable. An important result, for which a new proof is given, is that the volume integrated APE density always exceeds Lorenz’s globally defined APE, except when the reference state coincides with Lorenz’s adiabatically re-arranged reference state of minimum potential energy. A parcel reference position is systematically defined as a level of neutral buoyancy (LNB): depending on the nature of the fluid and on how the reference state is defined, a parcel may have one, none, or multiple LNBs within the fluid. Multiple LNBs are possible only for a multicomponent fluid whose density depends on pressure. When no LNB exists within the fluid, a parcel reference position is assigned at the minimum or maximum geopotential height. The class of APE densities thus defined admits local and global balance equations, which all exhibit a conversion with kinetic energy, a production term by boundary buoyancy fluxes, and a dissipation term by internal diffusive effects. Different reference states alter the partition between APE production and dissipation, but affect neither the net conversion between kinetic energy and APE nor the difference between APE production and dissipation.
We argue that the possibility of constructing APE-like budgets based on reference states other than Lorenz’s reference state is more important than has been previously assumed, and we illustrate the feasibility of doing so in the context of an idealised and realistic oceanic example, using as reference states one with constant density and another one defined as the horizontal mean density field; in the latter case, the resulting APE density is found to be a reasonable approximation of the APE density constructed from Lorenz’s reference state, while being computationally cheaper.
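The parcel definition in the abstract can be written out compactly. The following is a sketch in assumed notation (not the paper's own): b is the buoyancy of a parcel with composition S and temperature variable θ, defined relative to the chosen reference density profile ρ̄(z), and z_r is the parcel's level of neutral buoyancy, where b vanishes by construction.

```latex
% APE density as the work done against buoyancy in moving the parcel
% from its level of neutral buoyancy z_r to its actual height z:
E_a = -\int_{z_r}^{z} b\!\left(S, \theta, z'\right)\,\mathrm{d}z',
\qquad
b\!\left(S, \theta, z\right) = -g\,\frac{\rho\!\left(S,\theta,\bar{p}(z)\right) - \bar{\rho}(z)}{\rho_0},
\qquad
b\!\left(S, \theta, z_r\right) = 0 .
```

For a parcel displaced from a stable LNB the integrand opposes the displacement, so E_a is non-negative there; changing the reference state changes ρ̄, and hence both z_r and the value of E_a, which is the sense in which the construction defines a class of energy quantities rather than a single one.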


Currently UK fruit and vegetable intakes are below recommendations. Bread is a staple food consumed by ~95% of adults in western countries. In addition, bread provides an ideal matrix by which functionality can be delivered to the consumer in an accepted food. Therefore, enriching bread with vegetables may be an effective strategy to increase vegetable consumption. This study evaluated consumer acceptance, purchase intent and product-replacement intention for bread enriched with red beetroot, carrot with coriander, red pepper with tomato or white beetroot (80g vegetable per serving of 200g) compared to white control bread (0g vegetable). Consumers (n=120) rated their liking of the breads overall, as well as their liking of appearance, flavour and texture, using nine-point hedonic scales. Product replacement and purchase intent of the breads were rated using five-point scales. The effect of providing consumers with health information about the breads was also evaluated. There were significant differences in overall liking (P<0.0001), as well as liking of appearance (P<0.0001), flavour (P=0.0002) and texture (P=0.04), between the breads. However, these differences were driven by the red beetroot bread, which was significantly (P<0.05) less liked than the control bread; there were no significant differences in overall liking between any of the other vegetable-enriched breads and the control bread (no vegetable inclusion). The provision of health information about the breads did not increase consumer liking of the vegetable-enriched breads. In conclusion, this study demonstrated that vegetable-enriched bread appears to be an acceptable strategy to increase vegetable intake; however, liking depended on vegetable type.


This study investigates the potential contribution of observed changes in lower stratospheric water vapour to stratospheric temperature variations over the past three decades using a comprehensive global climate model (GCM). Three case studies are considered. In the first, the net increase in stratospheric water vapour (SWV) from 1980–2010 (derived from the Boulder frost-point hygrometer record using the gross assumption that this is globally representative) is estimated to have cooled the lower stratosphere by up to ∼0.2 K decade⁻¹ in the global and annual mean; this is ∼40% of the observed cooling trend over this period. In the Arctic winter stratosphere there is a dynamical response to the increase in SWV, with enhanced polar cooling of 0.6 K decade⁻¹ at 50 hPa and warming of 0.5 K decade⁻¹ at 1 hPa. In the second case study, the observed decrease in tropical lower stratospheric water vapour after the year 2000 (imposed in the GCM as a simplified representation of the observed changes derived from satellite data) is estimated to have caused a relative increase in tropical lower stratospheric temperatures by ∼0.3 K at 50 hPa. In the third case study, the wintertime dehydration in the Antarctic stratospheric polar vortex (again using a simplified representation of the changes seen in a satellite dataset) is estimated to cause a relative warming of the Southern Hemisphere polar stratosphere by up to 1 K at 100 hPa from July–October. This is accompanied by a weakening of the westerly winds on the poleward flank of the stratospheric jet by up to 1.5 m s⁻¹ in the GCM. The results show that, if the measurements are representative of global variations, SWV should be considered as important a driver of transient and long-term variations in lower stratospheric temperature over the past 30 years as increases in long-lived greenhouse gases and stratospheric ozone depletion.


Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in-depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call “pressure points” – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness.
Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.


The present study compares the impact of thermal and high pressure, high temperature (HPHT) processing on the volatile profile (via non-targeted headspace fingerprinting) and on structural and nutritional quality parameters (via targeted approaches) of orange and yellow carrot purees. The effect of oil enrichment was also considered. Since oil enrichment affects compound volatility, the effect of oil was not studied when comparing the volatile fraction. For the targeted part, as yellow carrot purees were shown to contain a very low amount of carotenoids, focus was given to orange carrot purees. The results of the non-targeted approach demonstrated that HPHT processing exerts a distinct effect on the volatile fraction compared to thermal processing. In addition, differently coloured carrot varieties are characterized by distinct headspace fingerprints. From a structural point of view, limited or no difference could be observed between orange carrot purees treated with HPHT or thermal processes, both for samples without and with oil. From a nutritional point of view, significant isomerisation of all-trans-β-carotene occurred due to both processes, but only in samples with oil. Overall, for this type of product and for the selected conditions, HPHT processing seems to have a different impact on the volatile profile but a rather similar impact on the structural and nutritional attributes compared to thermal processing.


Ice supersaturation (ISS) in the upper troposphere and lower stratosphere is important for the formation of cirrus clouds and long-lived contrails. Cold ISS (CISS) regions (taken here to be ice-supersaturated regions with temperature below 233 K) are most relevant for contrail formation. We analyse projected changes to the 250 hPa distribution and frequency of CISS regions over the 21st century using data from the Representative Concentration Pathway 8.5 simulations for a selection of Coupled Model Intercomparison Project Phase 5 models. The models show a global-mean, annual-mean decrease in CISS frequency by about one-third, from 11% to 7%, by the end of the 21st century, relative to the present-day period 1979–2005. Changes are analysed in further detail for three subregions where air traffic is already high and increasing (Northern Hemisphere mid-latitudes) or expected to increase (tropics and Northern Hemisphere polar regions). The largest change is seen in the tropics, where a reduction of around 9 percentage points in CISS frequency by the end of the century is driven by the strong warming of the upper troposphere. In the Northern Hemisphere mid-latitudes the multi-model-mean change is an increase in CISS frequency of 1 percentage point; however the sign of the change is dependent not only on the model but also on latitude and season. In the Northern Hemisphere polar regions there is an increase in CISS frequency of 5 percentage points in the annual mean. These results suggest that, over the 21st century, climate change may have large impacts on the potential for contrail formation; actual changes to contrail cover will also depend on changes to the volume of air traffic, aircraft technology and flight routing.


The statement that pairs of individuals from different populations are often more genetically similar than pairs from the same population is a widespread idea inside and outside the scientific community. Witherspoon et al. ["Genetic similarities within and between human populations," Genetics 176:351-359 (2007)] proposed an index called the dissimilarity fraction (ω) to assess in a quantitative way the validity of this statement for genetic systems. Witherspoon demonstrated that, as the number of loci increases, ω decreases to a point where, when enough sampling is available, the statement is false. In this study, we applied the dissimilarity fraction to Howells's craniometric database to establish whether or not similar results are obtained for cranial morphological traits. Although in genetic studies thousands of loci are available, Howells's database provides no more than 55 metric traits, making the contribution of each variable important. To cope with this limitation, we developed a routine that takes this effect into consideration when calculating ω. Contrary to what was observed for the genetic data, our results show that cranial morphology asymptotically approaches a mean ω of 0.3 and therefore supports the initial statement; that is, individuals from the same geographic region do not form clear and discrete clusters, further questioning the idea of the existence of discrete biological clusters in the human species. Finally, by assuming that cranial morphology is under an additive polygenic model, we can say that the population history signal of human craniometric traits presents the same resolution as a neutral genetic system dependent on no more than 20 loci.
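The idea behind the dissimilarity fraction can be illustrated with a short sketch. This is a simplified reading under stated assumptions (Euclidean distance over trait vectors, exhaustive comparison of within- and between-population pairs), not Witherspoon et al.'s exact estimator:

```python
import itertools
import math

def dissimilarity_fraction(pops):
    """Estimate omega: the fraction of comparisons in which a
    between-population pair is *more similar* (smaller distance over
    the available traits) than a within-population pair.

    pops: list of populations, each a list of trait-value tuples.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # All pairwise distances within each population...
    within = [dist(a, b)
              for pop in pops
              for a, b in itertools.combinations(pop, 2)]
    # ...and between every pair of populations.
    between = [dist(a, b)
               for p1, p2 in itertools.combinations(pops, 2)
               for a in p1 for b in p2]
    hits = sum(1 for w in within for b in between if b < w)
    return hits / (len(within) * len(between))
```

With well-separated populations ω is 0; with heavily overlapping populations, or with too few traits (the 55-trait limitation discussed above), it rises toward 0.5, which is the sense in which the "statement" can remain true for low-dimensional data.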


Acanthamoeba spp., known to cause keratitis and granulomatous encephalitis in humans, are frequently isolated from a variety of water sources. Here we report for the first time the characterization of an Acanthamoeba sp. (ACC01) isolated from tap water in Brazil. This organism is currently being maintained in an axenic growth medium. Phylogenetic analysis based on SSU rRNA gene sequences positioned the new isolate in genotype T4, closest to the keratitis-causing isolate A. polyphaga ATCC 30461 (∼99% similarity). Acanthamoeba ACC01 and A. polyphaga 30461 both grew at 37 °C and were osmotically resistant, multiplying in hyperosmolar medium. Both isolates secreted comparable amounts of proteolytic enzymes, including serine peptidases that were optimally active at a near neutral/alkaline pH and resolved identically in gelatin gels. Incubation of gels at pH 4.0 with 2 mM DTT also indicated the secretion of similar cysteine peptidases. Altogether, the results point to the pathogenic potential of Acanthamoeba ACC01.


We study the one-loop quantum corrections for higher-derivative superfield theories, generalizing the approach for calculating the superfield effective potential. In particular, we calculate the effective potential for two versions of higher-derivative chiral superfield models. We point out that the equivalence of the higher-derivative theory for the chiral superfield and the one without higher derivatives but with an extended number of chiral superfields occurs only when the mass term is contained in the general Lagrangian. The presence of divergences can be taken as an indication of that equivalence.


A continuous version of the hierarchical spherical model at dimension d = 4 is investigated. Two limit distributions of the block spin variable X_γ, normalized with exponents γ = d + 2 and γ = d at and above the critical temperature, are established. These results are proven by solving certain evolution equations corresponding to the renormalization group (RG) transformation of the O(N) hierarchical spin model of block size L^d in the limit L ↓ 1 and N → ∞. Starting far away from the stationary Gaussian fixed point, the trajectories of these dynamical systems pass through two different regimes with distinguishable crossover behavior. An interpretation of these trajectories is given by the geometric theory of functions, which describes precisely the motion of the Lee-Yang zeroes. The large-N limit of the RG transformation with L^d fixed equal to 2, at criticality, has recently been investigated in both the weak and strong (coupling) regimes by Watanabe (J. Stat. Phys. 115:1669-1713, 2004). Although our analysis deals only with the N = ∞ case, it complements various aspects of that work.


The thermoluminescence (TL) characteristics of quartz are highly dependent on its thermal history. Based on the enhancement of quartz luminescence that occurs after heating, some authors have proposed using quartz TL to recover thermal events that affected quartz crystals. However, little is known about the influence of the temperature of quartz crystallization on its TL characteristics. In the present study, we evaluate the TL sensitivity and dose response curves of hydrothermal and metamorphic quartz with crystallization temperatures from 209 ± 15 to 633 ± 27 °C, determined through fluid inclusion and mineral chemistry analysis. The studied crystals underwent a cooling thermal history, which allows their natural TL to be acquired without the influence of heating after crystallization. The TL curves of the studied samples present two main components formed by different overlapping peaks at around 110 °C and 200-400 °C. The TL sensitivity in the 200-400 °C region increases linearly with the temperature of quartz crystallization. No relationship was observed between the temperature of quartz crystallization and saturation doses (<100 Gy). The elevated TL sensitivity of the high-temperature quartz is attributed to the control exerted by the temperature of crystallization on the substitution of Si⁴⁺ by ions such as Al³⁺ and Ti⁴⁺, which produce defects responsible for luminescence phenomena. The linear relationship observed between TL in the 200-400 °C region and crystallization temperature has potential use as a quartz geothermometer. The relative abundance of quartz in the Earth's crust and the ease of measuring TL are advantages relative to geothermometry methods based on the chemistry of other minerals.
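The proposed geothermometer amounts to fitting the linear TL-sensitivity versus crystallization-temperature relationship and then inverting it for an unknown sample. The sketch below uses hypothetical, perfectly linear calibration values (the paper's measured sensitivities are not reproduced here) purely to show the mechanics:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def tl_geothermometer(tl_sensitivity, a, b):
    """Invert the calibration: crystallization temperature (deg C)
    implied by a measured 200-400 deg C TL sensitivity."""
    return (tl_sensitivity - a) / b

# Hypothetical calibration points spanning the study's temperature range
# (crystallization T in deg C, relative TL sensitivity).
temps = [209.0, 350.0, 480.0, 633.0]
sens = [2.09, 3.50, 4.80, 6.33]
a, b = fit_line(temps, sens)
estimate = tl_geothermometer(4.0, a, b)  # ~400 deg C for this calibration
```

In practice the calibration scatter (and the ±15 to ±27 °C uncertainty on the fluid-inclusion temperatures) would propagate into the inverted estimate, so confidence bounds on a and b would be needed before the method could serve as a quantitative geothermometer.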