899 results for Generation of 1898
Abstract:
Quantitative control of aroma generation during the Maillard reaction presents great scientific and industrial interest. Although there have been many studies conducted in simplified model systems, the results are difficult to apply to complex food systems, where the presence of other components can have a significant impact. In this work, an aqueous extract of defatted beef liver was chosen as a simplified food matrix for studying the kinetics of the Maillard reaction. Aliquots of the extract were heated under different time and temperature conditions and analyzed for sugars, amino acids, and methylbutanals, which are important Maillard-derived aroma compounds formed in cooked meat. Multiresponse kinetic modeling, based on a simplified mechanistic pathway, gave a good fit with the experimental data, but only when additional steps were introduced to take into account the interactions of glucose and glucose-derived intermediates with protein and other amino compounds. This emphasizes the significant role of the food matrix in controlling the Maillard reaction.
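Multiresponse kinetic modeling of this kind amounts to fitting several measured species simultaneously to an ODE system for a simplified pathway. The sketch below is a minimal illustration only, not the authors' actual reaction scheme: the three-step pathway (sugar + amino acid → intermediate → methylbutanal), the rate constants, and the synthetic data are assumptions chosen purely to show how such a fit can be set up.

```python
# Minimal multiresponse kinetic fit for a hypothetical Maillard-type pathway:
#   Glucose + Leucine -> Intermediate -> 3-methylbutanal
# All four species are fitted simultaneously against (synthetic) concentration data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def pathway(t, y, k1, k2):
    glc, leu, inter, mba = y            # concentrations (arbitrary units)
    r1 = k1 * glc * leu                 # second-order sugar + amino acid step
    r2 = k2 * inter                     # first-order degradation to the aroma compound
    return [-r1, -r1, r1 - r2, r2]

def simulate(k, t_eval, y0):
    sol = solve_ivp(pathway, (0, t_eval[-1]), y0, t_eval=t_eval, args=tuple(k))
    return sol.y

# Synthetic "measurements" generated from known constants, for illustration only
t_obs = np.linspace(0, 60, 13)          # heating time, min
y0 = [10.0, 8.0, 0.0, 0.0]
true_k = [0.004, 0.08]
data = simulate(true_k, t_obs, y0) + np.random.default_rng(0).normal(0, 0.05, (4, t_obs.size))

def residuals(k):
    # Multiresponse: residuals of all measured species are stacked and minimised together
    return (simulate(k, t_obs, y0) - data).ravel()

fit = least_squares(residuals, x0=[0.01, 0.05], bounds=(0, np.inf))
print("estimated rate constants:", fit.x)
```

Fitting all responses at once, rather than each compound separately, is what forces the rate constants to be consistent with the whole pathway; extra matrix-interaction steps would appear as additional terms in the ODE right-hand sides.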
Abstract:
Waves with periods shorter than the inertial period exist in the atmosphere (as inertia-gravity waves) and in the oceans (as Poincaré and internal gravity waves). Such waves owe their origin to various mechanisms, but of particular interest are those arising either from local secondary instabilities or spontaneous emission due to loss of balance. These phenomena have been studied in the laboratory, both in the mechanically-forced and the thermally-forced rotating annulus. Their generation mechanisms, especially in the latter system, have not yet been fully understood, however. Here we examine short period waves in a numerical model of the rotating thermal annulus, and show how the results are consistent with those from earlier laboratory experiments. We then show how these waves are consistent with being inertia-gravity waves generated by a localised instability within the thermal boundary layer, the location of which is determined by regions of strong shear and downwelling at certain points within a large-scale baroclinic wave flow. The resulting instability launches small-scale inertia-gravity waves into the geostrophic interior of the flow. Their behaviour is captured in fully nonlinear numerical simulations in a finite-difference, 3D Boussinesq Navier-Stokes model. Such a mechanism has many similarities with those responsible for launching small- and meso-scale inertia-gravity waves in the atmosphere from fronts and local convection.
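For orientation, the defining property of these waves, periods shorter than the local inertial period, follows from the standard plane-wave dispersion relation for inertia-gravity waves in a rotating Boussinesq fluid; the relation below is textbook material rather than a result of the paper itself.

```latex
% Plane inertia-gravity waves on an f-plane with buoyancy frequency N,
% horizontal wavenumber k_h and vertical wavenumber m:
\omega^{2} = \frac{N^{2} k_h^{2} + f^{2} m^{2}}{k_h^{2} + m^{2}},
\qquad f \le \omega \le N \quad (\text{assuming } N > f),
% so the intrinsic period 2\pi/\omega is always shorter than the
% inertial period 2\pi/f, consistent with the waves described above.
```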
Abstract:
A new mild method has been devised for generating o-(naphtho)quinone methides via fluoride-induced desilylation of silyl derivatives of o-hydroxybenzyl (or 1-naphthylmethyl) nitrate. The reactive o-(naphtho)quinone methide intermediates were trapped by C, O, N and S nucleophiles and underwent “inverse electron-demand” hetero Diels-Alder reactions with dienophiles to give stable adducts. The method has potentially useful applications in natural product synthesis and drug research.
Abstract:
Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier. It enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. Tactically, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives. At the tactical level its function is to provide both training and instruction; at the operational level, instruction and understanding are critical functions; and at the strategic level, it provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.
Abstract:
This article combines institutional and resource-based arguments to show that the institutional distance between the home and the host country, and the headquarters’ financial performance, have a significant impact on the environmental standardization decision in multinational companies. Using a sample of 135 multinational companies in three different industries with headquarters and subsidiaries based in the USA, Canada, Mexico, France, and Spain, we find that a high environmental institutional distance between headquarters’ and subsidiaries’ countries deters the standardization of environmental practices. On the other hand, high-profit headquarters are willing to standardize their environmental practices, rather than taking advantage of countries with lax environmental protection to undertake more pollution-intensive activities. Finally, we show that headquarters’ financial performance also exerts a moderating effect on the relationship between the environmental institutional distance between countries and environmental standardization within the multinational company.
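A moderating effect of the kind described is usually tested with an interaction term. The sketch below is illustrative only: the variable names, the logistic specification and the simulated data are assumptions, not the article's actual dataset or model.

```python
# Illustrative moderation (interaction) model: "standardize" is a hypothetical
# binary choice, "distance" stands for environmental institutional distance and
# "performance" for headquarters' financial performance. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 135
df = pd.DataFrame({
    "distance": rng.normal(size=n),
    "performance": rng.normal(size=n),
})
# Simulated outcome: distance deters standardization, performance encourages it,
# and performance weakens (moderates) the negative effect of distance.
logit_p = -0.8 * df.distance + 0.6 * df.performance + 0.4 * df.distance * df.performance
df["standardize"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The distance:performance coefficient is the moderation effect of interest.
model = smf.logit("standardize ~ distance * performance", data=df).fit()
print(model.summary())
```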
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
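A generic sketch of the kind of continuous, density-based description referred to here (the paper's exact formulation may differ): apical meristems are treated as a density that is advected by elongation and created or lost by branching and mortality, which is what produces travelling-wave patterns of meristems.

```latex
% Generic density-based root growth model (sketch, not necessarily the paper's equations):
% rho_a = density of root apices (meristems), rho_l = root length density,
% e = elongation velocity field, b = lateral-root initiation rate, d = mortality rate.
\frac{\partial \rho_a}{\partial t} + \nabla \cdot \left( \mathbf{e}\, \rho_a \right) = b - d,
\qquad
\frac{\partial \rho_l}{\partial t} = \lVert \mathbf{e} \rVert\, \rho_a .
% Advection of the apex density by the elongation velocity yields
% travelling fronts of meristems of the kind described in the abstract.
```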
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what resolution, both horizontal and vertical, is necessary in atmospheric and ocean models for more confident predictions at the regional and local level. Current limitations in computing power have severely constrained such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions, which are based on our best knowledge of science and the most advanced technology.
Abstract:
Design summer years representing near-extreme hot summers have been used in the United Kingdom for the evaluation of thermal comfort and overheating risk. The years have been selected from measured weather data broadly representative of an assumed stationary climate. Recent developments have made available ‘morphed’ equivalents of these years by shifting and stretching the measured variables using change factors produced by the UKCIP02 climate projections. The release of the latest, probabilistic, climate projections of UKCP09, together with the availability of a weather generator that can produce plausible daily or hourly sequences of weather variables, has opened up the opportunity for generating new design summer years which can be used in risk-based decision-making. There are many possible methods for the production of design summer years from UKCP09 output: in this article, the original concept of the design summer year is largely retained, but a number of alternative methodologies for generating the years are explored. An alternative, more robust measure of warmth (weighted cooling degree hours) is also employed. It is demonstrated that the UKCP09 weather generator is capable of producing years for the baseline period which are comparable with those in current use. Four methodologies for the generation of future years are described, and their output related to the future (deterministic) years that are currently available. It is concluded that, in general, years produced from the UKCP09 projections are warmer than those generated previously. Practical applications: The methodologies described in this article will enable designers who have access to the output of the UKCP09 weather generator (WG) to generate Design Summer Year hourly files tailored to their needs. The files produced will differ according to the methodology selected, in addition to location, emissions scenario and timeslice.
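A "weighted cooling degree hours" measure of warmth can be sketched as follows; the base temperature and the quadratic weighting are assumptions for illustration, and the article's exact definition (for example, a base tied to an adaptive comfort temperature) may differ.

```python
# Illustrative sketch of a weighted cooling degree hours (WCDH) metric used to
# rank candidate years by warmth. Base temperature and weighting are assumed.
from typing import Iterable

def weighted_cooling_degree_hours(hourly_temps_c: Iterable[float],
                                  base_c: float = 22.0) -> float:
    """Sum a weight for every hour the dry-bulb temperature exceeds the base.

    Squaring the exceedance makes a few very hot hours count for more than
    many mildly warm ones, giving a more robust measure of near-extreme warmth.
    """
    return sum((t - base_c) ** 2 for t in hourly_temps_c if t > base_c)

# Example: score candidate years (here tiny made-up series) and rank them;
# a design summer year would then be picked near the warm end of the ranking.
years = {"year_a": [21.0, 23.5, 26.0], "year_b": [22.5, 24.0, 25.0]}
scores = {name: weighted_cooling_degree_hours(temps) for name, temps in years.items()}
print(sorted(scores.items(), key=lambda kv: kv[1]))
```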
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, which is a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
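The simplest single-response models referred to here are typically first-order in the precursor with Arrhenius temperature dependence; the equations below are the standard textbook forms, included for orientation rather than taken from the paper.

```latex
% Single-response, first-order kinetic model with Arrhenius temperature
% dependence (standard textbook form):
\frac{dC}{dt} = -k\,C, \qquad k = A \exp\!\left(-\frac{E_a}{R\,T}\right),
% where C is the concentration of the precursor (or formed flavour compound),
% A the pre-exponential factor, E_a the activation energy, R the gas constant
% and T the absolute temperature. Multi-response models replace this single
% ODE with a coupled system fitted simultaneously to several measured species.
```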
Abstract:
Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective healthcare, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CPs) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics by capturing knowledge from the syntactic, semantic and pragmatic levels through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking into consideration social aspects in designing information systems and automating CPs. The complexity of healthcare processes can be best tackled by analyzing stakeholders, whom we treat as social agents, together with their goals and patterns of action within the agent network. Originality/value: The current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms will enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
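Norms produced by NAM are commonly written in a "whenever / if / then (agent) is (deontic operator) to (action)" pattern. The sketch below shows one way such a norm could be captured as data and rendered; the clinical rule itself and the field names are invented purely for illustration and are not taken from the paper.

```python
# Hedged sketch: representing a NAM-style behavioural norm as a data structure.
# The concrete clinical rule below is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Norm:
    whenever: str          # context in which the norm applies
    condition: str         # state that triggers it
    agent: str             # responsible social agent
    deontic: str           # "obliged" | "permitted" | "prohibited"
    action: str            # action governed by the norm

sepsis_norm = Norm(
    whenever="a patient is admitted to the emergency pathway",
    condition="two or more sepsis screening criteria are met",
    agent="attending nurse",
    deontic="obliged",
    action="escalate to the on-call physician within 30 minutes",
)

def render(norm: Norm) -> str:
    # Render the norm in the canonical whenever/if/then sentence pattern.
    return (f"whenever {norm.whenever}, if {norm.condition}, "
            f"then the {norm.agent} is {norm.deontic} to {norm.action}.")

print(render(sepsis_norm))
```

Norms captured this way can then be attached to the corresponding BPMN activities to guide or constrain the generated pathway.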
Lost in flatlands: will the next generation of page layout programs give us back our sense of space?
Abstract:
Previously we demonstrated that heparin administration during carotid endarterectomy (CEA) caused a marked, but transient increase in platelet aggregation to arachidonic acid (AA) and adenosine diphosphate (ADP), despite effective platelet cyclo-oxygenase-1 (COX-1) inhibition with aspirin. Here we investigated the metabolism of AA via platelet 12-lipoxygenase (12-LOX) as a possible mediator of the observed transient aspirin resistance, and compared the effects of unfractionated (UFH) and low-molecular-weight (LMWH) heparin. A total of 43 aspirinated patients undergoing CEA were randomised in the trial to 5,000 IU UFH (n=22) or 2,500 IU LMWH (dalteparin, n=21). Platelet aggregation to AA (4 × 10⁻³) and ADP (3 × 10⁻⁶) was determined, and the products of the COX-1 and 12-LOX pathways, thromboxane B₂ (TXB₂) and 12-hydroxyeicosatetraenoic acid (12-HETE), were measured in plasma and in material released from aggregating platelets. Aggregation to AA increased significantly (~10-fold) following heparinisation (p<0.0001), irrespective of heparin type (p=0.33). Significant, but smaller (~2-fold) increases in aggregation to ADP were also seen, which were significantly lower in the platelets of patients randomised to LMWH (p<0.0001). Plasma levels of TXB₂ did not rise following heparinisation (p=0.93), but 12-HETE increased significantly in the patients' plasma, and in material released from platelets stimulated in vitro with ADP, with both heparin types (p<0.0001). The magnitude of aggregation to ADP correlated with 12-HETE generation (p=0.03). Heparin administration during CEA generates AA that is metabolised to 12-HETE via the 12-LOX pathway, possibly explaining the phenomenon of transient heparin-induced platelet activation. LMWH has less effect on aggregation and 12-HETE generation than UFH when the platelets are stimulated with ADP.