36 results for Generation of tsunami


Relevance:

100.00%

Publisher:

Abstract:

Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier: it enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force-multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. First, at the tactical level, doctrine does two things: it helps to create a tempo of operations, and it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives; its function here is to provide training and instruction. Second, at the operational level, instruction and understanding are the critical functions. Third, at the strategic level, doctrine provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.

Relevance:

100.00%

Publisher:

Abstract:

This article combines institutional and resource-based arguments to show that the institutional distance between the home and host countries, and the headquarters’ financial performance, have a relevant impact on the environmental standardization decision in multinational companies. Using a sample of 135 multinational companies in three different industries with headquarters and subsidiaries based in the USA, Canada, Mexico, France, and Spain, we find that a high environmental institutional distance between headquarters’ and subsidiaries’ countries deters the standardization of environmental practices. On the other hand, high-profit headquarters are willing to standardize their environmental practices rather than take advantage of countries with lax environmental protection to undertake more pollution-intensive activities. Finally, we show that headquarters’ financial performance also exerts a moderating effect on the relationship between environmental institutional distance and environmental standardization within the multinational company.

Relevance:

100.00%

Publisher:

Abstract:

Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
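
A minimal sketch can make the continuous framework concrete: treat apical meristem density as a quantity advected downward at the root elongation rate, produced by lateral initiation and removed by mortality. Everything below (the one-dimensional form, the parameter names and the values) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Minimal 1-D sketch of a root meristem density model (illustrative only):
#   dn/dt = -e * dn/dz + (b - m) * n
# where n(z, t) is meristem density, e the elongation (advection) rate,
# b the lateral-initiation rate and m the mortality rate.
L, nz = 1.0, 200            # soil depth (m), number of grid cells
dz = L / nz
e, b, m = 0.02, 0.10, 0.05  # m/day, 1/day, 1/day (made-up values)
dt = 0.5 * dz / e           # CFL-stable time step (days)

n = np.zeros(nz)
n[0] = 1.0                  # seed meristems near the soil surface

for _ in range(int(60 / dt)):                  # simulate 60 days
    flux = e * n
    n[1:] -= dt / dz * (flux[1:] - flux[:-1])  # upwind advection (downward growth)
    n *= 1.0 + dt * (b - m)                    # net meristem production

# n now shows a front of meristems moving downward, the kind of
# travelling-wave behaviour the abstract describes.
```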

Relevance:

100.00%

Publisher:

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change occur mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current computing power has severely constrained such an investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km-resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.
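
The stepping from roughly 20 petaflops to an exaflop reflects how steeply cost grows with model resolution. A common back-of-envelope rule has cost scaling with roughly the cube of the inverse grid spacing (two horizontal dimensions plus a CFL-shortened time step), before any added vertical levels. The sketch below works through that arithmetic; the exponent and the 100 km reference grid are assumptions for illustration, not figures from the paper.

```python
def relative_cost(dx_km, dx_ref_km=100.0, extra_vertical=False):
    """Rough relative cost of a global model run at grid spacing dx_km.

    Cost ~ (dx_ref / dx)**3: one factor per horizontal dimension plus
    one for the CFL-limited time step; an optional extra factor stands
    in for added vertical levels. Illustrative scaling, not a benchmark
    of any real model.
    """
    factor = (dx_ref_km / dx_km) ** 3
    if extra_vertical:
        factor *= dx_ref_km / dx_km
    return factor

for dx in (100, 25, 10, 1):
    print(f"{dx:>4} km grid: ~{relative_cost(dx):,.0f}x the 100 km cost")
# A 1 km run costs ~a million times a 100 km run under this rule,
# which is the arithmetic behind the call for dedicated exascale facilities.
```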

Relevance:

100.00%

Publisher:

Abstract:

Design summer years representing near-extreme hot summers have been used in the United Kingdom for the evaluation of thermal comfort and overheating risk. The years have been selected from measured weather data essentially representative of an assumed stationary climate. Recent developments have made available ‘morphed’ equivalents of these years, produced by shifting and stretching the measured variables using change factors from the UKCIP02 climate projections. The release of the latest, probabilistic, climate projections of UKCP09, together with the availability of a weather generator that can produce plausible daily or hourly sequences of weather variables, has opened up the opportunity to generate new design summer years that can be used in risk-based decision-making. There are many possible methods for producing design summer years from UKCP09 output: in this article, the original concept of the design summer year is largely retained, but a number of alternative methodologies for generating the years are explored. An alternative, more robust measure of warmth (weighted cooling degree hours) is also employed. It is demonstrated that the UKCP09 weather generator is capable of producing years for the baseline period that are comparable with those in current use. Four methodologies for the generation of future years are described, and their output is related to the future (deterministic) years that are currently available. It is concluded that, in general, years produced from the UKCP09 projections are warmer than those generated previously. Practical applications: The methodologies described in this article will enable designers who have access to the output of the UKCP09 weather generator (WG) to generate Design Summer Year hourly files tailored to their needs. The files produced will differ according to the methodology selected, in addition to location, emissions scenario and time slice.
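
To illustrate the warmth measure named above: one published formulation of weighted cooling degree hours squares each hour's exceedance over a base temperature, so near-extreme hours count disproportionately. The abstract does not state the exact weighting or base, so both the quadratic form and the 22 °C base in this sketch are assumptions.

```python
import numpy as np

def weighted_cooling_degree_hours(hourly_temp_c, base_c=22.0):
    """Warmth measure for ranking candidate summer years.

    Each hour's exceedance above base_c is squared before summing, so
    hot hours dominate the total. The quadratic weighting and the 22 degC
    base are illustrative assumptions, not the article's exact definition.
    """
    t = np.asarray(hourly_temp_c, dtype=float)
    excess = np.clip(t - base_c, 0.0, None)
    return float(np.sum(excess ** 2))

# A design summer year could then be chosen as a near-extreme year,
# e.g. the third warmest of 20 candidate years by this measure:
# dsy = sorted(candidate_years, key=weighted_cooling_degree_hours)[-3]
```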

Relevance:

100.00%

Publisher:

Abstract:

Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
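
As an illustration of what such a kinetic model looks like, the sketch below integrates a deliberately simplified two-step scheme: reducing sugar and amino compound condense to an intermediate, which breaks down to a flavor compound. The scheme and rate constants are invented for illustration; the multi-response models the paper describes fit many measured species against the underlying chemistry simultaneously.

```python
from scipy.integrate import solve_ivp

# Deliberately simplified two-step Maillard scheme (illustrative only):
#   S + A -> I   (condensation of reducing sugar and amino compound)
#   I     -> F   (breakdown of intermediate to a flavor compound)
k1, k2 = 0.02, 0.05  # rate constants, 1/min (made-up values)

def rates(t, y):
    S, A, I, F = y
    r1 = k1 * S * A   # second-order condensation step
    r2 = k2 * I       # first-order breakdown step
    return [-r1, -r1, r1 - r2, r2]

sol = solve_ivp(rates, (0.0, 120.0), [1.0, 1.0, 0.0, 0.0])
print(f"flavor compound after 120 min heating: {sol.y[3, -1]:.3f}")
# A multi-response fit would estimate k1 and k2 by matching S, I and F
# (and other measured species) to data simultaneously, not one curve at a time.
```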

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective care, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CPs) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and, as a result, can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, capturing knowledge from the syntactic, semantic and pragmatic through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can best be tackled by analyzing stakeholders, which we treat as social agents, their goals and their patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms enriches the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
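
To make the norm-analysis step concrete: in organizational semiotics, behavioural norms are commonly written in the form "whenever <condition> if <state> then <agent> is <obliged/permitted/prohibited> to <action>". The sketch below encodes one hypothetical clinical-pathway norm in that shape; the example norm and field names are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A behavioural norm in the usual NAM shape:
    whenever <condition> if <state> then <agent> is <deontic> to <action>."""
    whenever: str   # triggering condition
    if_state: str   # additional state constraint
    agent: str      # responsible social agent
    deontic: str    # "obliged" | "permitted" | "prohibited"
    action: str     # the act the norm governs

# Hypothetical norm for a stroke pathway (illustrative only):
swallow_screen = Norm(
    whenever="patient admitted with suspected stroke",
    if_state="patient is conscious",
    agent="admitting nurse",
    deontic="obliged",
    action="perform a swallow screen before any oral intake",
)
# A pathway generator could map such norms onto BPMN tasks and gateways,
# preserving who is responsible for what and under which conditions.
```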

Relevance:

100.00%

Publisher:

Abstract:

Previously we demonstrated that heparin administration during carotid endarterectomy (CEA) caused a marked but transient increase in platelet aggregation to arachidonic acid (AA) and adenosine diphosphate (ADP), despite effective platelet cyclo-oxygenase-1 (COX-1) inhibition with aspirin. Here we investigated the metabolism of AA via platelet 12-lipoxygenase (12-LOX) as a possible mediator of the observed transient aspirin resistance, and compared the effects of unfractionated heparin (UFH) and low-molecular-weight heparin (LMWH). A total of 43 aspirinated patients undergoing CEA were randomised in the trial to 5,000 IU UFH (n=22) or 2,500 IU LMWH (dalteparin, n=21). Platelet aggregation to AA (4 × 10⁻³) and ADP (3 × 10⁻⁶) was determined, and the products of the COX-1 and 12-LOX pathways, thromboxane B₂ (TXB₂) and 12-hydroxyeicosatetraenoic acid (12-HETE), were measured in plasma and in material released from aggregating platelets. Aggregation to AA increased significantly (~10-fold) following heparinisation (p<0.0001), irrespective of heparin type (p=0.33). Significant but smaller (~2-fold) increases in aggregation to ADP were also seen, which were significantly lower in the platelets of patients randomised to LMWH (p<0.0001). Plasma levels of TXB₂ did not rise following heparinisation (p=0.93), but 12-HETE increased significantly in the patients' plasma, and in material released from platelets stimulated in vitro with ADP, with both heparin types (p<0.0001). The magnitude of aggregation to ADP correlated with 12-HETE generation (p=0.03). Heparin administration during CEA generates AA that is metabolised to 12-HETE via the 12-LOX pathway, possibly explaining the phenomenon of transient heparin-induced platelet activation. LMWH has less effect than UFH on aggregation and 12-HETE generation when platelets are stimulated with ADP.

Relevance:

100.00%

Publisher:

Abstract:

Noccaea caerulescens (formerly Thlaspi caerulescens) is a widely studied metal hyperaccumulator. However, molecular genetic studies are challenging in this species because of its vernal-obligate biennial life cycle of 7-9 months. Here, we describe the development of genetically stable, faster-cycling lines of N. caerulescens that are non-vernal-obligate. A total of 5500 M₀ seeds from Saint Laurent Le Minier (France) were subjected to fast-neutron mutagenesis. Following vernalization of young plants, 79% of plants survived to maturity. In all, 80,000 M₂ lines were screened for flowering in the absence of vernalization. Floral initials were observed in 35 lines, with nine flowering in <12 wk. Two lines (A2 and A7) were selfed to the M₄ generation. Floral initials were observed 66 and 87 d after sowing (DAS) in A2 and A7, respectively. Silicle development occurred in all A2 and most A7 plants at 92 and 123 DAS, respectively. Floral or silicle development was not observed in wild-type (WT) plants. Leaf zinc (Zn) concentration was similar in WT, A2 and A7 lines. These lines should facilitate future genetic studies of this remarkable species. Seed is publicly available through the European Arabidopsis Stock Centre (NASC).

Relevance:

100.00%

Publisher:

Abstract:

Descent and spreading of high-salinity water generated by salt rejection during sea ice formation in an Antarctic coastal polynya is studied using a hydrostatic, primitive-equation, three-dimensional ocean model called the Proudman Oceanographic Laboratory Coastal Ocean Modeling System (POLCOMS). The shape of the polynya is assumed to be a rectangle 100 km long and 30 km wide, and the salinity flux into the polynya at its surface is constant. The model has been run at high horizontal spatial resolution (500 m), and numerical simulations reveal a buoyancy-driven coastal current. The coastal current is a robust feature and appears in a range of simulations designed to investigate the influence of a sloping bottom, variable bottom drag, variable vertical turbulent diffusivities, higher salinity flux, and an offshore position of the polynya. It is shown that bottom drag is the main factor determining the current width. This coastal current has not been produced with other numerical models of polynyas, which may be because those models were run at coarser resolutions. The coastal current becomes unstable upstream of its front when the polynya is adjacent to the coast. When the polynya is situated offshore, an unstable current is produced from its outset owing to the capture of cyclonic eddies. The effect of a coastal protrusion and a canyon on the current motion is investigated. In particular, due to the convex shape of the coastal protrusion, the current sheds a dipolar eddy.

Relevance:

100.00%

Publisher:

Abstract:

Anchored in the service-dominant logic and service innovation literature, this study investigates the drivers of employee generation of ideas for service improvement (GISI). Employee GISI focuses on customer needs and on providing the exact service wanted by customers; it should enhance competitive advantage and organizational success (cf. Berry et al. 2006; Wang and Netemeyer 2004). Despite its importance, there is little research on the idea generation stage of the service development process (Chai, Zhang, and Tan 2005). This study contributes to the service field by providing the first empirical evaluation of the drivers of GISI. It also investigates perceived organizational support (POS) as a new explanatory determinant of reading of customer needs, and emotional exhaustion as an outcome of POS. Results show that the major driver of GISI is employees' reading of customer needs, followed by affective organizational commitment and job satisfaction. This research provides several new and important insights for service management practice by suggesting that special care should be put into selecting and recruiting employees who have the ability to read customer needs. Additionally, organizations should invest in creating work environments that encourage and reward the flow of ideas for service improvement.

Relevance:

100.00%

Publisher:

Abstract:

Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.

Relevance:

100.00%

Publisher:

Abstract:

Human ICT implants, such as RFID implants, cochlear implants, cardiac pacemakers, Deep Brain Stimulation, bionic limbs connected to the nervous system, and networked cognitive prostheses, are becoming increasingly complex. With ever-growing data processing functionalities in these implants, privacy and security become vital concerns. Electronic attacks on human ICT implants can cause significant harm, both to implant subjects and to their environment. This paper explores the vulnerabilities which human implants pose to crime victimisation in light of recent technological developments, and analyses how the law can deal with emerging challenges of what may well become the next generation of cybercrime: attacks targeted at technology implanted in the human body. After a state-of-the-art description of relevant types of human implants and a discussion of how these implants challenge existing perceptions of the human body, we describe how various modes of attack, such as sniffing, hacking, data interference, and denial of service, can be committed against implants. Subsequently, we analyse how these attacks can be assessed under current substantive and procedural criminal law, drawing on examples from UK and Dutch law. The possibilities and limitations of cybercrime provisions (e.g., unlawful access, system interference) and bodily integrity provisions (e.g., battery, assault, causing bodily harm) in dealing with human-implant attacks are analysed. Based on this assessment, the paper concludes that attacks on human implants are not only a new generation in the evolution of cybercrime but also raise fundamental questions about how criminal law conceives of attacks. Traditional distinctions between physical and non-physical modes of attack, between human bodies and things, and between the exterior and interior of the body need to be reinterpreted in light of developments in human implants. As the human body and technology become increasingly intertwined, cybercrime legislation and body-integrity crime legislation will also become intertwined, posing a new puzzle that legislators and practitioners will sooner or later have to solve.

Relevance:

100.00%

Publisher:

Abstract:

We utilize energy budget diagnostics from the Coupled Model Intercomparison Project phase 5 (CMIP5) to evaluate the models' climate forcing since preindustrial times, employing an established regression technique. The climate forcing evaluated this way, termed the adjusted forcing (AF), includes a rapid adjustment term associated with cloud changes and other tropospheric and land-surface changes. We estimate a 2010 total anthropogenic and natural AF from CMIP5 models of 1.9 ± 0.9 W m⁻² (5–95% range). The projected AFs of the Representative Concentration Pathway simulations are lower than their expected radiative forcing (RF) in 2095 but agree well with efficacy-weighted forcings from integrated assessment models. The smaller AF, compared with RF, is likely due to cloud adjustment. Multimodel time series of temperature change and AF from 1850 to 2100 show large intermodel spreads throughout the period. The intermodel spread of temperature change is principally driven by forcing differences in the present day and by climate feedback differences in 2095, although forcing differences are still important for model spread at 2095. We find no significant relationship between a model's equilibrium climate sensitivity (ECS) and its 2003 AF, in contrast to what was found in older models, where higher-ECS models generally had less forcing. Given the large present-day model spread, there is no indication of any tendency by modelling groups to adjust their aerosol forcing in order to produce observed trends. Instead, some CMIP5 models have a relatively large positive forcing and overestimate the observed temperature change.
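
The "established regression technique" is consistent with the widely used Gregory-style method: regress the global-mean top-of-atmosphere net flux imbalance N against surface warming ΔT, so the intercept at ΔT = 0 estimates the adjusted forcing and the slope the feedback parameter. A minimal sketch with synthetic numbers standing in for model output:

```python
import numpy as np

# Gregory-style regression: N = F + lam * dT, with N the TOA net flux
# imbalance (W m^-2) and dT the global-mean warming (K). The intercept F
# is the adjusted forcing; the (negative) slope lam is the climate
# feedback parameter. Synthetic annual means stand in for CMIP5 output.
rng = np.random.default_rng(0)
dT = np.linspace(0.5, 4.0, 150) + rng.normal(0.0, 0.1, 150)
N = 3.7 - 1.1 * dT + rng.normal(0.0, 0.3, 150)  # "true" F=3.7, lam=-1.1

lam, F = np.polyfit(dT, N, 1)  # slope, then intercept
print(f"adjusted forcing ~ {F:.2f} W m^-2, feedback ~ {lam:.2f} W m^-2 K^-1")
```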