Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having frictional properties different from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
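As a purely illustrative sketch of the kind of processing described (the published segmenter's actual algorithm is not reproduced here; the ground filter and Manning's n values below are assumptions), one can separate a LiDAR surface model into topography and vegetation height and then map vegetation height to a friction class:

```python
import numpy as np
from scipy import ndimage

def segment_lidar(dsm: np.ndarray, window: int = 15) -> tuple[np.ndarray, np.ndarray]:
    """Split a LiDAR digital surface model into ground elevation and vegetation height.

    A grey-scale morphological opening keeps the lowest surface within each
    window -- a crude stand-in for a proper ground filter, used here only to
    illustrate the topography/vegetation split.
    """
    ground = ndimage.grey_opening(dsm, size=(window, window))
    veg_height = dsm - ground
    return ground, veg_height

def friction_map(veg_height: np.ndarray) -> np.ndarray:
    """Map vegetation height to an illustrative Manning's n friction value."""
    n = np.full(veg_height.shape, 0.03)   # short grass / bare floodplain
    n[veg_height > 0.5] = 0.06            # shrubs and crops
    n[veg_height > 5.0] = 0.10            # hedges and trees
    return n
```

In a model of the kind described, the friction raster would then be sampled at each node of the finite element mesh.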
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches for the optimization of single-frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which is subject to decay dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with a minimum of analogue processing and a maximum of digital processing, has advantages compared with conventional approaches to eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
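As a hedged illustration of the inversion such an instrument can perform (a generic series-RLC sketch, not necessarily J-SCAN's actual formulae): the impulse-excited transient is a damped oscillation, v(t) ∝ e^(−αt) cos(ω_d t + φ), whose decay rate and ringing frequency encode R and L:

```latex
\[
\alpha = \frac{R}{2L}, \qquad
\omega_d = \sqrt{\frac{1}{LC} - \alpha^{2}}
\quad\Longrightarrow\quad
L = \frac{1}{C\,(\omega_d^{2} + \alpha^{2})}, \qquad
R = 2\alpha L .
\]
```

With C fixed by the circuit design, measuring (α, ω_d) from the digitized transient therefore yields the probe resistance and inductance directly, which is consistent with the rapid per-impulse calculation the abstract describes.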
Abstract:
Eddy current testing by current deflection detects surface cracks and geometric features by sensing the re-routing of currents. Currents are diverted by cracks in two ways: down the walls, and along their length at the surface. Current deflection utilises the latter currents, detecting them via their tangential magnetic field. Results from 3-D finite element computer modelling, which show the two forms of deflection, are presented. Further results indicate that the current deflection technique is suitable for the detection of surface cracks in smooth materials with varying material properties.
Abstract:
The development of shallow cellular convection in warm orographic clouds is investigated through idealized numerical simulations of moist flow over topography using a cloud-resolving numerical model. Buoyant instability, a necessary element for moist convection, is found to be diagnosed most accurately through analysis of the moist Brunt–Väisälä frequency (N_m) rather than the vertical profile of θ_e. In statically unstable orographic clouds (N_m^2 < 0), additional environmental and terrain-related factors are shown to have major effects on the amount of cellularity that occurs in 2D simulations. One of these factors, the basic-state wind shear, may suppress convection in 2D yet allow for longitudinal convective roll circulations in 3D. The presence of convective structures within an orographic cloud substantially enhanced the maximum rainfall rates, precipitation efficiencies, and precipitation accumulations in all simulations.
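For context (a standard result quoted as background, not the paper's own derivation), a widely used expression for the squared moist Brunt–Väisälä frequency in saturated air, after Durran and Klemp (1982), is:

```latex
\[
N_m^{2} = g\left[\frac{1}{T}\left(\frac{dT}{dz} + \Gamma_m\right)
\left(1 + \frac{L_v\, q_s}{R\, T}\right)
- \frac{1}{1 + q_w}\frac{dq_w}{dz}\right],
\]
```

where Γ_m is the moist-adiabatic lapse rate, L_v the latent heat of vaporization, q_s the saturation mixing ratio, q_w the total water mixing ratio and R the gas constant for dry air. Saturated regions with N_m² < 0 are buoyantly unstable, which is the diagnostic condition used in the abstract.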
Abstract:
Lava domes comprise core, carapace, and clastic talus components. They can grow endogenously, by inflation of a core, and/or exogenously, with the extrusion of shear-bounded lobes and whaleback lobes at the surface. Internal structure is paramount in determining the extent to which lava dome growth evolves stably, or conversely the propensity for collapse. The more core lava that exists within a dome, in both relative and absolute terms, the more explosive energy is available, both for large pyroclastic flows following collapse and in particular for lateral blast events following very rapid removal of lateral support to the dome. Knowledge of the location of the core lava within the dome is also relevant for hazard assessment purposes. A spreading toe, or lobe of core lava, over a talus substrate may be both relatively unstable and likely to accelerate to more violent activity during the early phases of a retrogressive collapse. Soufrière Hills Volcano, Montserrat, has been erupting since 1995 and has produced numerous lava domes that have undergone repeated collapse events. We consider one continuous dome growth period, from August 2005 to May 2006, that resulted in a dome collapse event on 20th May 2006. The collapse lasted 3 h and was unusually violent and rapid, removing the whole dome plus dome remnants from a previous growth period. We use an axisymmetrical computational Finite Element Method model for the growth and evolution of a lava dome. Our model comprises evolving core, carapace and talus components based on axisymmetrical endogenous dome growth, which permits us to model the interface between talus and core. Despite explicitly modelling only axisymmetrical endogenous dome growth, our core–talus model simulates many of the observed growth characteristics of the 2005–2006 SHV lava dome well. Further, our simulations can replicate large-scale exogenous characteristics when a considerable volume of talus has accumulated around the lower flanks of the dome. Model results suggest that dome core can override talus within a growing dome, potentially generating a region of significant weakness and a potential locus for collapse initiation.
Abstract:
During many lava dome-forming eruptions, persistent rockfalls and the concurrent development of a substantial talus apron around the foot of the dome are important aspects of the observed activity. An improved understanding of internal dome structure, including the shape and internal boundaries of the talus apron, is critical for determining when a lava dome is poised for a major collapse and how this collapse might ensue. We consider a period of lava dome growth at the Soufrière Hills Volcano, Montserrat, from August 2005 to May 2006, during which a 100 × 10⁶ m³ lava dome developed that culminated in a major dome-collapse event on 20 May 2006. We use an axisymmetrical Finite Element Method model to simulate the growth and evolution of the lava dome, including the development of the talus apron. We first test the generic behaviour of this continuum model, which has core lava and carapace/talus components. Our model describes the generation rate of talus, including its spatial and temporal variation, as well as its post-generation deformation, which is important for an improved understanding of the internal configuration and structure of the dome. We then use our model to simulate the 2005 to 2006 Soufrière Hills dome growth, using measured dome volumes and extrusion rates to drive the model and generate the evolving configuration of the dome core and carapace/talus domains. The evolution of the model is compared with the observed rockfall seismicity using event counts and seismic energy parameters, which are used here as a measure of rockfall intensity and hence a first-order proxy for volumes. The range of model-derived volume increments of talus aggraded to the talus slope per recorded rockfall event, approximately 3 × 10³–13 × 10³ m³ per rockfall, is high with respect to estimates based on observed events. From this, it is inferred that some of the volumetric growth of the talus apron (perhaps up to 60–70%) might have occurred in the form of aseismic deformation of the talus, forced by an internal, laterally spreading core. Talus apron growth by this mechanism has not previously been identified, and this suggests that the core, hosting hot gas-rich lava, could have a greater lateral extent than previously considered.
Abstract:
Ab initio calculations of the energy have been made at approximately 150 points on the two lowest singlet A' potential energy surfaces of the water molecule, 1A' and 2A', covering structures having D∞h, C∞v, C2v and Cs symmetries. The object was to obtain an ab initio surface of uniform accuracy over the whole three-dimensional coordinate space. Molecular orbitals were constructed from a double-zeta plus Rydberg basis, and correlation was introduced by single and double excitations from multiconfiguration states which gave the correct dissociation behaviour. A two-valued analytical potential function has been constructed to fit these ab initio energy calculations. The adiabatic energies are given in our analytical function as the eigenvalues of a 2 × 2 matrix, whose diagonal elements define two diabatic surfaces. The off-diagonal element goes to zero for those configurations corresponding to surface intersections, so that our adiabatic surface exhibits the correct Σ/Π conical intersections for linear configurations, and singlet/triplet intersections of the O + H2 dissociation fragments. The agreement between our analytical surface and experiment has been improved by using empirical diatomic potential curves in place of those derived from ab initio calculations.
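For reference, the eigenvalues of a symmetric 2 × 2 diabatic matrix with diagonal elements H₁₁, H₂₂ and off-diagonal element H₁₂ take the standard two-state form:

```latex
\[
V_{\pm} = \frac{H_{11} + H_{22}}{2}
\pm \sqrt{\left(\frac{H_{11} - H_{22}}{2}\right)^{2} + H_{12}^{2}} .
\]
```

The two adiabatic surfaces touch (V₊ = V₋) only where H₁₁ = H₂₂ and H₁₂ = 0 simultaneously, which is why driving the off-diagonal element to zero at the appropriate configurations reproduces the Σ/Π conical intersections at linear geometries.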
Abstract:
Preface. Iron is considered to be a minor element employed, in a variety of forms, by nearly all living organisms. In some cases, it is utilised in large quantities, for instance in the formation of magnetosomes within magnetotactic bacteria, or as a respiratory electron donor or acceptor by iron-oxidising or iron-reducing bacteria. However, in most cases the role of iron is restricted to its use as a cofactor or prosthetic group assisting the biological activity of many different types of protein. The key metabolic processes that depend on iron as a cofactor are numerous; they include respiration, light harvesting, nitrogen fixation, the Krebs cycle, redox stress resistance, amino acid synthesis and oxygen transport. Indeed, it is clear that Life in its current form would be impossible in the absence of iron. One of the main reasons for the reliance of Life upon this metal is the ability of iron to exist in multiple redox states, in particular the relatively stable ferrous (Fe2+) and ferric (Fe3+) forms. The availability of these stable oxidation states allows iron to engage in redox reactions over a wide range of midpoint potentials, depending on the coordination environment, making it an extremely adaptable mediator of electron exchange processes. Iron is also one of the most common elements within the Earth’s crust (5% abundance) and thus is considered to have been readily available when Life evolved on our early, anaerobic planet. However, as oxygen accumulated within the atmosphere some 2.4 billion years ago (the ‘Great Oxidation Event’), and as the oceans became less acidic, the iron within primordial oceans was converted from its soluble reduced form to its weakly soluble oxidised ferric form, which precipitated (~1.8 billion years ago) to form the ‘banded iron formations’ (BIFs) observed today in Precambrian sedimentary rocks around the world. These BIFs provide a geological record marking a transition point away from the ancient anaerobic world towards the modern aerobic Earth. They also indicate a period over which the bio-availability of iron shifted from abundance to limitation, a condition that extends to the modern day. Thus, it is considered likely that the vast majority of extant organisms face the common problem of securing sufficient iron from their environment – a problem that Life on Earth has had to cope with for some 2 billion years. This struggle for iron is exemplified by the competition for this metal amongst co-habiting microorganisms, who resort to stealing (pirating) each other’s iron supplies! The reliance of microorganisms upon iron can be disadvantageous to them: to our innate immune system it represents a chink in the microbial armour, offering an opportunity that can be exploited to ward off pathogenic invaders. In order to infect body tissues and cause disease, pathogens must secure all their iron from the host. To fight such infections, the host specifically withdraws available iron through the action of various iron-depleting processes (e.g. the release of lactoferrin and lipocalin-2) – this represents an important strategy in our defence against disease. However, pathogens are frequently able to deploy iron acquisition systems that target host iron sources such as transferrin, lactoferrin and hemoproteins, and thus counteract the iron-withdrawal approaches of the host.
Inactivation of such host-targeting iron-uptake systems often attenuates the pathogenicity of the invading microbe, illustrating the importance of ‘the battle for iron’ in the infection process. The role of iron sequestration systems in facilitating microbial infections has been a major driving force in research aimed at unravelling the complexities of microbial iron transport processes. In addition, the intricacy of such systems offers a challenge that stimulates curiosity. One such challenge is to understand how balanced levels of free iron within the cytosol are achieved in a way that avoids toxicity whilst providing sufficient levels for metabolic purposes – a requirement that all organisms have to meet. Although the systems involved in achieving this balance can be highly variable amongst different microorganisms, the overall strategy is common. At a coarse level, the homeostatic control of cellular iron is maintained through strict control of the uptake, storage and utilisation of available iron, and is co-ordinated by integrated iron-regulatory networks. However, much remains to be discovered concerning the fine details of these different iron-regulatory processes. As already indicated, perhaps the most difficult task in maintaining iron homeostasis is simply the procurement of sufficient iron from external sources. The importance of this problem is demonstrated by the plethora of distinct iron transporters often found within a single bacterium, each targeting different forms (complex or redox state) of iron or a different environmental condition. Thus, microbes devote considerable cellular resource to securing iron from their surroundings, reflecting how successful acquisition of iron can be crucial in the competition for survival. The aim of this book is to provide the reader with an overview of iron transport processes within a range of microorganisms and to provide an indication of how microbial iron levels are controlled. This aim is promoted through the inclusion of expert reviews on several well-studied examples that illustrate the current state of play concerning our comprehension of how iron is translocated into the bacterial (or fungal) cell and how iron homeostasis is controlled within microbes. The first two chapters (1-2) consider the general properties of microbial iron-chelating compounds (known as ‘siderophores’), and the mechanisms used by bacteria to acquire haem and utilise it as an iron source. The following twelve chapters (3-14) focus on specific types of microorganism that are of key interest, covering both an array of pathogens for humans, animals and plants (e.g. species of Bordetella, Shigella, Erwinia, Vibrio, Aeromonas, Francisella, Campylobacter and Staphylococci, and EHEC) as well as a number of prominent non-pathogens (e.g. the rhizobia, E. coli K-12, Bacteroides spp., cyanobacteria, Bacillus spp. and yeasts). The chapters relay the common themes in microbial iron-uptake approaches (e.g. the use of siderophores, TonB-dependent transporters, and ABC transport systems), but also highlight many distinctions (such as the use of different types of iron regulator and the impact of the presence/absence of a cell wall) in the strategies employed. We hope that those both within and outside the field will find this book useful, stimulating and interesting. We intend that it will provide a source of reference that will assist relevant researchers and provide an entry point for those initiating their studies within this subject.
Finally, it is important that we acknowledge and thank wholeheartedly the many contributors who have provided the 14 excellent chapters from which this book is composed. Without their considerable efforts, this book, and the understanding that it relays, would not have been possible. Simon C Andrews and Pierre Cornelis
Abstract:
Planning is a vital element of project management, but it is still not recognized as a process variable. Its objective should be to outperform the initially defined processes and to foresee and overcome possible undesirable events. Detailed task-level master planning is unrealistic, since one cannot accurately predict all the requirements and obstacles before work has even started. The process planning methodology (PPM) has thus been developed in order to overcome common problems of overwhelming project complexity. The essential elements of the PPM are the process planning group (PPG), including a control team that dynamically links the production/site and management, and the planning algorithm embodied within two continuous-improvement loops. The methodology was tested on a factory project in Slovenia and in four successive projects of a similar nature. In addition to a number of improvement ideas and enhanced communication, the applied PPM resulted in 32% higher total productivity, 6% total savings and a synergistic project environment.
Abstract:
Construction materials and equipment are essential building blocks of every construction project and may account for 50-60 per cent of the total cost of construction. The rate of their utilization, on the other hand, is the element that most directly relates to project progress. A growing concern in the industry that inadequate efficiency hinders its success could thus be addressed by turning construction into a logistic process. Although mostly limited, recent attempts and studies show that Radio Frequency IDentification (RFID) applications have significant potential in construction. The aim of this research, however, is to show that the technology should not only be used for automation and tracking to overcome supply chain complexity, but also as a tool to generate, record and exchange process-related knowledge among the supply chain stakeholders. This would enable all involved parties to identify and understand the consequences of any forthcoming difficulties and react accordingly before they cause major disruptions in the construction process. To achieve this aim, the study proceeds in several steps. First, it develops a generic understanding of how RFID technology has been used in logistic processes in industrial supply chain management. Second, it investigates recent applications of RFID as an information and communication technology support facility in construction logistics for the management of the construction supply chain. Based on these, the study develops an improved concept of a construction logistics architecture that explicitly relies on integrating RFID with the Global Positioning System (GPS). The developed conceptual model architecture shows that categorisation provided through RFID, and traceability as a result of RFID/GPS integration, could be used as a tool to identify, record and share potential problems and thus vastly improve knowledge management processes within the entire supply chain. The findings thus clearly show a need for future research in this area.
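As a purely illustrative data-structure sketch (the record names, fields and naive fusion rule below are assumptions, not the paper's actual architecture), an RFID/GPS traceability event might pair each tag read with the nearest-in-time GPS fix and carry stakeholder notes as the shared process knowledge:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RfidRead:
    tag_id: str         # EPC of the tagged material or equipment item
    reader_id: str      # gate or vehicle reader that saw the tag
    timestamp: datetime

@dataclass
class GpsFix:
    lat: float
    lon: float
    timestamp: datetime

@dataclass
class TraceEvent:
    """One traceability record: what (RFID), where (GPS), when, plus notes."""
    read: RfidRead
    fix: GpsFix
    notes: list[str] = field(default_factory=list)  # process knowledge shared by stakeholders

def fuse(read: RfidRead, fixes: list[GpsFix]) -> TraceEvent:
    # Pair the tag read with the GPS fix closest in time (naive nearest-neighbour fusion).
    nearest = min(fixes, key=lambda f: abs((f.timestamp - read.timestamp).total_seconds()))
    return TraceEvent(read=read, fix=nearest)
```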
Abstract:
Commitment of employees is relatively low in construction. This problem is exacerbated by companies' inability to attract, motivate, and retain talent, which is then often channelled into other, more attractive industrial sectors where the prospects, conditions and rewards are perceived to be much higher. The purpose of this study is thus primarily to develop a generic model to maximise employees' engagement, improve their motivation and increase retention levels. To achieve this aim, the investigation looks into how perceived employment obligations and expectations impact commitment and, through that, organisational performance. The study is based on the postulations of Luhmann's theory of social systems, with communication viewed as a constitutive element of a social system. Consequently, expectations of a particular party in an employment relationship are represented in a communicative space requiring the other party's understanding in order to align the expectations of both sides in the relationship. Explicitly, an employee's alignment with the manager's expectations, as the employee perceives them, determines his/her commitment to fulfil obligations towards the manager. The result of this first stage of research is a conceptual model developed from the substantial supporting evidence in the literature; it forms the framework for mitigating low commitment, motivation and retention of employees. The model particularly focuses on factors affecting employees' perceived expectations, such as reneging, incongruence and the process of communication. In the future the model will be validated using empirical data from a combination of observational and enquiry-based research. Once completed, the model will provide a framework for informing Human Resource Management policies with the aim of improving commitment of employees, increasing the levels of retention and consequently improving the performance of construction organisations.
Abstract:
The aim of this study is to explore the environmental factors that determine plant community distribution in northeast Algeria. This paper provides a quantitative analysis of the vegetation-environment relationships for a study site in the Chott El Beida wetland, a RAMSAR site in Setif, Algeria. Sixty vegetation plots were sampled and analysed using TWINSPAN and Detrended Correspondence Analysis (DCA) in order to identify the principal vegetation communities and determine the environmental gradients associated with these. A total of 127 species belonging to 41 families and 114 genera were recorded. Six of the recorded species were endemic, representing 4.7% of the total. The richest families were Compositae, Gramineae, Cruciferae and Chenopodiaceae. Therophytes and hemicryptophytes were the most frequent life forms. The Mediterranean floristic element is dominant and is represented by 39 species. The samples were classified into four main community types. The principal DCA axes represent gradients of soil salinity, moisture and anthropogenic pressure. The use of classification in combination with ordination techniques resulted in a good discrimination between plant communities and a greater understanding of the controlling environmental factors. The methodology adopted can be employed to improve baseline information on plant community ecology and distribution in often critically endangered Mediterranean wetland areas.
Abstract:
Promotion of adherence to healthy-eating norms has become an important element of nutrition policy in the United States and other developed countries. We assess the potential consumption impacts of adherence to a set of recommended dietary norms in the United States using a mathematical programming approach. We find that adherence to recommended dietary norms would involve significant changes in diets, with large reductions in the consumption of fats and oils along with large increases in the consumption of fruits, vegetables, and cereals. Compliance with norms recommended by the World Health Organization for energy derived from sugar would involve sharp reductions in sugar intakes. We also analyze how the dietary adjustments required vary across demographic groups. Most socio-demographic characteristics appear to have relatively little influence on the pattern of adjustment required to comply with norms; income levels have little effect on required dietary adjustments. Education is the only characteristic to have a significant influence on the magnitude of the adjustments required. The least educated, rather than the poorest, have to bear the highest burden of adjustment. Our analysis suggests that fiscal measures like nutrient-based taxes may not be as regressive as commonly believed. Dissemination of healthy-eating norms to the less educated will be a key challenge for nutrition policy.
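A minimal sketch of the kind of mathematical programming formulation such an analysis can use (all food groups, nutrient coefficients and norms below are invented for illustration, not the study's data): find the diet closest to the current one, in total absolute deviation, that satisfies the nutrient norms.

```python
import numpy as np
from scipy.optimize import linprog

foods = ["fats_oils", "fruit", "vegetables", "cereals", "sugar"]
current = np.array([5.0, 1.0, 1.5, 4.0, 3.0])  # servings/day (illustrative)

# Nutrient content per serving (rows: energy kcal, sugar g, fibre g) -- invented numbers.
A = np.array([[120, 60, 80, 150, 50],
              [  0, 10,  5,   2, 25],
              [  0,  2,  3,   4,  0]], dtype=float)
b_min = np.array([1800.0, 0.0, 25.0])      # lower norms
b_max = np.array([2200.0, 50.0, np.inf])   # upper norms

n = len(foods)
I = np.eye(n)
Z = np.zeros_like(A)

# Variables z = [x, d]: x = new servings, d bounds |x - current| via two inequalities.
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize total deviation from current diet
A_ub = np.vstack([np.hstack([ I, -I]),          #  x - d <= current
                  np.hstack([-I, -I]),          # -x - d <= -current
                  np.hstack([-A,  Z]),          #  A x >= b_min
                  np.hstack([ A,  Z])])         #  A x <= b_max
b_ub = np.concatenate([current, -current, -b_min, b_max])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n), method="highs")
for food, old, new in zip(foods, current, res.x[:n]):
    print(f"{food:10s} {old:5.1f} -> {new:5.1f}")
```

With these invented coefficients the solution cuts sugar servings sharply while raising fruit, vegetable and cereal servings to meet the energy and fibre norms, mirroring the direction of adjustment the abstract reports.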
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment-selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power.
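As background on the efficient score formulation (a standard construction in group-sequential methodology, sketched here under the usual asymptotics rather than taken from the paper): for a parameter of interest θ, with θ = 0 under the null hypothesis, the efficient score S and its Fisher information V satisfy, approximately for small θ,

```latex
\[
S \sim N(\theta V,\; V),
\]
```

so interim monitoring can be phrased entirely in terms of the pair (S, V) whatever the outcome type. For example, the log-rank statistic is the efficient score for the log hazard ratio with failure-time data, which is why binary, normal and survival endpoints fit a single sequential framework.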