22 results for common mode current
in CentAUR: Central Archive University of Reading - UK
Abstract:
Frontal erosion is a common mode by which coastal lowlands are destroyed. Edge cliffing, however, is also an inherent aspect of salt marsh development in many northwest European tidal marshes. Several geomorphologists in the first half of the twentieth century recognized such edge erosion as a definite repetitive stage within an autocyclic mode of marsh growth. A shift in research priorities during the past decades (driven primarily by coastal management concerns) has resulted in an enhanced focus on sediment-flux measurement campaigns on salt marshes. This somewhat "object-oriented" strategy hindered further development of the once-established autocyclic growth concept, which has virtually fallen into oblivion in recent times. This work attempts to resurrect the notion of autocyclicity by employing its premises to address edge erosion in tidal marshes. A review of intertidal morphosedimentology provides the underlying framework for autocyclicity. The phenomenon is demonstrated in the Holocene salt marsh plain of the Moricambe basin in NW England, which displays several distinct phases of marsh retreat in the form of abandoned clifflets. The suite of abandoned shorelines and terraces was identified by detailed field mapping that followed analysis of topographic maps and aerial photographs. Vertical trends in marsh plain sediments were recorded in trenches for signs of past marsh front movements. The characteristic sea level history of the area offers an opportunity to differentiate the morphodynamic variability induced in the autocyclic growth of the marsh plain under scenarios of rising and falling sea level and the accompanying change in sediment budget. These ideas are incorporated into a conceptual model that links the temporal extent of marsh erosion to inner tidal flat sediment budget and sea level tendency.
The review leads to recognition of the need for a holistic approach in morphodynamic investigations, in which marshes should be treated as a component of the "marsh-mudflat system", as each element apparently modulates the evolution of the other, with an eventual linkage to subtidal channels. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Conserved among all coronaviruses are four structural proteins: the matrix (M), small envelope (E), and spike (S) proteins, which are embedded in the viral membrane, and the nucleocapsid phosphoprotein (N), which exists in a ribonucleoprotein complex in the lumen. The N-terminal domain of coronaviral N proteins (N-NTD) provides a scaffold for RNA binding, while the C-terminal domain (N-CTD) mainly acts as an oligomerization module during assembly. The C terminus of the N protein anchors it to the viral membrane by associating with the M protein. We characterized the structures of the N-NTD from severe acute respiratory syndrome coronavirus (SARS-CoV) in two crystal forms, at 1.17 Å (monoclinic) and 1.85 Å (cubic) resolution, solved by molecular replacement using the homologous avian infectious bronchitis virus (IBV) structure. Flexible loops in the solution structure of SARS-CoV N-NTD are now shown to be well ordered around the beta-sheet core. The functionally important positively charged beta-hairpin protrudes out of the core, is oriented similarly to that in the IBV N-NTD, and is involved in crystal packing in the monoclinic form. In the cubic form, the monomers form trimeric units that stack in a helical array. Comparison of the crystal packing of SARS-CoV and IBV N-NTDs suggests a common mode of RNA recognition, but they probably associate differently in vivo during formation of the ribonucleoprotein complex. The electrostatic potential distribution on the surface of homology models of related coronaviral N-NTDs suggests that they use different modes of both RNA recognition and oligomeric assembly, perhaps explaining why their nucleocapsids have different morphologies.
Abstract:
At its inception, the paradigm of system dynamics was deliberately made distinct from that of OR. Yet developments in soft OR now have much in common with current system dynamics modeling practice. This article briefly traces the parallel development of system dynamics and soft OR, and argues that a dialogue between the two would be mutually rewarding. To support this claim, examples of soft OR tools are described, along with some of the field's philosophical grounding and current issues. Potential benefits resulting from a dialogue are explored, with particular emphasis on the methodological framework of system dynamics and the need for a complementarist approach. The article closes with some suggestions on how to begin learning from each other.
Abstract:
Microcontroller-based peak current mode control of a buck converter is investigated. The proposed solution uses a discrete-time controller with digital slope compensation, implemented on a single-chip microcontroller to achieve desirable cycle-by-cycle peak current limiting. The digital controller is implemented as a two-pole, two-zero linear difference equation designed using a continuous-time model of the buck converter and a discrete-time transform. Subharmonic oscillations are removed with digital slope compensation using a discrete staircase ramp. A 16 W hardware implementation directly compares analog and digital control. Frequency response measurements show that the crossover frequency and expected phase margin of the digital control system match those of its analog counterpart.
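The controller this abstract describes can be sketched in a few lines: a two-pole, two-zero linear difference equation plus a discrete staircase slope-compensation ramp. This is a minimal illustration only; the coefficient values, ramp slope, and step counts below are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: two-pole/two-zero digital compensator and staircase
# slope compensation for peak current mode control. All numeric values
# are illustrative, not taken from the paper's design.

def make_compensator(b, a):
    """Two-pole, two-zero linear difference equation:
    u[n] = b0*e[n] + b1*e[n-1] + b2*e[n-2] - a1*u[n-1] - a2*u[n-2],
    where e is the voltage-loop error and u the peak-current command."""
    e1 = e2 = 0.0  # e[n-1], e[n-2]
    u1 = u2 = 0.0  # u[n-1], u[n-2]

    def step(e):
        nonlocal e1, e2, u1, u2
        u = b[0] * e + b[1] * e1 + b[2] * e2 - a[0] * u1 - a[1] * u2
        e2, e1 = e1, e
        u2, u1 = u1, u
        return u

    return step

def slope_ramp(peak_cmd, slope, steps):
    """Discrete staircase ramp subtracted from the peak-current command
    across one switching period, suppressing subharmonic oscillation."""
    return [peak_cmd - slope * k for k in range(steps)]
```

On the actual hardware, one `step` evaluation and one staircase update would run per switching cycle inside the microcontroller's interrupt routine; here they are plain functions for clarity.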
Abstract:
The toughness of a polymer glass is determined by the interplay of yielding, strain softening, and strain hardening. Molecular-dynamics simulations of a typical polymer glass, atactic polystyrene, under active deformation have been carried out to elucidate these processes. It is observed that the dominant interaction for the yield peak is of interchain nature, and for strain hardening of intrachain nature. A connection is made with the microscopic cage-to-cage motion. It is found that the deformation does not completely erase the thermal history; differences persist at large length scales. We also find that the strain-hardening modulus increases with increasing external pressure. This new observation cannot be explained by current theories, such as the one based on the entanglement picture, and including this effect should lead to an improvement in constitutive modeling.
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source; indeed, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process.
We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Objectives - To assess the general public's interpretation of the verbal descriptors for side effect frequency recommended for use in medicine information leaflets by a European Union (EU) guideline, and to examine the extent to which differences in interpretation affect people's perception of risk and their judgments of intention to comply with the prescribed treatment. Method - Two studies used a controlled empirical methodology in which people were presented with a hypothetical, but realistic, scenario about visiting their general practitioner and being prescribed medication. They were given an explanation that focused on the side effects of the medicine, together with information about the probability of occurrence using either numerical percentages or the corresponding EU verbal descriptors. Interpretation of the descriptors was assessed. In study 2, participants were also required to make various judgments, including risk to health and intention to comply. Key findings - In both studies, use of the EU recommended descriptors led to significant overestimations of the likelihood of particular side effects occurring. Study 2 further showed that the "overestimation" resulted in significantly increased ratings of perceived severity of side effects and risk to health, as well as significantly reduced ratings of intention to comply, compared with those for people who received the probability information in numerical form. Conclusion - While it is recognised that the current findings require replication in a clinical setting, the European and national authorities should suspend the use of the EU recommended terms until further research is available to allow the use of an evidence-based approach.
Abstract:
The wild common bean (Phaseolus vulgaris) is widely but discontinuously distributed from northern Mexico to northern Argentina on both sides of the Isthmus of Panama. Little is known about how the species reached its current disjunct distribution. In this research, chloroplast DNA polymorphisms in seven non-coding regions were used to study the history of migration of wild P. vulgaris between Mesoamerica and South America. A penalized likelihood analysis was applied to previously published Leguminosae ITS data to estimate divergence times between P. vulgaris and its sister taxa from Mesoamerica, and divergence times of populations within P. vulgaris. Fourteen chloroplast haplotypes were identified by PCR-RFLP, and their geographical associations were studied by means of a Nested Clade Analysis and Mantel tests. The results suggest that the haplotypes are not randomly distributed but occupy discrete parts of the geographic range of the species. The current distribution of haplotypes may be explained by isolation by distance and by at least two migration events between Mesoamerica and South America: one from Mesoamerica to South America and another from northern South America to Mesoamerica. Age estimates place the divergence of P. vulgaris from its sister taxa from Mesoamerica at or before 1.3 Ma, and the divergence of populations from Ecuador-northern Peru at or before 0.6 Ma. As these ages are taken as minimum divergence times, the influence of past events, such as the closure of the Isthmus of Panama and the final uplift of the Andes, on the migration history and population structure of this species cannot be disregarded.
Abstract:
Cloud optical depth is one of the most poorly observed climate variables. The new "cloud mode" capability in the Aerosol Robotic Network (AERONET) will inexpensively yet dramatically increase cloud optical depth observations in both number and accuracy. Cloud mode optical depth retrievals from AERONET were evaluated at the Atmospheric Radiation Measurement program's Oklahoma site in sky conditions ranging from broken clouds to overcast. For overcast cases, the 1.5 min average AERONET cloud mode optical depths agreed to within 15% of those from a standard ground-based flux method. For broken cloud cases, AERONET retrievals also captured rapid variations detected by the microwave radiometer. For a 3 year climatology derived from all nonprecipitating clouds, AERONET monthly mean cloud optical depths are generally larger than cloud radar retrievals because the current cloud mode observation strategy is biased toward measurements of optically thick clouds. This study has demonstrated a new way to enhance the existing AERONET infrastructure to observe cloud optical properties on a global scale.
Abstract:
Due to the pivotal role played by human serum albumin (HSA) in the transport and cytotoxicity of titanocene complexes, a docking study has been performed on a selected set of titanocene complexes to aid in the current understanding of the potential mode of action of these titanocenes upon binding HSA. Analysis of the docking results has revealed potential binding at the known drug binding sites in HSA and has provided some explanation for the specificity and subsequent cytotoxicity of these titanocenes. Additionally, a new alternative binding site for these titanocenes has been postulated.
Abstract:
Tourism is the world's largest employer, accounting for 10% of jobs worldwide (WTO, 1999). There are over 30,000 protected areas around the world, covering about 10% of the land surface (IUCN, 2002). Protected area management is moving towards a more integrated form of management, which recognises the social and economic needs of the world's finest areas and seeks to provide long-term income streams and support social cohesion through active but sustainable use of resources. Ecotourism - 'responsible travel to natural areas that conserves the environment and improves the well-being of local people' (The Ecotourism Society, 1991) - is often cited as a panacea for incorporating the principles of sustainable development in protected area management. However, few examples exist worldwide to substantiate this claim. In reality, ecotourism struggles to provide social and economic empowerment locally and fails to secure proper protection of the local and global environment. Current analysis of ecotourism provides a useful checklist of interconnected principles for more successful initiatives, but no overall framework of analysis or theory. This paper argues that applying common property theory to the application of ecotourism can help to establish a more rigorous, multi-layered analysis that identifies the institutional demands of community based ecotourism (CBE). The paper draws on existing literature on ecotourism and several new case studies from developed and developing countries around the world. It focuses on the governance of CBE initiatives, particularly the interaction between local stakeholders and government, and the role that third-party non-governmental organisations can play in brokering appropriate institutional arrangements. The paper concludes by offering future research directions.
Abstract:
Current methods for estimating event-related potentials (ERPs) assume stationarity of the signal. Empirical Mode Decomposition (EMD) is a data-driven decomposition technique that does not assume stationarity. We evaluated an EMD-based method for estimating the ERP. On simulated data, EMD substantially reduced background EEG while retaining the ERP. EMD-denoised single trials also estimated shape, amplitude, and latency of the ERP better than raw single trials. On experimental data, EMD-denoised trials revealed event-related differences between two conditions (condition A and B) more effectively than trials lowpass filtered at 40 Hz. EMD also revealed event-related differences on both condition A and condition B that were clearer and of longer duration than those revealed by low-pass filtering at 40 Hz. Thus, EMD-based denoising is a promising data-driven, nonstationary method for estimating ERPs and should be investigated further.
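The partial-reconstruction step common to EMD-based denoising can be sketched as follows. This assumes the intrinsic mode functions (IMFs) have already been extracted; the correlation-with-template selection rule shown here is one common strategy and an assumption for illustration, not necessarily the criterion used in the study.

```python
import numpy as np

def select_and_sum(imfs, template, thresh=0.2):
    """Partial EMD reconstruction: keep the IMFs whose absolute
    correlation with a template (e.g. the across-trial average ERP)
    exceeds thresh, and sum them to form the denoised single trial.
    The threshold value is an illustrative assumption."""
    kept = [imf for imf in imfs
            if abs(np.corrcoef(imf, template)[0, 1]) > thresh]
    if not kept:
        return np.zeros_like(template)
    return np.sum(kept, axis=0)
```

In practice the IMFs would come from a sifting routine (e.g. the PyEMD package); the point here is only that ERP-like components are retained while background-EEG-like components are discarded.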
Abstract:
This paper takes as its starting point the assertion that current rangeland management in the central Eastern Cape Province (former Ciskei) of South Africa is characterised primarily by an 'open access' approach. Empirical material drawn from three case-study communities in the region is used to examine the main barriers to management of rangeland as a 'commons'. The general inability to define and enforce rights to particular grazing resources in the face of competing claims from 'outsiders', as well as inadequate local institutions responsible for rangeland management, are highlighted as being of key importance. These are often exacerbated by lack of available grazing land, diffuse user groups and local political and ethnic divisions. Many of these problems have a strong legacy in historical apartheid policies such as forced resettlement and betterment planning. On this basis it is argued that policy should focus on facilitating the emergence of effective local institutions for rangeland management. Given the limited grazing available to many communities in the region, a critical aspect of this will be finding ways to legitimise current patterns of extensive resource use, which traverse existing 'community' boundaries. However, this runs counter to recent legislation, which strongly links community management with legal ownership of land within strict boundaries often defined through fencing. Finding ways to overcome this apparent disjuncture between theory and policy will be vital for the effective management of common pool grazing resources in the region.
Abstract:
The internal variability and coupling between the stratosphere and troposphere in CCMVal‐2 chemistry‐climate models are evaluated through analysis of the annular mode patterns of variability. Computation of the annular modes in long data sets with secular trends requires refinement of the standard definition of the annular mode, and a more robust procedure that allows for slowly varying trends is established and verified. The spatial and temporal structure of the models’ annular modes is then compared with that of reanalyses. As a whole, the models capture the key features of observed intraseasonal variability, including the sharp vertical gradients in structure between stratosphere and troposphere, the asymmetries in the seasonal cycle between the Northern and Southern hemispheres, and the coupling between the polar stratospheric vortices and tropospheric midlatitude jets. It is also found that the annular mode variability changes little in time throughout simulations of the 21st century. There are, however, both common biases and significant differences in performance in the models. In the troposphere, the annular mode in models is generally too persistent, particularly in the Southern Hemisphere summer, a bias similar to that found in CMIP3 coupled climate models. In the stratosphere, the periods of peak variance and coupling with the troposphere are delayed by about a month in both hemispheres. The relationship between increased variability of the stratosphere and increased persistence in the troposphere suggests that some tropospheric biases may be related to stratospheric biases and that a well‐simulated stratosphere can improve simulation of tropospheric intraseasonal variability.
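The "more robust procedure that allows for slowly varying trends" can be illustrated schematically: remove a slow trend from the field before taking the leading principal component as the annular mode index. The running-mean detrending and EOF-via-SVD steps below are a rough, generic stand-in for the paper's procedure, purely for illustration.

```python
import numpy as np

def annular_mode_index(field, window=90):
    """Illustrative annular mode index: leading principal component of a
    (time x space) anomaly field after removing a slowly varying trend.
    The running-mean detrending here is an assumed simplification of the
    refined definition the abstract alludes to."""
    # remove a slow trend at each spatial point with a centred running mean
    kernel = np.ones(window) / window
    trend = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, field)
    anom = field - trend
    # leading EOF / principal component via SVD of the anomaly matrix
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pc = u[:, 0] * s[0]
    return pc / pc.std()  # standardised index, one value per time step
```

A reanalysis-versus-model comparison like the one in the abstract would then correlate such indices (and their autocorrelation timescales, to quantify the persistence bias) between data sets.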
Abstract:
The purpose of this lecture is to review recent developments in data analysis, initialization and data assimilation. The development of three-dimensional multivariate schemes has been very timely because of their suitability for handling the many different types of observations during FGGE. Great progress has been made in the initialization of global models with the aid of the non-linear normal mode technique. However, in spite of this progress, several fundamental problems remain unsatisfactorily solved. Of particular importance are the initialization of the divergent wind fields in the Tropics and finding proper ways to initialize weather systems driven by non-adiabatic processes. The unsatisfactory ways in which such processes are currently initialized lead to excessively long spin-up times.