95 results for Abstraction Hierarchy
Abstract:
The Environmental Data Abstraction Library provides a modular data management library for bringing new and diverse datatypes together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasize the value of this capability for cross calibration of datasets and evaluation of model products against observations, including preparation for data assimilation.
Abstract:
The Ramsar site of Lake Uluabat, western Turkey, suffers from eutrophication, urban and industrial pollution and water abstraction, and its water levels are managed artificially. Here we combine monitoring and palaeolimnological techniques to investigate spatial and temporal limnological variability and ecosystem impact, using an ostracod and mollusc survey to strengthen interpretation of the fossil record. A combination of low invertebrate Biological Monitoring Working Party scores (<10) and the dominance of eutrophic diatoms in the modern lake confirms its poor ecological status. Palaeolimnological analysis of recent (last >200 yr) changes in organic and carbonate content, diatoms, stable isotopes, ostracods and molluscs in a lake sediment core (UL20A) indicates a 20th century trend towards increased sediment accumulation rates and eutrophication, which was probably initiated by deforestation and agriculture. The most marked ecological shift occurs in the early 1960s, however. A subtle rise in diatom-inferred total phosphorus and an inferred reduction in submerged aquatic macrophyte cover accompany a major increase in sediment accumulation rate. An associated marked shift in ostracod stable isotope data, indicative of reduced seasonality and a change in hydrological input, suggests major impact from artificial water management practices, all of which appears to have culminated in the sustained loss of submerged macrophytes since 2000. The study indicates that it is vital to take both land-use and water management practices into account in devising restoration strategies. In a wider context, the results have important implications for the conservation of shallow karstic lakes, the functioning of which is still poorly understood.
Abstract:
Understanding and predicting changes in storm tracks over longer time scales is a challenging problem, particularly in the North Atlantic. This is due in part to the complex range of forcings (land–sea contrast, orography, sea surface temperatures, etc.) that combine to produce the structure of the storm track. The impact of land–sea contrast and midlatitude orography on the North Atlantic storm track is investigated through a hierarchy of GCM simulations using idealized and “semirealistic” boundary conditions in a high-resolution version of the Hadley Centre atmosphere model (HadAM3). This framework captures the large-scale essence of features such as the North and South American continents, Eurasia, and the Rocky Mountains, enabling the results to be applied more directly to realistic modeling situations than was possible with previous idealized studies. The physical processes by which the forcing mechanisms impact the large-scale flow and the midlatitude storm tracks are discussed. The characteristics of the North American continent are found to be very important in generating the structure of the North Atlantic storm track. In particular, the southwest–northeast tilt in the upper tropospheric jet produced by southward deflection of the westerly flow incident on the Rocky Mountains leads to enhanced storm development along an axis close to that of the continent’s eastern coastline. The approximately triangular shape of North America also enables a cold pool of air to develop in the northeast, intensifying the surface temperature contrast across the eastern coastline, consistent with further enhancements of baroclinicity and storm growth along the same axis.
Abstract:
This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than implicit rule abstraction per se.
Abstract:
The strength of the Antarctic Circumpolar Current (ACC) is believed to depend on the westerly wind stress blowing over the Southern Ocean, although the exact relationship between winds and circumpolar transport is yet to be determined. Here we show, based on theoretical arguments and a hierarchy of numerical modeling experiments, that the global pycnocline depth and the baroclinic ACC transport are set by an integral measure of the wind stress over the path of the ACC, taking into account its northward deflection. Our results assume that the mesoscale eddy diffusivity is independent of the mean flow; while the relationship between wind stress and ACC transport will be more complicated in an eddy-saturated regime, our conclusion that the ACC is driven by winds over the circumpolar streamlines is likely to be robust.
Abstract:
During the past 15 years, a number of initiatives have been undertaken at national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d’Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using ‘altimetry-only’ or ‘multi-data’ set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third one, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems.
This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
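The sequential schemes named in this abstract (optimal interpolation, SEEK and ensemble Kalman filters) all share the same analysis step at their core. As a minimal sketch only, in scalar form and with invented numbers rather than anything from the MERCATOR/SAM systems:

```python
# Minimal scalar sketch of the analysis step shared by sequential
# assimilation schemes (optimal interpolation, Kalman-type filters).
# All numbers are illustrative, not from the MERCATOR systems.

def kalman_analysis(x_f, p_f, y, r):
    """Combine a forecast x_f (error variance p_f) with an observation y
    (error variance r); return the analysis state and its variance."""
    k = p_f / (p_f + r)          # gain: weight given to the observation
    x_a = x_f + k * (y - x_f)    # analysis state
    p_a = (1.0 - k) * p_f        # reduced analysis error variance
    return x_a, p_a

# A hypothetical forecast SST of 15.0 degC (variance 4.0) corrected by an
# observation of 16.0 degC (variance 1.0) is pulled towards the observation:
x_a, p_a = kalman_analysis(x_f=15.0, p_f=4.0, y=16.0, r=1.0)
print(x_a, p_a)
```

The schemes in the SAM series differ chiefly in how the forecast error covariance (here the single number `p_f`) is represented and evolved, from static correlations in SAM-1 to reduced-rank error modes in the SEEK filter.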
Abstract:
Many different individuals, who have their own expertise and criteria for decision making, are involved in making decisions on construction projects. Decision-making processes are thus significantly affected by communication, in which the dynamic interplay of human intentions leads to unpredictable outcomes. To theorise decision-making processes that include communication, it is argued here that these processes resemble evolutionary dynamics in terms of both selection and mutation, which can be expressed by the replicator-mutator equation. To support this argument, a mathematical model of decision making has been constructed by analogy with evolutionary dynamics, with three variables: initial support rate, business hierarchy, and power of persuasion. In parallel, a survey of decision-making patterns in construction projects has been performed through a self-administered mail questionnaire sent to construction practitioners. Comparison between the numerical analysis of the mathematical model and the statistical analysis of the empirical data has shown the significant potential of the replicator-mutator equation as a tool for studying the dynamic properties of intentions in communication.
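The replicator-mutator dynamics referred to above can be sketched numerically. The parameters below (support rates, persuasion strengths, mutation matrix) are invented for illustration and are not taken from the paper's model or questionnaire data:

```python
# Toy replicator-mutator dynamics for competing opinions in a group:
# dx_i/dt = sum_j x_j * f_j * q[j][i] - phi * x_i, with phi the mean fitness.
# Parameters are invented for illustration only.

def replicator_mutator_step(x, f, q, dt):
    """One Euler step of the replicator-mutator equation."""
    phi = sum(fj * xj for fj, xj in zip(f, x))   # mean fitness
    return [
        xi + dt * (sum(x[j] * f[j] * q[j][i] for j in range(len(x)))
                   - phi * xi)
        for i, xi in enumerate(x)
    ]

x = [0.4, 0.3, 0.3]          # initial support rates for three proposals
f = [2.0, 1.0, 1.0]          # "power of persuasion" of each camp
q = [[0.9, 0.05, 0.05],      # q[j][i]: chance that a supporter of j
     [0.05, 0.9, 0.05],      #   ends up supporting i ("mutation",
     [0.05, 0.05, 0.9]]      #   e.g. imperfect communication)

for _ in range(200):
    x = replicator_mutator_step(x, f, q, dt=0.05)
print([round(v, 2) for v in x])  # the persuasive camp ends up dominant
```

Because each row of `q` sums to one, the support rates remain a probability distribution; the selection term rewards persuasive camps while the mutation term keeps minority opinions alive, which is the qualitative behaviour the paper compares against its survey data.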
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization, and in the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix, see Table 1. These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly.
Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines, i.e. these guest operating systems are aware that they are running on a virtual machine, and this can provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware, and its device drivers coordinate with the device drivers of the host operating system to reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed. It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation.
In order to support this hypothesis, first it is necessary to define exactly what is meant by a “class” of application, and secondly it will be necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for Cloud service providers to support Service Level Agreements (SLA), so that system utilisation can be audited.
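The measurement procedure described here (the same workload timed on bare metal, under each virtualization system, and with logging enabled) can be sketched as follows. The kernel below is a stand-in toy workload, not one of the Netlib benchmarks used in the paper:

```python
# Sketch of the overhead-measurement procedure: time one workload under
# different configurations and report relative overhead. The kernel is
# a stand-in, not a Netlib benchmark.

import time

def kernel(n=60):
    """Stand-in numerical workload: naive matrix-matrix multiplication."""
    a = [[(i + j) % 7 for j in range(n)] for i in range(n)]
    b = [[(i * j) % 5 for j in range(n)] for i in range(n)]
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def time_run(fn, repeats=3):
    """Best-of-N wall-clock time for one configuration."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def overhead_pct(bare_metal_s, virtualised_s):
    """Overhead of a virtualized run relative to bare metal, in percent."""
    return 100.0 * (virtualised_s - bare_metal_s) / bare_metal_s

bare = time_run(kernel)
# e.g. if the same kernel took 10% longer inside a VM:
print("overhead: %.1f%%" % overhead_pct(bare, 1.10 * bare))
```

In practice each configuration (bare metal, Xen, VMware, KVM, Citrix, with and without logging) would be timed separately on the same hardware, and the percentages compared per application class.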
Abstract:
R. H. Whittaker's idea that plant diversity can be divided into a hierarchy of spatial components, from alpha at the within-habitat scale, through beta for the turnover of species between habitats, to gamma along regional gradients, implies the underlying existence of alpha, beta, and gamma niches. We explore the hypothesis that the evolution of alpha, beta, and gamma niches is also hierarchical, with traits that define the alpha niche being labile, while those defining beta and gamma niches are conservative. At the alpha level we find support for the hypothesis in the lack of a close significant phylogenetic relationship between meadow species that have similar alpha niches. In a second test, alpha niche overlap based on a variety of traits is compared between congeners and noncongeners in several communities; here, too, there is no evidence of a correlation between alpha niche and phylogeny. To test whether beta and gamma niches evolve conservatively, we reconstructed the evolution of relevant traits on evolutionary trees for 14 different clades. Tests against null models revealed a number of instances, including some in island radiations, in which habitat (beta niche) and elevational maximum (an aspect of the gamma niche) showed evolutionary conservatism.
Abstract:
Grid workflow authoring tools are typically specific to particular workflow engines built into Grid middleware, or are application specific and are designed to interact with specific software implementations. g-Eclipse is a middleware independent Grid workbench that aims to provide a unified abstraction of the Grid and includes a Grid workflow builder to allow users to author and deploy workflows to the Grid. This paper describes the g-Eclipse Workflow Builder and its implementations for two Grid middlewares, gLite and GRIA, and a case study utilizing the Workflow Builder in a Grid user's scientific workflow deployment.
Abstract:
Crop wild relatives are an important socio-economic resource that is currently being eroded or even extinguished through careless human activities. If the Conference of the Parties (COP) to the CBD 2010 Biodiversity Target of achieving a significant reduction in the current rate of loss is to be achieved, we must first define what crop wild relatives are and how their conservation might be prioritised. A definition of a crop wild relative is proposed and illustrated in the light of previous Gene Pool concept theory. Where crossing and genetic diversity information is unavailable, the Taxon Group concept is introduced to assist recognition of the degree of crop wild relative relatedness by using the existing taxonomic hierarchy.
Abstract:
To study the potential involvement of inhibin A (inhA), inhibin B (inhB), activin A (actA) and follistatin (FS) in the recruitment of follicles into the preovulatory hierarchy, growing follicles (ranging from 1 mm to the largest, designated F1) and the three most recent postovulatory follicles (POFs) were recovered from laying hens (n=11). With the exception of <4 mm follicles and POFs, follicle walls were dissected into separate granulosa (G) and theca (T) layers before extraction. Contents of inhA, inhB, actA and FS in tissue extracts were assayed using specific two-site ELISAs and results are expressed per mg DNA. InhB content of both G and T followed a similar developmental pattern, although the content was >4-fold higher in G than in T at all stages. InhB content was very low in follicles <4 mm but increased ~50-fold (P<0.0001) to peak in 7-9 mm follicles, before falling steadily as follicles entered and moved up the follicular hierarchy (40-fold; 8 mm vs F2). In stark contrast, inhA remained very low in prehierarchical follicles (≤9 mm) but then increased progressively as follicles moved up the preovulatory hierarchy to peak in F1 (~100-fold increase; P<0.0001). In F1, >97% of inhA was confined to the G layer, whereas in 5-9 mm follicles inhA was only detected in the T layer. Both inhA and inhB contents of POFs were significantly reduced compared with F1. Follicular actA was mainly confined to the T layer, although detectable levels were present in G from 9 mm; actA was low between 1 and 9 mm but increased sharply as follicles entered the preovulatory hierarchy (~6-fold higher in F4; P<0.0001); levels then fell ~2-fold as the follicle progressed to F1. Like actA, FS predominated in the T layer, although significant amounts were also present in the G of prehierarchical follicles (4-9 mm), in contrast to actA, which was absent from the G. The FS content of T rose ~3-fold from 6 mm to a plateau which was sustained until F1.
In contrast, the FS content of G was greatest in prehierarchical follicles and fell ~4-fold in F4-F1 follicles. ActA and FS contents of POFs were reduced compared with F1. In vitro studies on follicle wall explants confirmed the striking divergence in the secretion of inhA and inhB during follicle development. These findings of marked stage-dependent differences in the expression of inhA, inhB, actA and FS proteins imply a significant functional role for these peptides in the recruitment and ordered progression of follicles within the avian ovary.
Abstract:
Time-resolved kinetic studies of the reaction of silylene, SiH2, generated by laser flash photolysis of both silacyclopent-3-ene and phenylsilane, have been carried out to obtain second-order rate constants for its reaction with CH3Cl. The reaction was studied in the gas phase at six temperatures in the range 294-606 K. The second-order rate constants gave a curved Arrhenius plot with a minimum value at T ≈ 370 K. The reaction showed no pressure dependence in the presence of up to 100 Torr SF6. The rate constants, however, showed a weak dependence on laser pulse energy. This suggests an interpretation requiring more than one contributing reaction pathway to SiH2 removal. Apart from a direct reaction of SiH2 with CH3Cl, reaction of SiH2 with CH3 (formed by photodissociation of CH3Cl) seems probable, with contributions of up to 30% to the rates. Ab initio calculations (G3 level) show that the initial step of reaction of SiH2 with CH3Cl is formation of a zwitterionic complex (ylid), but a high-energy barrier rules out the subsequent insertion step. On the other hand, the Cl-abstraction reaction leading to CH3 + ClSiH2 has a low barrier, and therefore, this seems the most likely candidate for the main reaction pathway of SiH2 with CH3Cl. RRKM calculations on the abstraction pathway show that this process alone cannot account for the observed temperature dependence of the rate constants. The data are discussed in light of studies of other silylene reactions with haloalkanes.
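The reasoning behind "a curved Arrhenius plot implies more than one pathway" can be illustrated with a toy two-channel model. The pre-exponential factors and activation energies below are invented for illustration and are not the measured SiH2 + CH3Cl values:

```python
# Toy illustration: a single Arrhenius channel is monotonic in T, but the
# sum of a channel with a negative effective activation energy and one
# with a positive barrier has a minimum at an intermediate temperature.
# Parameters are invented, not the measured SiH2 + CH3Cl values.

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(a, ea, temp):
    """Single-channel rate constant k(T) = A * exp(-Ea / (R*T))."""
    return a * math.exp(-ea / (R * temp))

def two_channel(temp):
    """Sum of a low-T-dominant channel (negative effective Ea, e.g.
    complex formation) and a high-T-dominant channel (positive barrier,
    e.g. abstraction)."""
    return (arrhenius(1e-12, -6000.0, temp)
            + arrhenius(6e-9, 25000.0, temp))

# With these parameters the total rate constant at an intermediate
# temperature lies below its values at both ends of the range:
k_low, k_mid, k_high = (two_channel(t) for t in (294.0, 370.0, 606.0))
```

A single Arrhenius expression gives a straight line in ln k versus 1/T; the minimum produced by the two-channel sum mirrors, qualitatively, the shape reported in the abstract.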
Abstract:
Amyloid fibrils are typically rigid, unbranched structures with diameters of ~10 nm and lengths up to several micrometres, and are associated with more than 20 diseases including Alzheimer's disease and type II diabetes. Insulin is a small, predominantly alpha-helical protein consisting of 51 residues in two disulfide-linked polypeptide chains that readily assembles into amyloid fibrils under conditions of low pH and elevated temperature. We demonstrate here that both the A-chain and the B-chain of insulin are capable of forming amyloid fibrils in isolation under similar conditions, with fibrillar morphologies that differ from those composed of intact insulin. Both the A-chain and B-chain fibrils were found to be able to cross-seed the fibrillization of the parent protein, although these reactions were substantially less efficient than self-seeding with fibrils composed of full-length insulin. In both cases, the cross-seeded fibrils were morphologically distinct from the seeding material, but shared common characteristics with typical insulin fibrils, including a very similar helical repeat. The broader distribution of heights of the cross-seeded fibrils compared to typical insulin fibrils, however, indicates that their underlying protofilament hierarchy may be subtly different. In addition, and remarkably in view of this seeding behavior, the soluble forms of the A-chain and B-chain peptides were found to be capable of inhibiting insulin fibril formation. Studies using mass spectrometry suggest that this behavior might be attributable to complex formation between insulin and the A-chain and B-chain peptides. The finding that the same chemical form of a polypeptide chain in different physical states can either stimulate or inhibit the conversion of a protein into amyloid fibrils sheds new light on the mechanisms underlying fibril formation, fibril strain propagation and amyloid disease initiation and progression.
Abstract:
Irradiation of argon matrices at 12 K containing hydrogen peroxide and tetrachloroethene using the output from a medium-pressure mercury lamp gives rise to the carbonyl compound trichloroacetyl chloride (CCl3CClO). Similarly trichloroethene gives dichloroacetyl chloride (CCl2HCClO), predominantly in the gauche form, under the same conditions. It appears that the reaction is initiated by homolysis of the O-O bond of H2O2 to give OH radicals, one of which adds to the double bond of an alkene molecule. The reaction then proceeds by abstraction of the H atom of the hydroxyl group and Cl-atom migration. This mechanism has been explored by the use of DFT calculations to back up the experimental findings. The mechanism is analogous to that shown by the simple hydrocarbon alkenes.