74 results for Runs of homozygosity


Relevance: 30.00%

Publisher:

Abstract:

In this paper a cell-by-cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain, with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
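To make the hierarchical refinement idea concrete, the following minimal sketch (illustrative Python, not the paper's code) shows a quadrilateral cell that can be subdivided isotropically or anisotropically, with the solver visiting only the leaf cells of the hierarchy; handling of hanging nodes at resolution transitions is assumed to happen elsewhere.

```python
# Minimal sketch of a hierarchical quadrilateral cell supporting isotropic and
# anisotropic subdivision; the solver works only on leaf cells (the finest
# local level of resolution).

class QuadCell:
    def __init__(self, x0, y0, x1, y1, level=0):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.level = level
        self.children = []          # empty => this cell is on the finest local level

    def refine(self, mode="iso"):
        """Subdivide isotropically ('iso') or anisotropically ('x' or 'y')."""
        xm, ym = 0.5 * (self.x0 + self.x1), 0.5 * (self.y0 + self.y1)
        lvl = self.level + 1
        if mode == "iso":           # four children
            self.children = [QuadCell(self.x0, self.y0, xm, ym, lvl),
                             QuadCell(xm, self.y0, self.x1, ym, lvl),
                             QuadCell(self.x0, ym, xm, self.y1, lvl),
                             QuadCell(xm, ym, self.x1, self.y1, lvl)]
        elif mode == "x":           # two children, split normal to x
            self.children = [QuadCell(self.x0, self.y0, xm, self.y1, lvl),
                             QuadCell(xm, self.y0, self.x1, self.y1, lvl)]
        elif mode == "y":           # two children, split normal to y
            self.children = [QuadCell(self.x0, self.y0, self.x1, ym, lvl),
                             QuadCell(self.x0, ym, self.x1, self.y1, lvl)]

    def leaves(self):
        """Cells the solver actually works on."""
        if not self.children:
            return [self]
        out = []
        for c in self.children:
            out.extend(c.leaves())
        return out

root = QuadCell(0.0, 0.0, 1.0, 1.0)
root.refine("iso")
root.children[0].refine("x")        # anisotropic refinement of one child
print(len(root.leaves()))           # -> 5 leaf cells
```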

Relevance: 30.00%

Publisher:

Abstract:

We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, nonetheless SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS community, simulating the transitions between active and suppressed periods of tropical convection.
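A toy illustration of this testing strategy (a scalar stand-in, not any real single-column model) is sketched below: an ensemble of runs with stochastically perturbed physics tendencies is generated from a single initial state and its spread is compared with that of a deterministic initial-condition ensemble.

```python
# Toy sketch only: compare the spread of a "stochastic physics" ensemble with
# that of a deterministic initial-condition ensemble for a scalar column model.
import numpy as np

rng = np.random.default_rng(0)
nsteps, nmembers, dt = 200, 20, 0.1

def run_column(x0, stochastic=False):
    """Integrate a toy tendency dx/dt = -x + forcing, optionally with a
    multiplicative stochastic perturbation of the physics tendency."""
    x = np.empty(nsteps)
    x[0] = x0
    for t in range(1, nsteps):
        tend = -x[t - 1] + 1.0
        if stochastic:
            tend *= 1.0 + 0.3 * rng.standard_normal()   # perturbed tendency
        x[t] = x[t - 1] + dt * tend
    return x

# Stochastic-physics ensemble: identical initial state, perturbed tendencies.
stoch = np.array([run_column(0.0, stochastic=True) for _ in range(nmembers)])
# Initial-condition ensemble: perturbed initial states, deterministic physics.
ic = np.array([run_column(0.1 * rng.standard_normal()) for _ in range(nmembers)])

print("stochastic-physics spread at final step:", stoch[:, -1].std())
print("initial-condition spread at final step: ", ic[:, -1].std())
```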

Relevance: 30.00%

Publisher:

Abstract:

G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command line program that can easily be incorporated into existing scientific work-flow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
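The client-side pattern described above can be sketched as follows; the URLs and JSON fields are purely illustrative placeholders and are not the actual G-Rex API.

```python
# Hedged sketch of the pattern: poll a RESTful job resource and stream output
# back while the run is still in progress, so data need not accumulate remotely.
import time
import requests

BASE = "http://cluster.example.org/grex/jobs/42"   # hypothetical job resource

def stream_outputs(base_url, poll_seconds=30):
    seen = set()
    while True:
        status = requests.get(f"{base_url}/status").json()   # e.g. {"state": "RUNNING", "outputs": [...]}
        for name in status.get("outputs", []):
            if name not in seen:                              # transfer each new output as it appears
                with requests.get(f"{base_url}/outputs/{name}", stream=True) as r:
                    with open(name, "wb") as f:
                        for chunk in r.iter_content(chunk_size=1 << 20):
                            f.write(chunk)
                seen.add(name)
        if status.get("state") in ("FINISHED", "FAILED"):
            break
        time.sleep(poll_seconds)

stream_outputs(BASE)
```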

Relevance: 30.00%

Publisher:

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LiDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
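The two performance measures being contrasted can be sketched as follows (an illustrative implementation, not the one used in the paper): an areal wet/dry overlap score and an r.m.s. difference of waterline elevations.

```python
# Illustrative sketch of the two measures: areal overlap of flood extents and
# r.m.s. height difference at corresponding waterline points.
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Overlap of observed and modelled flood extents (boolean wet/dry rasters),
    scored here as intersection over union."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return inter / union

def waterline_rmse(observed_z, modelled_z):
    """r.m.s. height difference (metres) at corresponding points along the
    observed and modelled waterlines."""
    d = np.asarray(observed_z) - np.asarray(modelled_z)
    return np.sqrt(np.mean(d ** 2))

obs = np.zeros((50, 50), bool); obs[10:30, 10:40] = True    # synthetic observed extent
mod = np.zeros((50, 50), bool); mod[12:32, 10:40] = True    # synthetic modelled extent
print(areal_fit(obs, mod))                                  # ~0.82
print(waterline_rmse([3.1, 3.0, 2.9], [3.3, 2.9, 2.8]))     # r.m.s. in metres
```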

Relevance: 30.00%

Publisher:

Abstract:

We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.

Relevance: 30.00%

Publisher:

Abstract:

Pollutant plumes with enhanced concentrations of trace gases and aerosols were observed over the southern coast of West Africa during August 2006 as part of the AMMA wet season field campaign. Plumes were observed in both the mid and upper troposphere. In this study we examined the origin of these pollutant plumes, and their potential to photochemically produce ozone (O3) downwind over the Atlantic Ocean. Their possible contribution to the Atlantic O3 maximum is also discussed. Runs using the BOLAM mesoscale model including biomass burning carbon monoxide (CO) tracers were used to confirm an origin from central African biomass burning fires. The plumes measured in the mid troposphere (MT) had significantly higher pollutant concentrations over West Africa compared to the upper tropospheric (UT) plume. The mesoscale model reproduces these differences and the two different pathways for the plumes at different altitudes: transport to the north-east of the fire region, moist convective uplift and transport to West Africa for the upper tropospheric plume, versus north-west transport over the Gulf of Guinea for the mid-tropospheric plume. Lower concentrations in the upper troposphere are mainly due to enhanced mixing during upward transport. Model simulations suggest that the MT and UT plumes are 16 and 14 days old respectively when measured over West Africa. The ratio of tracer concentrations at 600 hPa and 250 hPa was estimated for 14–15 August in the region of the observed plumes and compares well with the same ratio derived from observed carbon dioxide (CO2) enhancements in both plumes. It is estimated that, for the period 1–15 August, the ratio of biomass burning (BB) tracer concentrations transported in the UT to those transported in the MT is 0.6 over West Africa and the equatorial South Atlantic. Runs using a photochemical trajectory model, CiTTyCAT, initialized with the observations, were used to estimate in-situ net photochemical O3 production rates in these plumes during transport downwind of West Africa. The mid-tropospheric plume spreads over altitudes between 1.5 and 6 km over the Atlantic Ocean. Even though the plume was old, it was still very photochemically active above 3 km (mean net O3 production rates over 10 days of 2.6 ppbv/day and up to 7 ppbv/day during the first days), especially during the first few days of transport westward. It is also shown that the impact of high aerosol loads in the MT plume on photolysis rates serves to delay the peak in modelled O3 concentrations. These results suggest that a significant fraction of the enhanced O3 in the mid troposphere over the Atlantic comes from BB sources during the summer monsoon period. According to the simulated occurrence of such transport, BB may be the main source of O3 enhancement in the equatorial South Atlantic MT, at least in August 2006. The upper tropospheric plume was also still photochemically active, although mean net O3 production rates were lower (1.3 ppbv/day). The results suggest that, whilst the transport of BB pollutants to the UT is variable (as shown by the mesoscale model simulations), pollution from biomass burning can make an important contribution to photochemical production of O3, in addition to other important sources such as nitrogen oxides (NOx) from lightning.

Relevance: 30.00%

Publisher:

Abstract:

A time series of the observed transport through an array of moorings across the Mozambique Channel is compared with that of six model runs with ocean general circulation models. In the observations, the seasonal cycle cannot be distinguished from red noise, while this cycle is dominant in the transport of the numerical models. It is found, however, that the seasonal cycles of the observations and numerical models are similar in strength and phase. These cycles have an amplitude of 5 Sv and a maximum in September, and can be explained by the yearly variation of the wind forcing. The seasonal cycle in the models is dominant because the spectral density at other frequencies is underrepresented. The main deviations from the observations are found at depths shallower than 1500 m and in the 5/y–6/y frequency range. Nevertheless, the structure of eddies in the models is close to the observed eddy structure. The discrepancy is found to be related to the formation mechanism and the formation position of the eddies. In the observations, eddies are frequently formed from an overshooting current near the mooring section, as proposed by Ridderinkhof and de Ruijter (2003) and Harlander et al. (2009). This causes an alternation of events at the mooring section, varying between a strong southward current and the formation and passing of an eddy. This results in a large variation of transport in the 5/y–6/y frequency range. In the models, the eddies are formed further north and propagate through the section. No alternation similar to that in the observations is seen, resulting in a more constant transport.
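The kind of seasonal-cycle comparison described above can be illustrated with a short sketch (synthetic data, not the mooring observations): fit an annual harmonic to a monthly transport series and read off the amplitude and month of maximum.

```python
# Illustrative sketch: estimate the amplitude and phase of the annual cycle in
# a (synthetic) monthly transport series by least-squares harmonic fitting.
import numpy as np

months = np.arange(120)                                              # 10 years of monthly means
true = -17.0 + 5.0 * np.cos(2 * np.pi * (months - 8) / 12)           # 5 Sv cycle, maximum in month index 8
transport = true + np.random.default_rng(1).normal(0.0, 3.0, months.size)  # noise as a stand-in for other variability

# Fit a mean plus an annual cosine/sine pair
X = np.column_stack([np.ones_like(months, dtype=float),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
mean, a, b = np.linalg.lstsq(X, transport, rcond=None)[0]
amplitude = np.hypot(a, b)
phase_month = (np.arctan2(b, a) * 12 / (2 * np.pi)) % 12
print(f"seasonal amplitude ~ {amplitude:.1f} Sv, maximum near month index {phase_month:.1f}")
```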

Relevance: 30.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge about the various overheads, users cannot understand whether using virtualization is a good idea or not. In this paper we are very interested in assessing the overheads of running various benchmarks on bare metal, as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as looking at the overheads of turning on monitoring and logging. The knowledge from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These different virtualization systems are used extensively by cloud users. We are using various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally we turn on monitoring and logging. The latter is important as users are interested in Service Level Agreements (SLAs) used by the Cloud providers, and the use of logging is a means of assessing the services bought and used from commercial providers. In this paper we assess the virtualization systems on three different systems. We use the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), which are all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by the privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines, i.e. these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization, where some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware of the fact that it is running on virtualized hardware and not on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system and reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been revealed [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, first it is necessary to define exactly what is meant by a "class" of application, and secondly it will be necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for Cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
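The measurement approach can be sketched as a small timing harness (illustrative only; the stand-in workload below is not one of the Netlib benchmarks used in the paper) that is run once on bare metal and once inside the para-virtualized guest, with monitoring and logging toggled, so the overheads can be compared.

```python
# Hedged sketch of a benchmark timing harness: time the same workload several
# times and report mean wall-clock time; run the script on each configuration
# (bare metal, para-virtualized guest, logging on/off) and compare the results.
import statistics
import subprocess
import sys
import time

WORKLOAD = [sys.executable, "-c", "sum(i * i for i in range(10**7))"]   # stand-in benchmark

def time_workload(cmd, repeats=5):
    """Run the benchmark command repeatedly and return mean and stdev wall-clock time."""
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - t0)
    return statistics.mean(timings), statistics.stdev(timings)

mean, sd = time_workload(WORKLOAD)
print(f"mean wall-clock time: {mean:.3f} s (stdev {sd:.3f} s)")
```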

Relevance: 30.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational given the state of knowledge regarding the consequences of overgrazing. The phenomenon however appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first and from their ranking two, ‘asset value of cattle (representing cattle ownership)' and ‘present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that benefits from holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware or uncaring about overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple ‘no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil.
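The compromise-programming idea can be illustrated with a toy sketch (the payoff functions below are invented for illustration and are not the paper's model): choose the stocking rate that minimises the weighted distance to the ideal point formed by the best achievable value of each objective.

```python
# Toy sketch of bi-criteria compromise programming: two objectives (asset value
# of cattle and present value of returns) are normalised and the stocking rate
# minimising the weighted distance to the ideal point is selected.
import numpy as np

stocking = np.linspace(0.5, 3.0, 251)                  # hypothetical stocking-rate range
asset_value = 1000.0 * stocking                        # objective 1: cattle ownership grows with herd size
returns = 800.0 * stocking - 250.0 * stocking ** 2     # objective 2: returns fall as overgrazing degrades pasture

ideal = np.array([asset_value.max(), returns.max()])
nadir = np.array([asset_value.min(), returns.min()])

def compromise(weights=(0.5, 0.5), p=2):
    """Stocking rate minimising the weighted L_p distance to the ideal point."""
    z = np.column_stack([asset_value, returns])
    norm = (ideal - z) / (ideal - nadir)               # 0 = ideal, 1 = worst
    d = (weights[0] * norm[:, 0] ** p + weights[1] * norm[:, 1] ** p) ** (1 / p)
    return stocking[np.argmin(d)]

print(compromise((0.5, 0.5)))   # balanced weights: stocking above the returns-maximising rate
print(compromise((0.8, 0.2)))   # weighting cattle ownership more gives a higher stocking rate
```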

Relevance: 30.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational given the state of knowledge regarding the consequences of overgrazing. The phenomenon however appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first and from their ranking two, 'asset value of cattle (representing cattle ownership)' and 'present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that benefits from holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware or uncaring about overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple 'no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

This paper reviews Bayesian procedures for phase 1 dose-escalation studies and compares different dose schedules and cohort sizes. The methodology described is motivated by the situation of phase 1 dose-escalation studies in oncology, that is, a single dose administered to each patient, with a single binary response ("toxicity" or "no toxicity") observed. It is likely that a wider range of applications of the methodology is possible. In this paper, results from 10000-fold simulation runs conducted using the software package Bayesian ADEPT are presented. Four designs were compared under six scenarios. The simulation results indicate that there are slight advantages of having more dose levels and smaller cohort sizes.
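A simplified illustration of this kind of simulation study (a generic model-based escalation rule, not Bayesian ADEPT itself) is sketched below: many trials are simulated under a known toxicity curve and cohort sizes are compared by how often the recommended dose matches the true maximum tolerated dose.

```python
# Simplified illustration of simulating a Bayesian dose-escalation design and
# comparing cohort sizes; the skeleton, prior and escalation rule are generic
# choices made for the example.
import numpy as np

rng = np.random.default_rng(0)
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
true_tox = np.array([0.05, 0.10, 0.25, 0.45, 0.70])   # invented "true" toxicity probabilities
target = 0.30
true_mtd = int(np.argmin(np.abs(true_tox - target)))  # dose index 2

# One-parameter working model: p(tox | dose) = skeleton ** exp(a), grid posterior on a
skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])
a_grid = np.linspace(-3, 3, 121)
prior = np.exp(-0.5 * (a_grid / 1.34) ** 2)
prior /= prior.sum()

def run_trial(cohort_size, n_patients=24):
    post = prior.copy()
    dose = 0
    treated = 0
    while treated < n_patients:
        tox = rng.random(cohort_size) < true_tox[dose]        # binary toxicity outcomes
        p_model = skeleton[dose] ** np.exp(a_grid)
        lik = p_model ** tox.sum() * (1 - p_model) ** (cohort_size - tox.sum())
        post = post * lik
        post /= post.sum()
        p_hat = (skeleton[None, :] ** np.exp(a_grid)[:, None] * post[:, None]).sum(axis=0)
        dose = int(np.argmin(np.abs(p_hat - target)))         # next cohort gets dose closest to target
        treated += cohort_size
    return dose                                               # recommended dose after all cohorts

for cohort in (1, 3, 6):
    hits = sum(run_trial(cohort) == true_mtd for _ in range(1000))
    print(f"cohort size {cohort}: correct MTD selected in {hits / 10:.1f}% of simulated trials")
```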

Relevance: 30.00%

Publisher:

Abstract:

From a statistician's standpoint, the interesting kind of isomorphism for fractional factorial designs depends on the statistical application. Combinatorially isomorphic fractional factorial designs may have different statistical properties when factors are quantitative. This idea is illustrated by using Latin squares of order 3 to obtain fractions of the 3^3 factorial design in 18 runs.
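The construction can be illustrated as follows (the particular squares and coding are chosen for the example, not taken from the paper): each 3×3 Latin square selects a 9-run fraction of the 3^3 factorial, and stacking two squares gives an 18-run design whose linear and quadratic properties for quantitative factors can then be examined.

```python
# Illustration: build an 18-run fraction of the 3^3 factorial from two Latin
# squares of order 3 (row = factor A, column = B, symbol = C), then code the
# levels quantitatively and form linear and quadratic contrast columns.
import numpy as np
from itertools import product

def fraction_from_square(square):
    """Runs of the 3^3 factorial selected by a 3x3 Latin square."""
    return np.array([[a, b, square[a][b]] for a, b in product(range(3), repeat=2)])

L1 = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]       # cyclic Latin square of order 3
L2 = [[0, 2, 1], [2, 1, 0], [1, 0, 2]]       # another Latin square of order 3
design18 = np.vstack([fraction_from_square(L1), fraction_from_square(L2)])
print(design18.shape)                         # (18, 3): 18 runs, 3 factors at levels 0/1/2

# Quantitative coding: levels -1, 0, +1; quadratic contrast (1, -2, 1)
x = design18 - 1
model = np.column_stack([x, 3 * x ** 2 - 2])  # columns [A_l, B_l, C_l, A_q, B_q, C_q]
print(np.linalg.matrix_rank(model))           # rank of the linear + quadratic model matrix
```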

Relevance: 30.00%

Publisher:

Abstract:

The ultrastructure of a new microsporidian species Microgemmia vivaresi n. sp. causing liver cell xenoma formation in sea scorpions, Taurulus bubalis, is described. Stages of merogony, sporogony, and sporogenesis are mixed in the central cytoplasm of developing xenomas. All stages have unpaired nuclei. Uninucleate and multinucleate meronts lie within vacuoles formed from host endoplasmic reticulum and divide by binary or multiple fission. Sporonts, no longer in vacuoles, deposit plaques of surface coat on the plasma membrane that cause the surface to pucker. Division occurs at the puckered stage into sporoblast mother cells, on which plaques join up to complete the surface coat. A final binary fission gives rise to sporoblasts. A dense globule, thought to be involved in polar tube synthesis, is gradually dispersed during spore maturation. Spores are broadly ovoid, have a large posterior vacuole, and measure 3.6 µm x 2.1 µm (fresh). The polar tube has a short wide anterior section that constricts abruptly, then runs posteriad to coil about eight times around the posterior vacuole with granular contents. The polaroplast has up to 40 membranes arranged in pairs, mostly attached to the wide region of the polar tube and directed posteriorly around a cytoplasm of coarsely granular appearance. The species is placed alongside the type species Microgemmia hepaticus Ralphs and Matthews 1986 within the family Tetramicridae, which is transferred from the class Dihaplophasea to the class Haplophasea, as there is no evidence for the occurrence of a diplokaryotic phase.

Relevance: 30.00%

Publisher:

Abstract:

While only about 100–200 species are used intensively in commercial floriculture (e.g. carnations, chrysanthemums, gerbera, narcissus, orchids, tulips, lilies, roses, pansies and violas, saintpaulias, etc.) and 400–500 as house plants, several thousand species of herbs, shrubs and trees are traded commercially by nurseries and garden centres as ornamentals or amenity species. Most of these have been introduced from the wild with little selection or breeding. In Europe alone, 12 000 species are found in cultivation in general garden collections (i.e. excluding specialist collections and botanic gardens). In addition, specialist collections (often very large) of many other species and/or cultivars of groups such as orchids, bromeliads, cacti and succulents, primulas, rhododendrons, conifers and cycads are maintained in several centres such as botanic gardens and specialist nurseries, as are 'national collections' of cultivated species and cultivars in some countries. Specialist growers, both professional and amateur, also maintain collections of plants for cultivation, including, increasingly, native plants. The trade in ornamental and amenity horticulture cannot be fully estimated but runs into many billions of dollars annually, and there is considerable potential for further development and the introduction of many new species into the trade. Despite this, most of the collections are ad hoc and no co-ordinated efforts have been made to ensure that adequate germplasm samples of these species are maintained for conservation purposes, and few of them are represented at all adequately in seed banks. Few countries have paid much attention to the germplasm needs of ornamentals; the Ornamental Plant Germplasm Center, in conjunction with the USDA National Plant Germplasm System at The Ohio State University, is an exception. Generally there is a serious gap in national and international germplasm strategies, which have tended to focus primarily on food plants and some forage and industrial crops. Adequate arrangements need to be put in place to ensure the long- and medium-term conservation of representative samples of the genetic diversity of ornamental species. The problems of achieving this will be discussed. In addition, a policy for the conservation of old cultivars or 'heritage' varieties of ornamentals needs to be formulated. The considerable potential for introduction of new ornamental species needs to be assessed. Consideration needs to be given to setting up a co-ordinating structure with overall responsibility for the conservation of germplasm of ornamental and amenity plants.

Relevance: 30.00%

Publisher:

Abstract:

New Mo(II) diimine derivatives of [Mo(η3-allyl)X(CO)2(CH3CN)2] (allyl = C3H5 and C5H5O; X = Cl, Br) were prepared, and [Mo(η3-C3H5)Cl(CO)2(BIAN)] (BIAN = 1,4-(4-chloro)phenyl-2,3-naphthalene-diazabutadiene) (7) was structurally characterized by single-crystal X-ray diffraction. This complex adopted an equatorial-axial arrangement of the bidentate ligand (axial isomer), in contrast with the precursors, found as the equatorial isomer in the solid state and fluxional in solution. The new complexes of the type [Mo(η3-allyl)X(CO)2(N-N)] (N-N is a bidentate chelating dinitrogen ligand) were tested for the catalytic epoxidation of cyclooctene using tert-butyl hydroperoxide as oxidant. All catalytic systems were 100% selective toward epoxide formation. While their turnover frequencies paralleled those of related Mo(II) carbonyl compounds or Mo(VI) compounds bearing similar N-donor ligands, they exhibited similar olefin conversions in consecutive catalytic runs. The acetonitrile precursors were generally more active than the diimine complexes, and the chloro derivatives more active than the bromo ones. Combined vibrational and NMR spectroscopy and computational (DFT) studies were used to investigate the nature of the molybdenum species formed in the catalytic system with [Mo(η3-C3H5)Cl(CO)2{1,4-(2,6-dimethyl)phenyl-2,3-dimethyldiazabutadiene}] (4) and to propose that the resulting species may be dimeric, bearing oxide bridges.