91 results for computational efficiency


Relevance: 30.00%

Abstract:

This study examines the numerical accuracy, computational cost and memory requirements of self-consistent field theory (SCFT) calculations when the diffusion equations are solved with various pseudo-spectral methods and the mean-field equations are iterated with Anderson mixing. The different methods are tested on the triply-periodic gyroid and spherical phases of a diblock-copolymer melt over a range of intermediate segregations. Anderson mixing is found to be somewhat less effective with the pseudo-spectral methods than with the full-spectral method, but it nevertheless functions admirably, provided that a large number of histories is used. Of the different pseudo-spectral algorithms, the fourth-order one of Ranjan, Qin and Morse performs best, although not quite as efficiently as the full-spectral method.
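To make the mixing scheme concrete, the following is a minimal sketch of Anderson mixing for a generic fixed-point problem x = g(x), written in Python with NumPy. It is not the SCFT implementation studied above; the function name, the default history depth m and the stopping criterion are all illustrative.

```python
import numpy as np

def anderson_mix(g, x0, m=10, tol=1e-8, max_iter=500):
    """Anderson mixing for the fixed-point problem x = g(x).

    Keeps up to m histories and combines them through a small
    least-squares problem on residual differences.
    """
    x = np.asarray(x0, dtype=float)
    G, F = [], []                      # histories of g(x) and residuals
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                     # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:             # truncate to m histories
            G.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                     # plain Picard step to start
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma        # Anderson-mixed update
    return x
```

With m = 1 this degenerates to a secant-like update; the abstract's observation that a large number of histories is needed corresponds to running with a large m.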

Relevance: 30.00%

Abstract:

The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a Pseudo-Genetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient for less complex problems. However, the main contribution of this work is that the proposed efficiency ratio provides a neutral strategy to compare optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
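The abstract does not reproduce the paper's formula for E, so the sketch below is only a hypothetical illustration of an efficiency rate that rewards solution quality and penalizes computational effort; the function name, the formula and the example numbers are all assumptions, not the paper's definition.

```python
def efficiency_rate(best_known_cost, runs, budget):
    """Hypothetical efficiency rate, NOT the E defined in the paper.

    runs:   list of (final_cost, evaluations_used) pairs, one per run.
    budget: evaluation budget allowed per run.
    Rewards runs that get close to the best-known cost cheaply.
    """
    score = 0.0
    for cost, evals in runs:
        quality = best_known_cost / cost      # 1.0 at the best-known optimum
        effort = evals / budget               # fraction of the budget spent
        score += quality * (1.0 - effort)
    return score / len(runs)

# Illustrative values only: three runs on a benchmark network.
runs = [(6.081e6, 40_000), (6.195e6, 55_000), (6.081e6, 90_000)]
print(efficiency_rate(6.081e6, runs, budget=100_000))
```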

Relevance: 20.00%

Abstract:

This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
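The three-stage structure described above can be sketched as a per-frame loop. Everything here (function names, data shapes) is hypothetical scaffolding intended only to show how the cheap 1-D stage gates the expensive 2-D verification stage; it is not the authors' code.

```python
def track_frame(frame, hypotheses, make_1d_hypotheses, track, verify_2d):
    """One pass of the hypothesise-track-verify pipeline (illustrative).

    make_1d_hypotheses: cheap pose/model proposals from 1-D templates.
    track:              propagates existing hypotheses to this frame.
    verify_2d:          expensive 2-D template check, run only on the
                        small set of surviving hypotheses.
    """
    proposals = make_1d_hypotheses(frame)           # stage (i): cheap, per-frame
    hypotheses = track(hypotheses, proposals)       # stage (ii): association
    confirmed = [h for h in hypotheses if verify_2d(frame, h)]  # stage (iii): costly
    return confirmed
```

The efficiency balance the abstract mentions corresponds to keeping the set passed to verify_2d small enough that the costly stage runs only a few times per frame.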

Relevance: 20.00%

Abstract:

At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.

Relevance: 20.00%

Abstract:

Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, one which takes advantage of computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.

Relevance: 20.00%

Abstract:

There exist two central measures of turbulent mixing in turbulent stratified fluids that are both caused by molecular diffusion: 1) the dissipation rate D(APE) of available potential energy APE; 2) the turbulent rate of change Wr,turbulent of the background gravitational potential energy GPEr. So far, these two quantities have often been regarded as the same energy conversion, namely the irreversible conversion of APE into GPEr, owing to the well-known exact equality D(APE) = Wr,turbulent for a Boussinesq fluid with a linear equation of state. Recently, however, Tailleux (2009) pointed out that the above equality no longer holds for a thermally stratified compressible fluid, with the ratio ξ = Wr,turbulent/D(APE) being generally lower than unity and sometimes even negative for water or seawater, and argued that D(APE) and Wr,turbulent actually represent two distinct types of energy conversion: respectively, the dissipation of APE into one particular subcomponent of internal energy called the "dead" internal energy IE0, and the conversion between GPEr and a different subcomponent of internal energy called "exergy" IEexergy. In this paper, the behaviour of the ratio ξ is examined for different stratifications all having the same vertical profile of buoyancy frequency N, but different vertical profiles of the parameter Υ = αP/(ρCp), where α is the thermal expansion coefficient, P the hydrostatic pressure, ρ the density, and Cp the specific heat capacity at constant pressure, the equation of state being that for seawater at different particular constant values of salinity. It is found that ξ and Wr,turbulent depend critically on the sign and magnitude of dΥ/dz, in contrast with D(APE), which appears largely unaffected by the latter. These results have important consequences, discussed here, for how mixing efficiency should be defined and measured in practice.
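In LaTeX form, the two definitions at the heart of the abstract read:

```latex
\xi = \frac{W_{r,\mathrm{turbulent}}}{D(\mathrm{APE})}, \qquad
\Upsilon = \frac{\alpha P}{\rho C_p},
```

with the Boussinesq, linear-equation-of-state result being the exact equality D(APE) = Wr,turbulent, i.e. ξ = 1; the paper's point is that ξ departs from unity, and can even change sign, for a compressible, thermally stratified fluid, depending on dΥ/dz.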

Relevance: 20.00%

Abstract:

Polyethylenimine (PEI) is an efficient nonviral gene delivery vector because of its high buffering capacity and DNA condensation ability. In our study, the amino groups on the polymeric backbone were acylated using acetic or propionic anhydride to alter the protonation behaviour and the hydrophilic/hydrophobic balance of the polymer. The concentration of acylated primary amines was determined using a trinitrobenzene sulphonic acid assay. Results showed that the modified polymers had lower buffering capacities in solution than PEI. The polymers were complexed with a plasmid encoding enhanced green fluorescent protein at three different ratios (1:1, 1:2 and 1:10 w/w DNA to polymer) to form polyplexes, and their toxicities and transfection efficiencies were evaluated in HEK 293 cells. Acylation reduced the number of primary amines on the polymer and the surface charge, improving haemocompatibility and reducing cytotoxicity. The reduction in the concentration of amino groups helped to optimise DNA compaction and facilitated polyplex dissociation in the cell, which increased the transfection efficiency of the modified polymers compared with the parent polymer. Polymers with buffering capacities greater than 50% and less than 80% relative to PEI showed higher transfection efficiencies than PEI. The propionic anhydride-modified polymers interacted with DNA in a way that provided both DNA compaction and polyplex dissociation. These systems interacted better with the cell membrane because of their slightly higher lipophilicity, and formed polyplexes that were less cytotoxic than those of the acetic anhydride-modified polymers. Among the vectors tested, 1:0.3 mol/mol PEI:propionic anhydride in a 1:2 w/w DNA:polymer composition provided the best transfection system, with improved transfection efficiency and reduced cytotoxicity.

Relevance: 20.00%

Abstract:

Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of its size and dimensionality, it is necessary to have efficient clustering methods. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K-clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree) to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
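As a point of reference for the sequential technique, here is a minimal serial sketch in which a KD-Tree built over the current centroids answers all nearest-centroid queries each iteration, using SciPy. This is one simple KD-Tree acceleration, not the paper's filtering algorithm or its parallel, load-balanced formulation; names and defaults are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def kmeans_kdtree(points, k, iters=50, seed=0):
    """K-Means where each assignment step queries a KD-Tree of centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # One KD-Tree over k centroids answers all n assignment queries.
        _, labels = cKDTree(centroids).query(points, k=1)
        # Recompute each centre of mass; keep old centroid if cluster is empty.
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):       # squared-error convergence proxy
            break
        centroids = new
    return centroids, labels
```

The load-imbalance problem the abstract targets arises when such trees are built over the data points and partitioned across processors: tree branches prune unevenly, so equal-sized partitions do not imply equal work.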

Relevance: 20.00%

Abstract:

Field studies were carried out on the water and sediment dynamics of the tropical, macro-tidal Daly Estuary. The estuary is shallow, very turbid, about 100 km long, and funnel-shaped at the entrance. In the wet, high-flow season, normal tidal ranges can be suppressed in the estuary, depending on inflow rates, and freshwater becomes dominant up to the mouth. At that time a fraction of the fine sediment load is exported offshore as a bottom-hugging nepheloid layer after the sediment falls out of suspension from the thin, near-surface river plume. The remaining fraction and the riverine coarse sediment form a large sediment bar near the mouth, 10 km long, up to 6 m high and extending across the whole width of the channel. This bar, together with shoals in the estuary, partially ponds the mid- to upper estuary. The bar builds up from the deposition of riverine sediment during a wet season with high runoff and can raise the mean water level in the upper estuary by up to 2 m in the low-flow season. The ponding effect takes about three successive dry years to disappear, as tidal pumping of fine and coarse sediment during the dry (low-flow) season redistributes the sediment forming the bar throughout the estuary. The swift reversal of the tidal currents from ebb to flood generates macro-turbulence that lasts about 20 min. Bed-load transport is preferentially landward and occurs only for water currents greater than 0.6 m s⁻¹. This high threshold velocity suggests that the sand may be cemented by the mud. The Daly Estuary is thus a leaky sediment trap whose efficiency varies both seasonally and inter-annually.

Relevance: 20.00%

Abstract:

A multiple-factor parametrization is described to permit the efficient calculation of the collision efficiency (E) between electrically charged aerosol particles and neutral cloud droplets in numerical models of cloud and climate. The four-parameter representation summarizes the results obtained from a detailed microphysical model of E, which accounts for the different forces acting on the aerosol in the path of falling cloud droplets. The parametrization's range of validity covers aerosol particle radii of 0.4 to 10 μm, aerosol particle densities of 1 to 2.0 g cm⁻³, aerosol particle charges from neutral to 100 elementary charges, and drop radii from 18.55 to 142 μm. The parametrization yields values of E well within an order of magnitude of the detailed model's values across a dataset of 3978 E values: 95% of the modelled-to-parametrized ratios lie between 0.5 and 1.5 for aerosol particle radii between 0.4 and 2.0 μm, and about 96% in the larger size range (2.0 to 10 μm). The parametrization speeds up the calculation of E by a factor of ~10³ compared with the original microphysical model, permitting the inclusion of electric charge effects in numerical cloud and climate models.

Relevance: 20.00%

Abstract:

Nitrogen trifluoride (NF3) is an industrial gas used in the semiconductor industry as a plasma etchant and chamber-cleaning gas. NF3 is an alternative to other potent greenhouse gases and its usage has increased markedly over the last decade. In recognition of its increased relevance, and to aid planning of future usage, we report an updated radiative efficiency and global warming potentials for NF3. Laboratory measurements give an integrated absorption cross-section of 7.04 × 10⁻¹⁷ cm² molecule⁻¹ cm⁻¹ over the spectral region 200 to 2000 cm⁻¹. The radiative efficiency is calculated to be 0.21 W m⁻² ppbv⁻¹ and the 100-year GWP, relative to carbon dioxide, is 17200. These values are approximately 60% higher than previously published estimates, primarily reflecting the higher infrared absorption cross-sections reported here.
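For context, the standard construction that turns a radiative efficiency into a GWP (the generic IPCC-style definition, not a formula quoted in this abstract) is:

```latex
\mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
= \frac{\int_0^H \mathrm{RE}_x \, e^{-t/\tau_x}\, \mathrm{d}t}
       {\int_0^H \mathrm{RE}_{\mathrm{CO_2}}\, R_{\mathrm{CO_2}}(t)\, \mathrm{d}t},
```

where the radiative efficiencies RE are expressed per unit mass, τx is the atmospheric lifetime of gas x, and R_CO2(t) is the CO2 impulse-response function. The abstract's 17200 corresponds to GWP_NF3 at H = 100 years; a higher measured absorption cross-section raises RE_NF3 and hence the GWP roughly proportionally.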

Relevance: 20.00%

Abstract:

An optimized protocol has been developed for the efficient and rapid genetic modification of sugar beet (Beta vulgaris L.). A polyethylene glycol-mediated DNA transformation technique could be applied to protoplast populations enriched specifically for a single totipotent cell type derived from stomatal guard cells, to achieve high transformation frequencies. Bialaphos resistance, conferred by the pat gene, produced a highly efficient selection system. The majority of plants were obtained within 8 to 9 weeks and were appropriate for plant breeding purposes. All were resistant to glufosinate-ammonium-based herbicides. Detailed genomic characterization has verified transgene integration, and progeny analysis showed Mendelian inheritance.

Relevance: 20.00%

Abstract:

In this article we examine sources of technical efficiency for rice farming in Bangladesh. The motivation for the analysis is the need to close the rice yield gap to enable food security. We employ the DEA double bootstrap of Simar and Wilson (2007) to estimate and explain technical efficiency. This technique overcomes severe limitations inherent in the two-stage DEA approach commonly employed in the efficiency literature. From a policy perspective, our results show that the potential efficiency gains available to reduce the yield gap are greater than previously found. Education, extension and credit have statistically significant positive influences on technical efficiency, while age has a negative influence.
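The first stage of the double bootstrap is an ordinary DEA efficiency score. As a hedged illustration of that first stage only (not Simar and Wilson's truncated-regression bootstrap, and with all names illustrative), the output-oriented constant-returns-to-scale score for one farm can be computed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Output-oriented CRS DEA score for unit o (first-stage score only).

    X: (n_units, n_inputs) inputs; Y: (n_units, n_outputs) outputs.
    Maximise theta s.t.  sum_j lam_j x_j <= x_o,
                         sum_j lam_j y_j >= theta * y_o,  lam >= 0.
    Returns theta >= 1; theta = 1 means unit o lies on the frontier.
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([np.zeros((m, 1)), X.T])     # input use <= unit o's inputs
    b_in = X[o]
    A_out = np.hstack([Y[o].reshape(s, 1), -Y.T])  # theta*y_o - Y^T lam <= 0
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]
```

The double bootstrap then regresses these scores on covariates (education, extension, credit, age) via truncated maximum likelihood and bootstraps both stages to correct the bias and serial correlation that invalidate naive two-stage inference.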

Relevance: 20.00%

Abstract:

1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies into the effectiveness of the method.
2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs.
3. Sampling for durations of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling.
4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used.
5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders together requires a minimum sampling effort of 110 sub-samples of 16 seconds each. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of experimental techniques used during both experimental and management sampling regimes within agroecosystems, grasslands or other low-lying vegetation types.