977 results for computational efficiency



This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles, for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low-level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) and (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
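Low-level template matching of the kind stages (i) and (iii) depend on can be illustrated with a generic 1-D normalized cross-correlation sketch (not the paper's implementation; the template, scanline and all names here are invented toy data):

```python
import numpy as np

def ncc_1d(signal, template):
    """Normalized cross-correlation of a 1-D template against a signal.

    Returns one correlation score per alignment; the argmax gives the
    best match position.
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)  # zero-mean, unit-variance window
        scores[i] = np.dot(t, w) / n
    return scores

# Toy example: locate a step-edge template inside a noisy scanline.
rng = np.random.default_rng(0)
template = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
scanline = np.concatenate([np.zeros(10), np.ones(5), np.zeros(10)])
scanline += 0.05 * rng.standard_normal(scanline.size)
best = int(np.argmax(ncc_1d(scanline, template)))
print(best)  # -> 8: the window signal[8:13] matches the template shape exactly
```

Because both the template and each window are normalized, the score is insensitive to the absolute brightness and contrast of the scanline, which is why this measure is a common choice for template matching.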


At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.


Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, one which takes advantage of computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.
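The core idea of adaptive mesh refinement, concentrating resolution where the solution varies fastest, can be sketched in one dimension (a generic illustration with an assumed refinement criterion, not a weather-model discretization):

```python
import numpy as np

def refine_1d(x, f, threshold, max_levels=5):
    """One-dimensional adaptive refinement sketch.

    Starting from node positions x, repeatedly bisect any interval whose
    endpoint-to-endpoint jump in f exceeds threshold, so that resolution
    concentrates where the field varies fastest.
    """
    for _ in range(max_levels):
        jumps = np.abs(np.diff(f(x)))
        flagged = np.where(jumps > threshold)[0]
        if flagged.size == 0:          # mesh resolves the field everywhere
            break
        midpoints = 0.5 * (x[flagged] + x[flagged + 1])
        x = np.sort(np.concatenate([x, midpoints]))
    return x

# Refine around a sharp front at x = 0.5 (a stand-in for, e.g., a strong
# temperature gradient); nodes are added only near the front.
f = lambda x: np.tanh(50.0 * (x - 0.5))
coarse = np.linspace(0.0, 1.0, 11)
fine = refine_1d(coarse, f, threshold=0.2)
print(len(coarse), len(fine))
```

Real atmospheric and oceanic AMR schemes refine in space and time with conservative flux matching at coarse-fine interfaces; this sketch only shows the flag-and-bisect cycle that drives them.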


There exist two central measures of turbulent mixing in turbulent stratified fluids, both caused by molecular diffusion: 1) the dissipation rate D(APE) of available potential energy APE; 2) the turbulent rate of change W_r,turbulent of background gravitational potential energy GPE_r. So far, these two quantities have often been regarded as the same energy conversion, namely the irreversible conversion of APE into GPE_r, owing to the well-known exact equality D(APE) = W_r,turbulent for a Boussinesq fluid with a linear equation of state. Recently, however, Tailleux (2009) pointed out that the above equality no longer holds for a thermally stratified compressible fluid, with the ratio ξ = W_r,turbulent/D(APE) being generally lower than unity and sometimes even negative for water or seawater, and argued that D(APE) and W_r,turbulent actually represent two distinct types of energy conversion: respectively, the dissipation of APE into one particular subcomponent of internal energy called the "dead" internal energy IE_0, and the conversion between GPE_r and a different subcomponent of internal energy called "exergy", IE_exergy. In this paper, the behaviour of the ratio ξ is examined for different stratifications all having the same vertical profile of buoyancy frequency N, but different vertical profiles of the parameter Υ = αP/(ρC_p), where α is the thermal expansion coefficient, P the hydrostatic pressure, ρ the density, and C_p the specific heat capacity at constant pressure, the equation of state being that of seawater for different particular constant values of salinity. It is found that ξ and W_r,turbulent depend critically on the sign and magnitude of dΥ/dz, in contrast with D(APE), which appears largely unaffected by the latter. These results have important consequences, which are discussed, for how mixing efficiency should be defined and measured in practice.
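To fix orders of magnitude for the parameter Υ = αP/(ρCp), a quick evaluation with assumed seawater-like values (illustrative numbers only, not the paper's profiles) can be sketched as:

```python
def upsilon(alpha, pressure, rho, cp):
    """Dimensionless parameter Upsilon = alpha * P / (rho * Cp)."""
    return alpha * pressure / (rho * cp)

# Assumed, order-of-magnitude seawater values: thermal expansion
# coefficient ~2e-4 K^-1, density ~1025 kg m^-3, heat capacity
# ~3990 J kg^-1 K^-1, hydrostatic pressure P = rho * g * depth.
g = 9.81
rho, cp, alpha = 1025.0, 3990.0, 2.0e-4

for depth in (100.0, 1000.0):          # metres
    p = rho * g * depth                # hydrostatic pressure in Pa
    print(depth, upsilon(alpha, p, rho, cp))   # ~4.9e-5 and ~4.9e-4
```

With α, ρ and Cp held fixed as here, Υ simply grows linearly with pressure; the sign of dΥ/dz studied in the paper comes from how the real α, ρ and Cp profiles vary with depth against the pressure increase.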


Polyethylenimine (PEI) is an efficient nonviral gene delivery vector because of its high buffering capacity and DNA condensation ability. In our study, the amino groups on the polymeric backbone were acylated using acetic or propionic anhydride to alter the protonation behaviour and the hydrophilic/hydrophobic balance of the polymer. The concentration of acylated primary amines was determined using trinitrobenzene sulphonic acid assay. Results showed that our modified polymers had lower buffering capacities in solutions compared to PEI. The polymers were complexed with plasmid encoding enhanced green fluorescent protein at three different ratios (1:1, 1:2 and 1:10 w/w DNA to polymer) to form polyplexes and their toxicities and transfection efficiencies were evaluated in HEK 293 cells. Acylation reduced the number of primary amines on the polymer and the surface charge, improving haemocompatibility and reducing cytotoxicity. The reduction in the concentration of amino groups helped to optimise DNA compaction and facilitated polyplex dissociation in the cell, which increased transfection efficiency of the modified polymers compared to the parent polymer. Polymers with buffering capacities greater than 50% and less than 80% relative to PEI, showed higher transfection efficiencies than PEI. The propionic anhydride modified polymers had appropriate interactions with DNA which provided both DNA compaction and polyplex dissociation. These systems interacted better with the cell membrane because of their slightly higher lipophilicity and formed polyplexes which were less cytotoxic than polyplexes of acetic anhydride modified polymers. Among the vectors tested, 1:0.3 mol/mol PEI:propionic anhydride in a 1:2 w/w DNA:polymer composition provided the best transfection system with improved transfection efficiency and reduced cytotoxicity.


Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both size and dimensionality, it is necessary to have efficient clustering methods. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have largely been explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
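The sequential starting point can be sketched as follows (a minimal illustration, not the paper's parallel formulation: here a KD-tree indexes the current centroids for the assignment step, whereas the efficient variants discussed above index the data points and prune whole subtrees; the seeding scheme and toy data are invented):

```python
import numpy as np
from scipy.spatial import cKDTree

def farthest_first_seeds(points, k):
    """Deterministic seeding: start at the first point, then repeatedly
    take the point farthest from all seeds chosen so far."""
    idx = [0]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(points[:, None, :] - points[idx], axis=2), axis=1)
        idx.append(int(np.argmax(d)))
    return points[idx].copy()

def kmeans_kdtree(points, k, iters=100):
    """Lloyd's K-Means with the nearest-centroid search of each
    assignment step routed through a KD-tree over the current centroids."""
    centroids = farthest_first_seeds(points, k)
    for _ in range(iters):
        _, labels = cKDTree(centroids).query(points)       # assignment step
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])                # update step
        if np.allclose(new, centroids):                    # converged
            break
        centroids = new
    return centroids, labels

# Two well-separated 2-D blobs: the recovered centres land near (0,0) and (5,5).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centres, labels = kmeans_kdtree(pts, k=2)
```

A parallel formulation must partition this work across nodes; because a KD-tree prunes different amounts of computation in different regions of the data, the per-node load is irregular, which is exactly the imbalance the paper addresses.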


Field studies were carried out on the water and sediment dynamics in the tropical, macro-tidal Daly Estuary. The estuary is shallow, very turbid, about 100 km long, and its entrance is funnel-shaped. In the wet, high flow season, normal tidal ranges can be suppressed in the estuary, depending on inflow rates, and freshwater becomes dominant up to the mouth. At that time a fraction of the fine sediment load is exported offshore as a bottom-hugging nepheloid layer after the sediment falls out of suspension from the thin, near-surface river plume. The remaining fraction and the riverine coarse sediment form a large sediment bar near the mouth, 10 km long, up to 6 m in height, and extending across the whole width of the channel. This bar, as well as shoals in the estuary, partially ponds the mid- to upper estuary. The bar builds up from the deposition of riverine sediment during a wet season with high runoff and can raise the mean water level by up to 2 m in the upper estuary during the low flow season. The ponding effect takes about three successive dry years to disappear, as the sediment forming the bar is redistributed over the estuary by tidal pumping of fine and coarse sediment during the dry (low flow) season. The swift reversal of the tidal currents from ebb to flood results in macro-turbulence that lasts about 20 min. Bed load transport is preferentially landward and occurs only for water currents greater than 0.6 m s^-1. This high threshold velocity suggests that the sand may be cemented by the mud. The Daly Estuary is thus a leaky sediment trap whose efficiency varies both seasonally and inter-annually.


A multiple factor parametrization is described to permit the efficient calculation of the collision efficiency (E) between electrically charged aerosol particles and neutral cloud droplets in numerical models of cloud and climate. The four-parameter representation summarizes the results obtained from a detailed microphysical model of E, which accounts for the different forces acting on the aerosol in the path of falling cloud droplets. The parametrization's range of validity is aerosol particle radii of 0.4 to 10 µm, aerosol particle densities of 1 to 2.0 g cm^-3, aerosol particle charges from neutral to 100 elementary charges, and drop radii from 18.55 to 142 µm. The parametrization yields values of E well within an order of magnitude of the detailed model's values, from a dataset of 3978 E values. Of these values, 95% have modelled-to-parametrized ratios between 0.5 and 1.5 for aerosol particle sizes between 0.4 and 2.0 µm, and about 96% in the second size range (2.0 to 10 µm). This parametrization speeds up the calculation of E by a factor of ~10^3 compared with the original microphysical model, permitting the inclusion of electric charge effects in numerical cloud and climate models.


Nitrogen trifluoride (NF3) is an industrial gas used in the semiconductor industry as a plasma etchant and chamber cleaning gas. NF3 is an alternative to other potent greenhouse gases and its usage has increased markedly over the last decade. In recognition of its increased relevance, and to aid planning of future usage, we report an updated radiative efficiency and global warming potentials for NF3. Laboratory measurements give an integrated absorption cross section of 7.04 × 10^-17 cm^2 molecule^-1 cm^-1 over the spectral region 200–2000 cm^-1. The radiative efficiency is calculated to be 0.21 W m^-2 ppbv^-1 and the 100 year GWP, relative to carbon dioxide, is 17200. These values are approximately 60% higher than previously published estimates, primarily reflecting the higher infrared absorption cross-sections reported here.


An optimized protocol has been developed for the efficient and rapid genetic modification of sugar beet (Beta vulgaris L.). A polyethylene glycol-mediated DNA transformation technique could be applied to protoplast populations enriched specifically for a single totipotent cell type derived from stomatal guard cells, to achieve high transformation frequencies. Bialaphos resistance, conferred by the pat gene, produced a highly efficient selection system. The majority of plants were obtained within 8 to 9 weeks and were appropriate for plant breeding purposes. All were resistant to glufosinate-ammonium-based herbicides. Detailed genomic characterization has verified transgene integration, and progeny analysis showed Mendelian inheritance.


In this article we examine sources of technical efficiency for rice farming in Bangladesh. The motivation for the analysis is the need to close the rice yield gap to enable food security. We employ the DEA double bootstrap of Simar and Wilson (2007) to estimate and explain technical efficiency. This technique overcomes severe limitations inherent in the two-stage DEA approach commonly employed in the efficiency literature. From a policy perspective, our results show that the potential efficiency gains available to reduce the yield gap are greater than previously found. Education, extension and credit are statistically significant positive influences on technical efficiency, while age is a negative influence.
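The first-stage efficiency scores that the double bootstrap then corrects can be illustrated with a basic input-oriented, constant-returns (CCR) DEA linear program (a generic sketch with invented toy farms, not the authors' model or data; scipy's linprog is assumed available):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR DEA efficiency of decision-making unit o.

    Solves: minimise theta
            subject to  sum_j lam_j * x_j <= theta * x_o
                        sum_j lam_j * y_j >= y_o
                        lam >= 0
    Decision vector is [theta, lam_1, ..., lam_n].
    """
    n, m = inputs.shape            # n units, m inputs
    _, s = outputs.shape           # s outputs
    c = np.zeros(n + 1)
    c[0] = 1.0                     # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -inputs[o]       # -theta * x_o on the input rows
    A_ub[:m, 1:] = inputs.T        #  sum_j lam_j x_j
    A_ub[m:, 1:] = -outputs.T      # -sum_j lam_j y_j <= -y_o
    b_ub[m:] = -outputs[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical farms (single input = land, single output = rice yield).
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [4.0], [1.5]])
scores = [dea_ccr_efficiency(X, Y, o) for o in range(3)]
# Farms 0 and 1 sit on the frontier (score 1); farm 2 scores 0.5, i.e. it
# could produce its output with half its input.
```

Scores like these are biased in finite samples and cannot be validly regressed on covariates such as education or credit in a naive second stage; that is precisely the limitation the Simar-Wilson double bootstrap is designed to overcome.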