993 results for parallel efficiency


Relevance: 20.00%

Abstract:

The real-time parallel computation of histograms using an array of pipelined cells is proposed and prototyped in this paper, with application to consumer imaging products. The array operates in two modes: histogram computation and histogram reading. The proposed parallel computation method does not use any memory blocks. The resulting histogram bins can be stored in an external memory block in a pipelined fashion for subsequent reading or streaming of the results. The array of cells can be tuned to accommodate the required data-path width in a VLSI image processing engine, as present in many consumer imaging devices. FPGA synthesis of the architectures presented in this paper shows that they compute the real-time histogram of images streamed at over 36 megapixels at 30 frames/s by processing 1, 2 or 4 pixels per clock cycle in parallel.
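
As a rough illustration of how such a cell array accumulates a histogram while consuming several pixels per clock cycle, the following Python sketch models the behaviour in software; it is not the authors' hardware design, and the lane count, bin width and final merge step are assumptions made for the illustration:

```python
import numpy as np

def parallel_histogram(pixels, bins=256, lanes=4):
    """Accumulate a histogram by consuming `lanes` pixels per step,
    mimicking an array of cells that each maintain a partial histogram."""
    partial = np.zeros((lanes, bins), dtype=np.int64)   # one partial histogram per lane
    for start in range(0, len(pixels), lanes):          # one "clock cycle" per iteration
        chunk = pixels[start:start + lanes]
        for lane, value in enumerate(chunk):            # lanes update independently
            partial[lane, value] += 1
    return partial.sum(axis=0)                          # merge step (the "reading" mode)

# usage: 8-bit image streamed as a flat pixel sequence
image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
hist = parallel_histogram(image.ravel(), lanes=4)
assert hist.sum() == image.size
```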

Relevance: 20.00%

Abstract:

The diversification of life involved enormous increases in size and complexity. The evolutionary transitions from prokaryotes to unicellular eukaryotes to metazoans were accompanied by major innovations in metabolic design. Here we show that the scalings of metabolic rate, population growth rate, and production efficiency with body size have changed across the evolutionary transitions. Metabolic rate scales with body mass superlinearly in prokaryotes, linearly in protists, and sublinearly in metazoans, so Kleiber's 3/4 power scaling law does not apply universally across organisms. The scaling of maximum population growth rate shifts from positive in prokaryotes to negative in protists and metazoans, and the efficiency of production declines across these groups. Major changes in metabolic processes during the early evolution of life overcame existing constraints, exploited new opportunities, and imposed new constraints. The 3.5 billion year history of life on earth was characterized by …
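
For reference, the scaling pattern described above can be written as a simple allometric relation; the symbols below are our own shorthand, not taken from the paper:

```latex
% Allometric scaling of metabolic rate B with body mass M: B = B_0 M^alpha.
% The abstract reports alpha > 1 for prokaryotes, alpha ~ 1 for protists and
% alpha < 1 for metazoans, so Kleiber's alpha = 3/4 is not universal.
\[
  B = B_0\, M^{\alpha},
  \qquad
  \alpha
  \begin{cases}
    > 1 & \text{prokaryotes (superlinear)} \\
    \approx 1 & \text{protists (linear)} \\
    < 1 & \text{metazoans (sublinear)}
  \end{cases}
\]
```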

Relevance: 20.00%

Abstract:

A UK field experiment compared a complete factorial combination of three backgrounds (cvs Mercia, Maris Huntsman and Maris Widgeon), three alleles at the Rht-B1 locus as near-isogenic lines (NILs: rht-B1a (tall), Rht-B1b (semi-dwarf), Rht-B1c (severe dwarf)) and four nitrogen (N) fertilizer application rates (0, 100, 200 and 350 kg N/ha). Linear+exponential functions were fitted to the responses of grain yield (GY) and nitrogen-use efficiency (NUE; GY / available N) to N rate. Averaged over N rate and background, Rht-B1b conferred significantly (P<0.05) greater GY, NUE, N uptake efficiency (NUpE; N in above-ground crop / available N) and N utilization efficiency (NUtE; GY / N in above-ground crop) compared with rht-B1a and Rht-B1c. However, the economically optimal N rate (Nopt) for N:grain price ratios of 3.5:1 to 10:1 was also greater for Rht-B1b, and because NUE, NUpE and NUtE all declined with N rate, Rht-B1b failed to increase NUE or its components at Nopt. The adoption of semi-dwarf lines in temperate and humid regions, and the greater N rates that such adoption justifies economically, greatly increase land-use efficiency, but not necessarily NUE.
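
The three efficiencies defined in the abstract compose multiplicatively, which is why declines in NUpE and NUtE with N rate carry through to NUE; the subscripted symbols below are our shorthand for the quantities named above:

```latex
% NUE = grain yield per unit available N; it factors into uptake efficiency
% and utilization efficiency exactly as defined in the abstract, with
% N_crop = N in the above-ground crop.
\[
  \mathrm{NUE}
  = \frac{GY}{N_{\mathrm{available}}}
  = \underbrace{\frac{N_{\mathrm{crop}}}{N_{\mathrm{available}}}}_{\mathrm{NUpE}}
  \times
  \underbrace{\frac{GY}{N_{\mathrm{crop}}}}_{\mathrm{NUtE}}
\]
```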

Relevance: 20.00%

Abstract:

This study examines the numerical accuracy, computational cost, and memory requirements of self-consistent field theory (SCFT) calculations when the diffusion equations are solved with various pseudo-spectral methods and the mean-field equations are iterated with Anderson mixing. The different methods are tested on the triply periodic gyroid and spherical phases of a diblock-copolymer melt over a range of intermediate segregations. Anderson mixing is found to be somewhat less effective with the pseudo-spectral methods than when combined with the full-spectral method, but it nevertheless functions admirably well provided that a large number of histories is used. Of the different pseudo-spectral algorithms, the 4th-order one of Ranjan, Qin and Morse performs best, although not quite as efficiently as the full-spectral method.
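
For orientation, the core operation being benchmarked is the pseudo-spectral propagation of the chain propagator q(r, s) under a field w(r). The sketch below shows the standard second-order operator-splitting step that higher-order pseudo-spectral schemes build upon; it is a generic illustration, not the paper's code, and the grid size, field and step size are placeholders:

```python
import numpy as np

def pseudo_spectral_step(q, w, ksq, ds):
    """One second-order operator-splitting update of dq/ds = lap(q) - w*q
    on a periodic grid. q and w are real arrays of identical shape; ksq
    holds |k|^2 on the FFT grid; ds is the contour step."""
    q = np.exp(-0.5 * ds * w) * q                                # half-step in the field
    q = np.fft.ifftn(np.exp(-ds * ksq) * np.fft.fftn(q)).real    # diffusion step in k-space
    q = np.exp(-0.5 * ds * w) * q                                # second half-step in the field
    return q

# usage on a small periodic box (illustrative values only)
n, L, ds = 32, 4.0, 0.01
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
ksq = kx**2 + ky**2 + kz**2
w = np.zeros((n, n, n))          # placeholder field; SCFT would iterate this with Anderson mixing
q = np.ones((n, n, n))           # initial condition q(r, 0) = 1
q = pseudo_spectral_step(q, w, ksq, ds)
```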

Relevance: 20.00%

Abstract:

The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying building energy efficiency based on the Dempster-Shafer (D-S) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides a method to assess and certify building energy efficiency, and serves as an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including the identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with the inherent uncertainties within the assessment procedure. This paper demonstrates the robustness, flexibility and effectiveness of the proposed method, using two examples to assess the energy efficiency of two residential buildings, both located in the ‘Hot Summer and Cold Winter’ zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, with the indicator weighting system adjusted to reflect local climatic, economic and social factors.
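
The evidence-combination step at the heart of a D-S/ER assessment can be sketched as follows. This is a generic implementation of Dempster's rule over assessment grades, not the HMEEB code; the indicator names, grades and masses are invented for the example:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozensets
    of grades to mass) with Dempster's rule; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# usage: two hypothetical indicators rating a building on invented grades
grades = frozenset  # shorthand for readability
m_envelope = {grades({"good"}): 0.6, grades({"average"}): 0.3,
              grades({"good", "average", "poor"}): 0.1}
m_hvac     = {grades({"good"}): 0.5, grades({"poor"}): 0.2,
              grades({"good", "average", "poor"}): 0.3}
print(dempster_combine(m_envelope, m_hvac))
```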

Relevance: 20.00%

Abstract:

Rapid urbanisation in China has resulted in great demands for energy and resources, and in pressure on the environment. The progress in China's development is considered in the context of energy efficiency in the built environment, including policy, technology and implementation. The key research challenges and opportunities for delivering a low-carbon built environment are identified. The barriers include the existing traditional sequential design process, the lack of integrated approaches, and insufficient socio-technical knowledge. A proposed conceptual systemic model of an integrated approach identifies research opportunities. Research activities should be initiated, operated and managed collaboratively among policymakers, professionals, researchers and stakeholders. More emphasis is needed on integrating social, economic and environmental impacts in the short, medium and long terms. An ideal opportunity exists for China to develop its own expertise, not merely in a technical sense but in terms of vision and intellectual leadership, in order to flourish in global collaborations.

Relevance: 20.00%

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
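
As a toy example of the communication-efficient data parallelism that runs through these contributions (not taken from any of the workshop papers; the data and partitioning are invented), candidate-itemset support counting can be done locally on each partition and merged with a single reduction:

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def count_pairs(transactions):
    """Count candidate 2-itemsets in one data partition (local work, no communication)."""
    counts = Counter()
    for items in transactions:
        counts.update(combinations(sorted(items), 2))
    return counts

if __name__ == "__main__":
    data = [
        {"bread", "milk"}, {"bread", "beer", "milk"},
        {"beer", "milk"}, {"bread", "milk"},
    ]
    partitions = [data[0::2], data[1::2]]          # split transactions across workers
    with Pool(processes=2) as pool:
        partials = pool.map(count_pairs, partitions)
    totals = sum(partials, Counter())              # one merge step is the only communication
    print(totals.most_common(3))
```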

Relevance: 20.00%

Abstract:

We present an outlook on the thermodynamics of the climate system. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for the validation of climate models.
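
Written out in symbols (our own notation, inferred from the abstract rather than taken from the paper), the entropy production bound reads:

```latex
% eta is the efficiency of the equivalent Carnot engine, \dot{S}_i(t) the rate
% of internal entropy fluctuations, and the integral runs over the averaging
% period; this is a restatement of the bound quoted above, not the paper's
% exact formula.
\[
  \Delta S_{\mathrm{prod}}
  \;\ge\;
  \eta \int_{0}^{\tau} \bigl|\dot{S}_{i}(t)\bigr| \,\mathrm{d}t
\]
```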