686 results for Processors
Abstract:
A mature Caribbean pine (Pinus caribaea var. hondurensis) silviculture experiment provided initial square spacing treatments of 1.8 m, 2.4 m, 3.0 m and 3.6 m (equivalent to 3088, 1737, 1111 and 772 stems/ha) that were thinned at age 10 years to 600, 400 and 200 stems/ha, retaining an unthinned control for each initial spacing. The trial was destructively sampled at age 28 years, and discs taken at eight stem heights were analysed for variation in basic density and SilviScan wood properties. In addition, the logs from ten stocking × thinning treatments were processed in a sawing study. Results indicate thinning effects were generally more pronounced than initial spacing effects. Fast-growing trees produced wood with significantly higher average wood densities and higher average stiffness values. Detailed SilviScan densitometry results obtained radially and at various stem heights enabled construction of tree maps for wood properties, providing insights into the variation in juvenile-to-mature wood proportion across the initial and post-thinning stocking treatments studied. Dried dressed recovery was strongly related to tree size, and log value decreased consistently from butt to top logs across all treatments. The estimated value per hectare was highest in unthinned plots because values were multiplied by high stem numbers per hectare. However, a complete economic analysis considering all cost structures is required to identify the optimal silviculture for maximising economic returns to growers and processors. Improved understanding of the relationship between initial spacing, post-thinning stocking, and wood and end-product quality should help to customize future forest management strategies required to produce better quality wood and wood products.
Abstract:
A global recursive bisection algorithm is described for computing the complex zeros of a polynomial. It has complexity O(n³p), where n is the degree of the polynomial and p the bit precision requirement. If n processors are available, it can be realized in parallel with complexity O(n²p); it can also be implemented using exact arithmetic. A combined Wilf-Hansen algorithm is suggested for reduction in complexity.
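The flavour of such a global bisection method can be sketched briefly. The following Python snippet is my own floating-point illustration, not the paper's algorithm (which works with exact arithmetic and a Wilf-Hansen combination): it recursively quarters a square region of the complex plane, keeping only sub-boxes whose boundary winding number (a numerical argument-principle count) indicates they contain zeros.

```python
import numpy as np

def winding_number(coeffs, corners, samples=400):
    # count zeros inside a rectangle via the argument principle:
    # the winding number of p(z) as z traverses the boundary
    pts = []
    for a, b in zip(corners, corners[1:] + corners[:1]):
        t = np.linspace(0, 1, samples, endpoint=False)
        pts.append(a + t * (b - a))
    w = np.polyval(coeffs, np.concatenate(pts))
    dphi = np.diff(np.angle(np.append(w, w[0])))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap steps into (-pi, pi]
    return int(round(dphi.sum() / (2 * np.pi)))

def bisect_zeros(coeffs, lo, hi, tol=1e-9):
    # recursively quarter the box [lo, hi], discarding zero-free quadrants;
    # naive sketch: assumes no zero falls (numerically) on a box boundary
    corners = [lo, complex(hi.real, lo.imag), hi, complex(lo.real, hi.imag)]
    k = winding_number(coeffs, corners)
    if k == 0:
        return []
    if abs(hi - lo) < tol:
        return [(lo + hi) / 2] * k        # zero located, with multiplicity
    mid = (lo + hi) / 2
    quads = [(lo, mid),
             (complex(mid.real, lo.imag), complex(hi.real, mid.imag)),
             (mid, hi),
             (complex(lo.real, mid.imag), complex(mid.real, hi.imag))]
    return [z for a, b in quads for z in bisect_zeros(coeffs, a, b, tol)]

print(bisect_zeros([1, 0, 1], -2.1 - 2.3j, 1.9 + 2.2j))  # z^2 + 1 -> approx +/-1j
```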
Abstract:
Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with an empirical evaluation.
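As a worked illustration of the Chebyshev-based thresholding mentioned above (a sketch under my own assumptions, with made-up scores and weights rather than the paper's data): for a fused score S with mean μ and standard deviation σ, Chebyshev's inequality P(|S − μ| ≥ kσ) ≤ 1/k² holds for any distribution, so tolerating a false-alarm bound α suggests the threshold T = μ + σ/√α.

```python
import numpy as np

# Hedged sketch: choosing a fusion threshold via the Chebyshev inequality.
# Scores and weights below are illustrative, not from the paper.
rng = np.random.default_rng(0)
scores = rng.random((1000, 3))          # 1000 events, 3 IDS sensors
weights = np.array([0.5, 0.3, 0.2])     # hypothetical learned reliabilities

fused = scores @ weights                # weighted aggregation of sensor scores
mu, sigma = fused.mean(), fused.std()

alpha = 0.01                            # tolerated false-alarm bound
threshold = mu + sigma / np.sqrt(alpha) # Chebyshev: distribution-free, hence conservative
alarms = fused > threshold
print(f"threshold={threshold:.3f}, alarms={alarms.sum()}")
```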
Abstract:
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
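To make the idea concrete, here is a minimal toy sketch of PMC (my illustration on a 1-D Gaussian target, not the paper's cosmological setup). Each iteration draws an independent population from the current proposal, which is what makes the workload embarrassingly parallel, reweights it by target over proposal, and updates the proposal from the weighted moments.

```python
import numpy as np

# Hedged sketch of population Monte Carlo (adaptive importance sampling).
# The cosmological likelihoods in the paper would replace `log_target`.
rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * (x - 3.0) ** 2      # unnormalised N(3, 1)

mu, sigma, n = 0.0, 5.0, 2000
for it in range(10):
    x = rng.normal(mu, sigma, n)                  # draw the population (parallelizable)
    # importance weights: log target density minus log proposal density
    log_w = log_target(x) - (-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    mu = np.sum(w * x)                            # weighted proposal update
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
print(mu, sigma)   # should approach 3.0 and 1.0
```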
Abstract:
A commercial issue currently facing native plant food producers and food processors, and identified by the industry itself, is that of delivering quality products consistently and at reasonable cost to end users, based on a sound food technology and nutrition platform. A literature survey carried out in July 2001 by the DPI&F’s Centre for Food Technology, Brisbane, in collaboration with the University of Queensland, to collect the latest information at that time on the functional food market as it pertained to native food plants, indicated that little or no work had been published on this topic. This project addresses two key RIRDC sub-program strategies: to identify and evaluate processes or products with prospects of commercial viability, and to assist in the development of integrated production, harvesting, processing and marketing systems. This project proposal also reflects a key RIRDC R&D issue for 2002-2003: that of linking with prospective members of the value chain. The purpose of this project was to obtain chemical data on the post-harvest stability of functional nutritional components (bioactives) in commercially available, hand-harvested bush tomato and Kakadu plum. The project concentrated on evaluating bioactive stability as a measure of ingredient quality.
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware are interconnected during runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the bound CEs are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC and the hardware reconfigurability and programmability of an FPGA/instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set extensible processors determine a sequence of instructions that repeatedly occurs within the application to create custom instructions at design time to speed up the execution of this sequence. We extend this scheme further, where a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overheads of data transfer between custom instructions, direct communication paths are employed among custom instructions. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
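To illustrate the flavour of the producer-consumer grouping described above (a toy sketch under my own assumptions; the compiler's actual substructure-selection pass and capacity limits are not specified here), the snippet below greedily merges operations of a small dataflow graph along producer-consumer edges into bounded-size clusters, each cluster standing in for one runtime hardware composition.

```python
from collections import defaultdict

# Toy kernel dataflow as producer -> consumer edges (illustrative names).
edges = [("load_a", "mul"), ("load_b", "mul"), ("mul", "add"),
         ("add", "store")]

cluster = {}                                   # op -> parent in union-find
def find(op):
    while cluster.get(op, op) != op:
        op = cluster[op]
    return op

MAX_OPS = 3                                    # assumed capacity of one composition
size = defaultdict(lambda: 1)
for prod, cons in edges:                       # merge along dataflow edges
    a, b = find(prod), find(cons)
    if a != b and size[a] + size[b] <= MAX_OPS:
        cluster[b] = a
        size[a] += size[b]

groups = defaultdict(list)
for op in {o for e in edges for o in e}:
    groups[find(op)].append(op)
print(dict(groups))                            # candidate custom-instruction groups
```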
Abstract:
This paper reports the design of an input-triggered polymorphic ASIC for an H.264 baseline decoder. Hardware polymorphism is achieved by selectively reusing hardware resources at the system and module level. The complete design is done using ESL design tools, following a methodology that maintains consistency in testing and verification throughout the design flow. The proposed design can support frame sizes from QCIF to 1080p.
Abstract:
There has recently been rapidly increasing interest in solar-powered UAVs. With the emergence of high-power-density batteries, long-range and low-power micro radio devices, airframes, and powerful micro-processors and motors, small/micro UAVs have become applicable in civilian applications such as remote sensing, mapping, traffic monitoring, and search and rescue. The Green Falcon UAV is an innovative project from Queensland University of Technology and has been developed and tested over the past years. It comprises a wide range of subsystems to be analysed and studied, such as the solar panel cells, gas sensor, wing aerodynamics and others. Previous tests, however, resulted in damage to the solar cells and some of the subsystems, including the motor and ESC. This report describes the repair and verification process followed to improve the efficiency of the Green Falcon UAV. The report presents some of the results obtained in previous static and flight tests, as well as some recommendations.
Abstract:
Run-time interoperability between different applications based on H.264/AVC is an emerging need in networked infotainment, where media delivery must match the desired resolution and quality of the end terminals. In this paper, we describe the architecture and design of a polymorphic ASIC to support this. The H.264 decoding flow is partitioned into modules such that the polymorphic ASIC meets the design goals of low power, low area, high flexibility, high throughput and fast interoperability between different profiles and levels of H.264. We demonstrate the idea with a multi-mode decoder that can decode baseline, main and high profile H.264 streams and can interoperate at run-time across these profiles. The decoder is capable of processing frame sizes of up to 1024 × 768 at 30 fps. The design, synthesized with UMC 0.13 µm technology, occupies 250k gates and runs at 100 MHz.
Abstract:
In modern wireline and wireless communication systems, the Viterbi decoder is one of the most compute-intensive and essential elements. Each standard requires a different configuration of the Viterbi decoder, hence there is a need for a flexible reconfigurable Viterbi decoder that supports different configurations on a single platform. In this paper we present a reconfigurable Viterbi decoder which can be reconfigured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB, and GSM. Different parameters, such as code rate, constraint length, generator polynomials and truncation length, can be configured to map any of the above-mentioned standards. Our design provides higher throughput and scalable power consumption across the various configurations of the reconfigurable Viterbi decoder. The power and throughput can also be optimized for different standards.
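As a sketch of the algorithm such a decoder implements (a plain software model assuming a rate-1/n code with hard-decision decoding; hardware concerns such as truncation length and path-metric normalisation are omitted), the following decodes against a configurable constraint length and generator polynomials:

```python
def viterbi_decode(rx_bits, K, gens):
    # K: constraint length; gens: generator polynomials (MSB = current input bit)
    n_states = 1 << (K - 1)
    r = len(gens)

    def branch(state, b):
        reg = (b << (K - 1)) | state               # input bit followed by state bits
        out = [bin(reg & g).count("1") & 1 for g in gens]
        nxt = (state >> 1) | (b << (K - 2))
        return out, nxt

    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)        # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx_bits), r):
        rx = rx_bits[i:i + r]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                out, nxt = branch(s, b)
                m = metric[s] + sum(x != y for x, y in zip(out, rx))
                if m < new_metric[nxt]:            # survivor selection
                    new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=metric.__getitem__)]

def encode(msg, K, gens):
    state, out = 0, []
    for b in msg:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in gens]
        state = (state >> 1) | (b << (K - 2))
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg, 3, [0b111, 0b101])             # K=3 rate-1/2 code (7, 5 octal)
assert viterbi_decode(coded, 3, [0b111, 0b101]) == msg
```

Reconfiguring for another standard amounts to changing K, the generator set and the rate, which is the parameter space the hardware design above exposes.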
Abstract:
A polymorphic ASIC is a runtime reconfigurable hardware substrate comprising compute and communication elements. It is a “future proof” custom hardware solution for multiple applications and their derivatives in a domain. Interoperability between application derivatives at runtime is achieved through hardware reconfiguration. In this paper we present the design of a single-cycle Network on Chip (NoC) router that is responsible for effecting runtime reconfiguration of the hardware substrate. The router design is optimized to avoid FIFO buffers at the input port and loop-back at the output crossbar. It provides virtual channels to emulate a non-blocking network and supports a simple X-Y relative addressing scheme to limit the control overhead to 9 bits per packet. The 8×8 honeycomb NoC (RECONNECT), implemented in a 130 nm UMC CMOS standard cell library, operates at 500 MHz and has a bisection bandwidth of 28.5 GBps. The network is characterized for random, self-similar and application-specific traffic patterns that model the execution of multimedia and DSP kernels with varying network loads and virtual channels. Our implementation with 4 virtual channels has an average network latency of 24 clock cycles and a throughput of 62.5% of the network capacity for random traffic. For application-specific traffic the latency is 6 clock cycles and the throughput is 87% of the network capacity.
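X-Y relative addressing can be illustrated with a small sketch (my simplification on a plain mesh; the paper's honeycomb topology and exact 9-bit packet encoding are more involved): each packet carries signed remaining X and Y hop offsets that the router decrements as the packet moves, so no routing tables are needed, and the X dimension is exhausted before Y.

```python
def route(dx, dy):
    """Return the next output port and updated offsets for a packet
    whose destination lies dx hops east and dy hops north (signed)."""
    if dx > 0:
        return "EAST", dx - 1, dy
    if dx < 0:
        return "WEST", dx + 1, dy
    if dy > 0:
        return "NORTH", dx, dy - 1
    if dy < 0:
        return "SOUTH", dx, dy + 1
    return "LOCAL", 0, 0     # offsets exhausted: deliver to the attached CE

# a packet whose destination is two hops west (dx=-2) and one hop north (dy=+1)
port, dx, dy = route(-2, 1)  # -> "WEST": the X dimension is routed first
```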
Abstract:
Computational docking of ligands to protein structures is a key step in structure-based drug design. Currently, the time required for each docking run is high and thus limits the use of docking in a high-throughput manner, warranting parallelization of docking algorithms. AutoDock, a widely used tool, was chosen for parallelization. Near-linear increases in speed were observed with 96 processors, reducing the time required for docking ligands to HIV-protease, as an example, from 81 min on a single IBM Power-5 processor (1.65 GHz) to about 1 min on an IBM cluster with 96 such processors. This implementation would make it feasible to perform virtual ligand screening using AutoDock.
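The pattern exploited here is embarrassingly parallel: independent docking runs farmed out to workers. A minimal sketch in Python (illustrative only; `dock_one` is a hypothetical placeholder that would wrap a real AutoDock invocation, and the study targets a cluster rather than a single shared-memory node):

```python
from multiprocessing import Pool

def dock_one(ligand):
    # placeholder for one independent docking run; a real version would
    # launch AutoDock for this ligand and parse the best binding energy
    return ligand, sum(ord(c) for c in ligand) % 100

if __name__ == "__main__":
    ligands = [f"ligand_{i}" for i in range(1000)]
    with Pool(96) as pool:                 # one worker per available processor
        results = pool.map(dock_one, ligands)
    print(min(results, key=lambda r: r[1]))
```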
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software-pipeline the execution of stream programs on GPUs. We formulate this problem, covering both scheduling and assignment of filters to processors, as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available on GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
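The assignment core of such an ILP can be sketched as follows (a simplified illustration using the PuLP library, with made-up filter work estimates; the paper's formulation additionally models software pipelining, buffer layout, synchronization and bandwidth, which are omitted here):

```python
import pulp

filters = {"source": 2, "fir": 7, "decim": 3, "sink": 1}   # hypothetical work estimates
procs = range(4)

prob = pulp.LpProblem("filter_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (filters, procs), cat="Binary")
span = pulp.LpVariable("span", lowBound=0)

prob += span                                   # minimise the bottleneck (maximum) load
for f in filters:                              # each filter lives on exactly one processor
    prob += pulp.lpSum(x[f][p] for p in procs) == 1
for p in procs:                                # every processor's load bounds the span
    prob += pulp.lpSum(filters[f] * x[f][p] for f in filters) <= span
prob.solve()

for f in filters:
    print(f, [p for p in procs if x[f][p].value() == 1])
```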
Abstract:
The aim of this report is to discuss the role of the relationship type and communication in two Finnish food chains, namely the pig meat-to-sausage (pig meat chain) and the cereal-to-rye bread (rye chain) chains. Furthermore, the objective is to examine those factors influencing the choice of a relationship type and the sustainability of a business relationship. Altogether 1808 questionnaires were sent to producers, processors and retailers operating in these two chains, of which 224 usable questionnaires were returned (a response rate of 12.4%). The great majority of the respondents (98.7%) were small businesses employing fewer than 50 people. Almost 70 per cent of the respondents were farmers. In both chains, formal contracts were stated to be the most important relationship type used with business partners. Although for many businesses written contracts are a common business practice, the essential role of the contracts was the security they provide regarding the demand/supply and quality issues. Regarding the choice of relationship types, the main difference between the two chains emerged in the prevalence of spot markets and financial participation arrangements. The usage of spot markets was significantly more common in the rye chain when compared to the pig meat chain, while, on the other hand, financial participation arrangements were much more common among the businesses in the pig meat chain than in the rye chain. Furthermore, the analysis showed that most of the businesses in the pig meat chain claimed not to be free to choose the relationship type they use. Especially membership in a co-operative and the practices of a business partner were mentioned as reasons limiting this freedom of choice. The main business relations in both chains were described as having a long-term orientation and being based on formal written contracts. It was also typical of the main business relationships that they were not based on key persons only; the relationship would remain even if the key people left the business. The quality of these relationships was satisfactory in both chains and across all the stakeholder groups, though the downstream processors and the retailers had a slightly more positive view of their main business partners than the farmers and the upstream processors. The businesses operating in the pig meat chain also seemed to be more dependent on their main business relations when compared to the businesses in the rye chain. Although the communication means were rather similar in both chains (the phone being the most important), there was some variation between the chains concerning the communication frequency necessary to maintain the relationship with the main business partner. In short, the businesses in the pig meat chain seemed to appreciate more frequent communication with their main business partners when compared to the businesses in the rye chain. Personal meetings with the main business partners were quite rare in both chains. All the respondent groups were, however, fairly satisfied with the communication frequency and information quality between them and the main business partner. The business cultures could be argued to be rather hegemonic among the businesses in both the pig meat and rye chains. Avoidance of uncertainty, appreciation of long-term orientation and independence were considered important factors in the business cultures.
Furthermore, trust, commitment and satisfaction in business partners were thought to be essential elements of business operations in all the respondent groups. In order to investigate which factors have an effect on the choice of a relationship type, several hypotheses were tested by using binary and multinomial logit analyses. According to these analyses it could be argued that avoidance of uncertainty and risk has a certain effect on the relationship type chosen, i.e. the willingness to avoid uncertainty increases the probability of choosing stable relationships, like repeated market transactions and formal written contracts, but not necessarily those which require high financial commitment (like financial participation arrangements). The probability of engaging in financial participation arrangements seemed to increase with long-term orientation. The hypotheses concerning the sustainability of the economic relations were tested by using a structural equation model (SEM). In the model, five variables were found to have a positive and statistically significant impact on the sustainable economic relationship construct. Ordered by their importance, those factors are: (i) communication quality, (ii) personal bonds, (iii) equal power distribution, (iv) local embeddedness and (v) competition.
Abstract:
The growth of high-performance applications in computer graphics, signal processing and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating-point dividers. Solutions available in the literature use large lookup tables for double-precision floating-point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double-precision floating-point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full-adder. The proposed divider reduces chip area by about 81% compared to the pipelined divider in [9], which is also based on a modified Taylor-series expansion.
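The general Taylor-series division scheme such dividers build on can be sketched numerically (an illustrative Python model with an arbitrary 8-bit seed table and 3 series terms; it is not the paper's fixed-point design): a small lookup table supplies a reciprocal seed 1/y0 for the divisor's leading bits, and writing y = y0(1 + e) gives x/y = x * (1/y0) * (1 - e + e^2 - ...).

```python
import math

TABLE_BITS = 8
# seed table: approximate reciprocals of the normalised divisor m in [1, 2)
SEED = [1.0 / (1.0 + (i + 0.5) / 2 ** TABLE_BITS) for i in range(2 ** TABLE_BITS)]

def divide(x, y, terms=3):
    m, exp = math.frexp(y)            # y = m * 2**exp with m in [0.5, 1)
    m, exp = m * 2.0, exp - 1         # renormalise so m is in [1, 2)
    recip0 = SEED[int((m - 1.0) * 2 ** TABLE_BITS)]   # table lookup: ~1/m
    e = m * recip0 - 1.0              # small residual, |e| <= 2**-(TABLE_BITS+1)
    series = sum((-e) ** k for k in range(terms))     # 1 - e + e^2 - ...
    return x * recip0 * series / 2 ** exp

print(divide(355.0, 113.0), 355.0 / 113.0)   # close agreement expected
```

A smaller table makes e larger, which is compensated by taking more series terms; trading table size against series depth is exactly the cost lever the abstract describes.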