71 results for power consumption


Relevance: 70.00%

Abstract:

In this letter, we consider wireless powered communication networks that can operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ a “harvest then transmit” mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit information back to the BS. Assuming time division multiple access (TDMA), we propose a novel transmission scheme for jointly optimal allocation of the BS broadcasting power and the time sharing among the wireless nodes, which maximizes the overall network throughput under constraints on the average and maximum transmit power at the BS. The proposed scheme significantly outperforms state-of-the-art schemes that employ only optimal time allocation. For the case of a single EH transmitter, we generalize the optimal solution to account for fixed circuit power consumption, which corresponds to a much more practical scenario.
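
To make the shape of the joint allocation problem concrete, here is a minimal numerical sketch of a harvest-then-transmit throughput maximization. The objective, harvesting efficiency, channel gains and power limits are all illustrative assumptions, not the letter's actual formulation or closed-form solution:

```python
import numpy as np
from scipy.optimize import minimize

eta, Pmax, Pavg = 0.6, 10.0, 4.0        # assumed harvesting efficiency, BS power limits
h = np.array([0.9, 0.5, 0.2])           # assumed (reciprocal) channel gains
K = len(h)

def neg_throughput(z):
    # z = [p0, tau0, tau_1..tau_K]: BS power, broadcast slot, uplink slots
    p0, tau0, tau = z[0], z[1], z[2:]
    energy = eta * h * p0 * tau0                        # energy each node harvests
    rate = tau * np.log2(1.0 + energy * h / np.maximum(tau, 1e-9))
    return -rate.sum()

cons = [
    {"type": "eq",   "fun": lambda z: z[1:].sum() - 1.0},   # slots fill the frame
    {"type": "ineq", "fun": lambda z: Pavg - z[0] * z[1]},   # average BS power cap
]
bounds = [(0.0, Pmax)] + [(1e-6, 1.0)] * (K + 1)            # peak power cap on p0
z0 = np.concatenate([[Pmax / 2, 0.5], np.full(K, 0.5 / K)])
res = minimize(neg_throughput, z0, bounds=bounds, constraints=cons, method="SLSQP")
print("p0, slots:", res.x.round(3), " throughput:", -res.fun)
```

The letter derives the structure of the optimal solution analytically; the numerical sketch only illustrates the coupled power/time-sharing trade-off it optimizes.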

Relevance: 70.00%

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This measure drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W extra power in our testbed desktop system.
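
A run-time mapping of this kind can be pictured as a small dispatcher that escalates to faster, hungrier implementations as the anomaly measure rises. The implementation table, thresholds and numbers below are invented for illustration; they are not the characterisation reported in the paper:

```python
from dataclasses import dataclass

@dataclass
class Impl:
    name: str
    watts: float        # characterised power draw
    latency_ms: float   # characterised processing time
    accuracy: float     # characterised detection accuracy

IMPLS = [
    Impl("HOG_CPU_lowpower", watts=8.0,  latency_ms=180.0, accuracy=0.81),
    Impl("HOG_FPGA",         watts=14.0, latency_ms=40.0,  accuracy=0.85),
    Impl("HOG_GPU",          watts=95.0, latency_ms=12.0,  accuracy=0.90),
]

def select_impl(anomaly: float, frame_budget_ms: float) -> Impl:
    """High behavioural anomaly: take the most accurate implementation that
    meets the frame budget; otherwise take the lowest-power one."""
    feasible = [i for i in IMPLS if i.latency_ms <= frame_budget_ms] or IMPLS
    if anomaly > 0.7:
        return max(feasible, key=lambda i: i.accuracy)
    return min(feasible, key=lambda i: i.watts)

print(select_impl(anomaly=0.9, frame_budget_ms=50).name)   # -> HOG_GPU
print(select_impl(anomaly=0.1, frame_budget_ms=200).name)  # -> HOG_CPU_lowpower
```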

Relevance: 60.00%

Abstract:

The string mode of operation for an electron beam ion source uses axially oscillating electrons to reduce power consumption; it also simplifies the construction by omitting the collector and its cooling requirements. Such a source has been called an electron string ion source (ESIS). We have started a project (supported by INTAS and GSI) to use Schottky field-emitting cathode tips for generating the electron string. The emission from these specially conditioned tips is higher by orders of magnitude than the focused Brillouin current density at magnetic fields of a few tesla and electron energies of a few keV. This may avoid the observed instabilities in the transition from axially oscillating electrons to the string state of the electron plasma, opening a much wider field of possible operating parameters for an ESIS. Besides presenting the basic features, in this paper we emphasize a method to avoid damage to the field-emission tip by backstreaming ions.

Relevance: 60.00%

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is increasingly in demand. For this, routers need the capability to distinguish and isolate traffic belonging to different flows; the ability to determine which flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification because of their nondeterministic performance. Although content addressable memories (CAMs) are favoured by vendors for their deterministic, high lookup rates, they suffer from high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms based on multilevel cutting of the classification space into smaller subspaces. The solution exploits the geometric distribution of rules in the classification space, and provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
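
The flavour of the cutting stage can be sketched as a small recursive procedure over a two-field rule space. The data layout and parameters here are illustrative assumptions, not the paper's algorithm (which also handles dynamic updates and CAM block sizing):

```python
def overlaps(rule, region):
    """Both are (lo_x, hi_x, lo_y, hi_y) boxes; a rule carries an action
    as a fifth field."""
    return not (rule[1] < region[0] or rule[0] > region[1] or
                rule[3] < region[2] or rule[2] > region[3])

def cut(rules, region, cam_size=4, depth=0, max_depth=8):
    """Recursively halve the wider dimension of the classification space
    until each leaf holds few enough rules for one small CAM block."""
    inside = [r for r in rules if overlaps(r, region)]
    if len(inside) <= cam_size or depth == max_depth:
        return {"region": region, "cam": inside}        # leaf -> CAM block
    x0, x1, y0, y1 = region
    if x1 - x0 >= y1 - y0:                              # cut the wider axis
        mid = (x0 + x1) // 2
        halves = [(x0, mid, y0, y1), (mid + 1, x1, y0, y1)]
    else:
        mid = (y0 + y1) // 2
        halves = [(x0, x1, y0, mid), (x0, x1, mid + 1, y1)]
    return {"region": region,
            "children": [cut(inside, half, cam_size, depth + 1, max_depth)
                         for half in halves]}
```

A lookup walks the decision tree to a leaf (deterministic depth), then searches that leaf's few rules in its CAM block in a single cycle, which is where the deterministic performance comes from.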

Relevance: 60.00%

Abstract:

The use of bit-level systolic array circuits as building blocks in the construction of larger word-level systolic systems is investigated. It is shown that the overall structure and detailed timing of such systems may be derived quite simply using the dependence graph and cut-set procedure developed by S. Y. Kung (1988). This provides an attractive and intuitive approach to the bit-level design of many VLSI signal processing components. The technique can be applied to ripple-through and partly pipelined circuits as well as fully systolic designs. It therefore provides a means of examining the relative tradeoff between levels of pipelining, chip area, power consumption, and throughput rate within a given VLSI design.
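
As a concrete picture of the kind of word-level pipelined structure such methods describe, here is a clock-by-clock software model of a transposed-form FIR filter, in which each cell holds a fixed weight and partial sums move one register per cycle (the input is broadcast, so strictly this is a semi-systolic design). It is an illustrative model of the dataflow, not a circuit from the paper:

```python
import numpy as np

def pipelined_fir(samples, weights):
    """Clock-by-clock model: each cell multiplies the broadcast input by its
    fixed weight and adds the partial sum arriving from its neighbour."""
    K = len(weights)
    regs = [0.0] * K                       # pipeline registers between cells
    out = []
    for x in samples:
        # all registers clock simultaneously: compute from the old state
        regs = [weights[k] * x + (regs[k + 1] if k + 1 < K else 0.0)
                for k in range(K)]
        out.append(regs[0])                # cell 0 emits the finished sum
    return out

x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, 0.25, 0.25]
assert np.allclose(pipelined_fir(x, w), np.convolve(x, w)[: len(x)])
```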

Relevance: 60.00%

Abstract:

In this paper, a novel configurable content addressable memory (CCAM) cell is proposed to increase the flexibility of embedded CAMs for SoC use. It can easily be configured as either a binary CAM (BiCAM) or a ternary CAM (TCAM) without a significant penalty in power consumption or search speed. A 64×128 CCAM array has been built and verified through simulation.
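
A software model makes the binary/ternary configurability easy to see (this models the matching semantics only, not the proposed cell's circuit): a TCAM entry stores a value plus a mask, and setting every mask bit degenerates the entry to a binary CAM entry.

```python
def tcam_lookup(entries, key):
    """entries: list of (value, mask, result); first match wins,
    as in a priority-encoded CAM."""
    for value, mask, result in entries:
        if (key & mask) == (value & mask):   # masked-out bits are "don't care"
            return result
    return None

entries = [
    (0b1010_0000, 0b1111_0000, "rule A"),    # ternary: low nibble is don't-care
    (0b1010_1100, 0b1111_1111, "rule B"),    # binary: all bits significant
]
print(tcam_lookup(entries, 0b1010_0111))     # -> "rule A"
```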

Relevance: 60.00%

Abstract:

This paper examines the applicability of a digital manufacturing framework to the implementation of a Value Driven Design (VDD) approach for the development of a stiffened composite panel. It presents a means by which environmental considerations can be integrated with conventional product and process design drivers within a customized digital environment. A composite forming process is used as an exemplar for the work, which creates a collaborative environment for integrating traditional design drivers with parameters related to manufacturability as well as more sustainable processes and products. The environmental stakeholder is introduced to the VDD process through a customized product/process/resource (PPR) environment in which application-specific power consumption and material waste data have been measured and characterised in the process design interface. This allows the manufacturing planner to treat power consumption as a concurrent design driver and enables the inclusion of energy as a parameter in a VDD approach to the development of efficiently manufactured, sustainable transport systems.
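
In VDD terms, "including energy as a parameter" means measured energy and waste enter the value function alongside conventional drivers. A stylised example (the attributes, weights and numbers are assumptions for illustration, not the paper's model):

```python
def panel_value(cost, cycle_time_s, energy_kwh, waste_kg,
                weights=(1.0, 0.02, 0.5, 0.8)):
    """A toy value function: higher is better; each attribute is a
    penalty scaled by a stakeholder-agreed weight."""
    w_cost, w_time, w_energy, w_waste = weights
    return -(w_cost * cost + w_time * cycle_time_s +
             w_energy * energy_kwh + w_waste * waste_kg)

# compare two candidate forming processes on the same panel
print(panel_value(cost=120, cycle_time_s=900,  energy_kwh=6.5, waste_kg=1.2))
print(panel_value(cost=110, cycle_time_s=1200, energy_kwh=9.0, waste_kg=0.8))
```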

Relevance: 60.00%

Abstract:

This paper proposes an optimisation of the adaptive Gaussian mixture background model that allows the method to be deployed on processors with low memory capacity. The effect of the granularity of the Gaussian mean and variance in an integer-based implementation is investigated, and novel updating rules for the mixture weights are described. Based on the proposed framework, an implementation for a very low-power microcontroller is presented. Results show that the proposed method operates in real time on the microcontroller with performance similar to the original model.
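
For intuition, here is an integer-only sketch of one adaptive-GMM update for a single pixel in the Stauffer–Grimson style. The fixed-point scaling, shift-based learning rate and thresholds are assumptions for illustration, not the paper's exact updating rules:

```python
FRAC = 8                                     # fractional bits (Q8 fixed point)

def update_pixel(modes, x, alpha_shift=5, t2=9, w_bg=180):
    """modes: list of [mean, var, weight] integers in Q8; x: 8-bit sample.
    Learning rate is 2**-alpha_shift, so every update is a shift, and all
    per-mode state fits in a few bytes for low-memory targets."""
    xq = x << FRAC
    for m in sorted(modes, key=lambda m: -m[2]):     # try heaviest mode first
        d = xq - m[0]
        if d * d <= t2 * (m[1] << FRAC):             # within sqrt(t2) std devs
            m[0] += d >> alpha_shift                 # drift mean toward sample
            m[1] += ((d * d >> FRAC) - m[1]) >> alpha_shift
            m[2] += ((255 << FRAC) - m[2]) >> alpha_shift
            for other in modes:                      # decay unmatched weights
                if other is not m:
                    other[2] -= other[2] >> alpha_shift
            return m[2] >= (w_bg << FRAC)            # matched a background mode?
    weakest = min(modes, key=lambda m: m[2])         # no mode matched:
    weakest[:] = [xq, 15 << FRAC, 1 << FRAC]         # recycle the weakest one
    return False
```

A pixel is labelled background when it matches a mode whose weight has grown past `w_bg`; everything else is foreground.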

Relevance: 60.00%

Abstract:

Wireless sensor node platforms are highly diverse and highly constrained, particularly in power consumption. When choosing or sizing a platform for a given application, it is necessary to evaluate the impact of those choices at an early design stage. Applied to the computing platform implemented on the sensor node, this requires a good understanding of the workload it must perform. That workload, however, is highly application-dependent: it varies with the data sampling frequency and with application-specific data processing and management. A model is therefore needed that can represent the workload of applications with diverse needs and characteristics. In this paper, we propose such a workload model for wireless sensor node computing platforms. The model is based on a synthetic application that mimics the computational tasks the platform performs to process sensor data, and it can represent many different applications by tuning the data sampling rate and the processing performed. A case study models several applications and shows how the model can be used for workload characterization.
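
A toy version of such a tunable synthetic workload (the task structure and cost figures are assumptions, not the paper's model) shows how sampling rate and per-sample processing drive the platform's load:

```python
def node_workload(sample_hz, ops_per_sample, window=10, ops_per_window=500,
                  duty_s=1.0):
    """Operations one duty cycle asks of the node's computing platform:
    per-sample processing plus a periodic aggregation task."""
    samples = int(sample_hz * duty_s)
    return samples * ops_per_sample + (samples // window) * ops_per_window

# e.g. a raw-forwarding application versus heavy in-node filtering
print(node_workload(sample_hz=10, ops_per_sample=50))      # light: 1000 ops
print(node_workload(sample_hz=200, ops_per_sample=4000))   # heavy: 810000 ops
```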

Relevance: 60.00%

Abstract:

There has been significant interest in retrodirective antennas, especially considering the wealth of applications that could be significantly enhanced, or created, by such technology. The potential is greatest where complicated automatic tracking systems would benefit from being replaced by much simpler ones: retrodirective array technology offers one solution pathway, since it can provide extremely fast tracking with relatively simple circuitry. Retrodirective, or self-steering, arrays are suited to low radio frequency (RF) power mobile terminal use, particularly on or between unstabilised vehicles. In such operational scenarios, high degrees of relative movement are expected, and the power consumption and weight of the antenna must be kept to a minimum. In this study, the authors give a brief historical review of basic retrodirective technology and elaborate on recent developments at Queen's University Belfast in retrodirective antenna technology relating to two-way communications, ultrafast radar, microwave imaging, spatial power transmission, mitigation of multipath effects and spatial encryption.
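
The self-steering principle is phase conjugation: retransmitting each element's received signal with conjugated phase steers the outgoing beam back toward the interrogator with no explicit tracking. An idealised, purely illustrative sketch for a uniform linear array:

```python
import numpy as np

N, d = 8, 0.5                         # elements, spacing in wavelengths
theta_src = np.deg2rad(25.0)          # interrogator direction
n = np.arange(N)
rx = np.exp(1j * 2 * np.pi * d * n * np.sin(theta_src))   # received phases
tx = np.conj(rx)                                          # phase conjugation

phi = np.deg2rad(np.linspace(-90, 90, 721))               # scan the far field
af = np.abs(tx @ np.exp(1j * 2 * np.pi * d * np.outer(n, np.sin(phi))))
print(f"retransmitted beam peaks at {np.rad2deg(phi[np.argmax(af)]):.1f} deg")
```

The conjugated phases cancel the path phases exactly in the source direction, so the array factor peaks at 25 degrees however the platform moves, which is why no servo tracking loop is needed.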

Relevance: 60.00%

Abstract:

A cognitive radio network is an intelligent wireless communication network that can adaptively reconfigure its communication parameters to meet the demands of the transmission network or the user. In this context, one way to utilize unused licensed spectrum without interfering with incumbent users is spectrum sensing. Owing to channel uncertainties, a single cognitive (opportunistic) user cannot make a decision reliably, so collaboration among multiple users is often required. Collaboration among a large number of users, however, tends to increase power consumption and introduces large communication overheads. In this paper, the number of collaborating users is optimized in order to maximize the probability of detection for any given power budget in a cognitive radio network, while satisfying constraints on the false alarm probability. We show that maximizing the probability of detection requires the collaboration of only a subset of the available opportunistic users. The robustness of the proposed spectrum sensing algorithm is also examined under flat Rayleigh fading and AWGN channel conditions.
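
A stylised model shows why only a subset of users should collaborate under a fixed power budget: adding users improves OR-rule fusion but dilutes each user's share of the budget. The functional forms below (equal power split, SNR proportional to power, a single-sample Rayleigh-fading ROC Pd = Pf^(1/(1+SNR))) are toy assumptions, not the paper's analysis:

```python
def detection_prob(n, p_total, qf_target=0.1):
    """Global detection probability with n collaborating users, OR-rule
    fusion and a fixed global false-alarm target."""
    p_user = p_total / n                      # budget diluted across users
    pf = 1 - (1 - qf_target) ** (1 / n)       # per-user false-alarm share
    snr = p_user                              # toy: SNR proportional to power
    pd = pf ** (1 / (1 + snr))                # toy Rayleigh-fading ROC
    return 1 - (1 - pd) ** n

qd = {n: detection_prob(n, p_total=10.0) for n in range(1, 21)}
best = max(qd, key=qd.get)
print(f"best subset size: {best} users (Qd = {qd[best]:.3f})")
```

Even in this toy the optimum is interior: past a point, each extra user costs more in diluted sensing power than it contributes through fusion.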

Relevance: 60.00%

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of the highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors Near the transistor Threshold Voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67% while compromising neither correctness nor performance.
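
The structure of such a scheme can be sketched with a Jacobi solver: iterations that still change the solution substantially run exactly ("above threshold"), while low-significance iterations run in a mode modelled here as injecting small arithmetic errors that the iteration absorbs ("NTV"). The significance metric, thresholds and error model are assumptions from the abstract's description, not the authors' code:

```python
import numpy as np

def jacobi_significance(A, b, iters=200, sig_thresh=1e-3, ntv_sigma=1e-4,
                        seed=0):
    """Significance of an iteration = its relative change to the solution."""
    rng = np.random.default_rng(seed)
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_new = (b - R @ x) / D
        significance = (np.linalg.norm(x_new - x)
                        / (np.linalg.norm(x_new) + 1e-12))
        if significance < sig_thresh:                  # low significance:
            # "NTV phase": many slow low-power cores, occasional small errors
            x_new += ntv_sigma * rng.standard_normal(x_new.shape)
        x = x_new
    return x

# diagonally dominant system, so Jacobi converges despite injected NTV noise
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi_significance(A, b), np.linalg.solve(A, b))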

Relevance: 60.00%

Abstract:

Polymer extrusion is regarded as an energy-intensive production process, and real-time monitoring of both energy consumption and melt quality has become necessary to meet new carbon regulations and to survive in the highly competitive plastics market. A power meter is a simple and easy way to monitor energy, but its cost can sometimes be high. Viscosity, on the other hand, is regarded as one of the key indicators of melt quality in the polymer extrusion process, yet it cannot be measured directly using current sensor technology. On-line, in-line or off-line rheometers are sometimes useful, but these instruments either introduce signal delay or restrict the flow of the extrusion process, making them unsuitable for real-time monitoring and control in practice. In this paper, simple and accurate real-time energy monitoring methods are developed by looking inside the controller and using control variables to calculate the power consumption. For viscosity monitoring, a ‘soft-sensor’ approach based on an RBF neural network model is developed; the model is obtained through two-stage selection and differential evolution, enabling compact and accurate solutions for viscosity monitoring. The proposed monitoring methods were tested and validated on a Killion KTS-100 extruder, and the experimental results show high accuracy compared with traditional monitoring approaches.
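
Minimal sketches of the two ideas, with all functional forms and process variables assumed for illustration (the paper's power calculation and trained RBF model are not reproduced here):

```python
import numpy as np

def drive_power_w(torque_pct, screw_rpm, rated_torque_nm=200.0):
    """Drive power from two controller variables (idealised, loss-free):
    P = torque x angular speed."""
    torque_nm = rated_torque_nm * torque_pct / 100.0
    return torque_nm * screw_rpm * 2.0 * np.pi / 60.0

def rbf_viscosity(x, centres, widths, weights, bias=0.0):
    """RBF-network 'soft sensor' mapping measured process variables
    (e.g. screw speed, melt temperature, pressure) to melt viscosity."""
    r2 = ((x - centres) ** 2).sum(axis=1)            # squared distances
    return weights @ np.exp(-r2 / (2.0 * widths ** 2)) + bias

# two-centre toy model: inputs are (screw rpm, melt temperature degC)
centres = np.array([[50.0, 180.0], [90.0, 200.0]])
print(drive_power_w(torque_pct=60.0, screw_rpm=75.0))
print(rbf_viscosity(np.array([60.0, 185.0]), centres,
                    widths=np.array([20.0, 20.0]),
                    weights=np.array([900.0, 400.0]), bias=100.0))
```

In the paper the centres, widths and weights are selected by the two-stage procedure and tuned by differential evolution; the sketch only shows the prediction form such a soft sensor takes.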

Relevance: 60.00%

Abstract:

This study explores the application of a two-stage electrokinetic washing system for the remediation of lead (Pb) contaminated soil. The process involved an initial soil washing followed by an electrokinetic step. Using an electrokinetic process in soil washing not only provided an additional driving force for transporting the desorbed Pb away from the soil but also reduced the high usage of wash solution. In this study, the effects of NaNO3, HNO3, citric acid and EDTA as wash solutions in the two-stage electrokinetic washing system were evaluated. The results revealed that the two-stage electrokinetic washing process enhanced Pb removal efficiency by 2.52-9.08% and 4.98-20.45% in comparison to a normal electrokinetic process and a normal washing process, respectively. Low pH and adequate current were the most important criteria in the removal process, as they provided superior desorption and transport properties. The chelating effect of EDTA was less dominant, as it delayed the removal process by forming a transport loop between Pb ions and complexes in the anode region. HNO3 was not suitable as a wash solution in electrokinetic washing, despite offering the highest removal efficiency, because it caused pH fluctuation in the cathode chamber, corroded the graphite anode and showed high power consumption. In contrast, citric acid not only yielded high Pb removal efficiency with low power consumption but also maintained a low soil:solution ratio of 1 g : <1 mL, stable pH and electrode integrity. Possible transport mechanisms for Pb under each wash solution are also discussed in this work.

Relevance: 60.00%

Abstract:

Massive multiple-input multiple-output (MIMO) systems are cellular networks whose base stations (BSs) are equipped with unconventionally many antennas, deployed in colocated or distributed arrays. Huge spatial degrees of freedom are achieved by coherent processing over these massive arrays, which provides strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure: the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today’s conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees of freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make the circuit power increase as √N, instead of linearly, by careful circuit-aware system design.
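
A Monte-Carlo toy (the channel and distortion model are assumptions, not the paper's analysis) illustrates the scaling-law message: if the per-antenna distortion grows only like √N, the post-combining rate still improves with N.

```python
import numpy as np
rng = np.random.default_rng(1)

def mrc_rate(N, p=1.0, kappa0=0.1, trials=2000):
    """Average uplink rate with maximum-ratio combining when the
    distortion-noise variance is allowed to grow like sqrt(N)."""
    rates = []
    for _ in range(trials):
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        g2 = np.sum(np.abs(h) ** 2)
        kappa2 = kappa0 * np.sqrt(N)              # imperfections worsen with N
        sinr = p * g2 / (1.0 + kappa2 * p)        # toy post-MRC SINR
        rates.append(np.log2(1.0 + sinr))
    return float(np.mean(rates))

for N in (16, 64, 256):
    print(N, round(mrc_rate(N), 2))               # rate keeps growing with N
```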