871 results for Electric power consumption - Reduction
Abstract:
This work presents a triple-mode sigma-delta modulator for three wireless standards, namely GSM, WCDMA and Bluetooth. A reconfigurable ADC is used to meet the wide bandwidth and high dynamic range requirements of multi-standard receivers at low power consumption. A highly linear sigma-delta ADC with reduced sensitivity to circuit imperfections was chosen for the design; this is particularly suitable for wideband applications where the oversampling ratio is low. Simulation results indicate that the modulator achieves a peak SNDR of 84/68/68 dB over a bandwidth of 0.2/3.84/1.5 MHz with an oversampling ratio of 128/8/8 in GSM/WCDMA/Bluetooth modes, respectively.
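The dynamic-range trade-off this abstract describes can be illustrated with the standard textbook approximation for the ideal peak SQNR of a sigma-delta modulator. This is a generic sketch, not the paper's model; the loop order and quantizer resolution below are assumptions chosen only to show why a low oversampling ratio forces other design changes.

```python
import math

def ideal_sqnr_db(order, bits, osr):
    """Ideal peak SQNR (dB) of an order-L, N-bit sigma-delta modulator
    under the standard white-quantization-noise approximation."""
    quantizer = 6.02 * bits + 1.76
    shaping = -10.0 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
    oversampling = 10.0 * (2 * order + 1) * math.log10(osr)
    return quantizer + shaping + oversampling

# A high OSR (narrowband GSM-like mode) buys dynamic range even with a
# low-order loop; at OSR = 8 (wideband modes) the same loop falls short,
# which is why reconfigurable designs change loop order or quantizer bits.
sqnr_narrowband = ideal_sqnr_db(order=2, bits=1, osr=128)
sqnr_wideband = ideal_sqnr_db(order=2, bits=1, osr=8)
```

The roughly 60 dB gap between the two settings shows how strongly the achievable resolution depends on the oversampling ratio alone.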
Abstract:
This paper presents a cascaded 2-2-2 reconfigurable sigma-delta modulator that can handle the GSM, WCDMA and WLAN standards. The modulator makes use of a low-distortion swing-suppression topology that is highly suitable for wideband applications. In GSM mode, only the first stage (a 2nd-order Σ-Δ ADC) is turned on, achieving 88 dB dynamic range with an oversampling ratio of 160 for a bandwidth of 200 kHz. In WCDMA mode, a 2-2 cascaded structure (4th order) is turned on, with 1-bit quantization in the first stage and 2-bit in the second, achieving 74 dB dynamic range with an oversampling ratio of 16 for a bandwidth of 2 MHz. In WLAN mode, a 2-2-2 cascaded MASH architecture with a 4-bit quantizer in the last stage achieves a dynamic range of 58 dB for a bandwidth of 20 MHz. The novelty lies in the fact that the unused blocks of the second and third stages can be switched off to reduce power consumption. The modulator is designed in TSMC 0.18 µm CMOS technology and operates from a 1.8 V supply.
Abstract:
The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
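The frequency scaling claimed above follows from the first-order model of adiabatic (quasistatic) charging: driving a capacitance C through resistance R with a ramp of duration T dissipates roughly (RC/T)·C·V², versus a speed-independent ½·C·V² for conventional CMOS. A minimal numeric sketch of this textbook relation (the R, C and V values are arbitrary illustrations, not figures from the SCRL paper):

```python
def adiabatic_energy_per_cycle(r_ohm, c_farad, v_volt, t_ramp_s):
    """Energy dissipated charging C through R with a slow ramp of duration
    T (T >> RC): E ~ (RC/T) * C * V^2, falling linearly as T grows."""
    return (r_ohm * c_farad / t_ramp_s) * c_farad * v_volt ** 2

def conventional_energy_per_cycle(c_farad, v_volt):
    """Conventional CMOS dissipates ~ 0.5 * C * V^2 per switching event,
    independent of how slowly the gate is clocked."""
    return 0.5 * c_farad * v_volt ** 2

# Halving the operating frequency (doubling T) halves the adiabatic energy
# per operation, so adiabatic power drops quadratically with frequency,
# while conventional CMOS power drops only linearly.
e_fast = adiabatic_energy_per_cycle(1e3, 1e-12, 1.8, 1e-9)
e_slow = adiabatic_energy_per_cycle(1e3, 1e-12, 1.8, 2e-9)
```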
Abstract:
The electronics industry is encountering thermal challenges and opportunities at length scales comparable to or much less than one micrometer. Examples include nanoscale phonon hotspots in transistors and the increasing temperature rise in on-chip interconnects. Millimeter-scale hotspots on microprocessors, resulting from varying rates of power consumption, are being addressed using two-phase microchannel heat sinks. Nanoscale thermal data storage technology has also received much attention recently. This paper provides an overview of these topics with a focus on related research at Stanford University.
Abstract:
The aim of this work was to evaluate the electric power consumption of the BECSA plant, located in the municipality of Palol de Revardit and dedicated to producing cured ham prepared for industrial slicing. First, the production process and the machinery were studied in order to determine the active and reactive power of each machine. From these power figures, together with a study of the machines' operating times, the active and reactive energy consumption of the process was determined. Based on the analysis of the data obtained, the electricity bills, the tariffs and the time-of-use surcharges, a series of improvements was proposed to reduce the plant's electric power costs.
Abstract:
Disintegration (repulping) is an important stage in the recovery of waste paper, since it has significant consequences for energy consumption and for the behaviour of the subsequent stages. The objectives therefore focus on analysing disintegration in terms of disintegration time, energy aspects, modelling of the disintegration machine used, and analysis of the calculated shear factors as a global measure of the forces involved in disintegration. The authors who have worked on this give different explanations for these forces. To date, it has only been possible to evaluate qualitatively the influence of each of the mechanisms on the time needed to disintegrate and on the energy consumption. The rheological characteristics of paper suspensions and their non-Newtonian behaviour have a clear influence on the energy consumption and the defibering forces in the disintegrator. The disintegration experiments were carried out in a conventional pulper with three types of recovered paper: high-quality coated paper printed offset (PQ), coated magazine paper printed in colour (PR), and white paper printed on a laser printer (PF). Analysis of disintegration time: For each of the papers studied (PQ, PR and PF), at mass fractions from 0.06 up to the maximum possible for each paper (0.14 to 0.18), and at two different agitation speeds, the disintegration time (tD) needed to reach a Sommerville index of 0.01% was determined. It was found that increasing the mass fraction decreases the disintegration time following a power law. The disintegration rate, the theoretical production of the pulper in each case, and their relation to the impact and friction forces that produce disintegration were studied.
Energy aspects: The specific energy consumption (SEC), defined as the energy consumed to disintegrate 1 kg of recovered paper, decreases greatly as Xm increases, since, in addition to the decrease in the energy consumed in each disintegration, the paper content is higher. For the design of disintegrators, it must be borne in mind that increasing Xm and increasing the speed always increases the power drawn. However, while the benefits of working at high Xm are of the order of 10 times in terms of SEC and production, the increase in power is only of the order of 2 times that needed at low Xm. Apparent viscosity and fluidization energy: The relation between the disintegration time, the friction forces and apparent-viscosity values from the literature was studied. For each paper and speed, the specific energy consumption was observed to decrease as a function of the apparent viscosity. Pulper rheology: Using the method of Metzner and Otto (1957) for determining the mean apparent viscosity of paper suspensions, as modified by Roustan, the pulper was characterized by means of the model Np = K·Re^x·Fr^y. Glycerine solutions were used as a Newtonian fluid to calculate the fitting constants and, from there, to isolate the apparent viscosity as a function of the net power and the agitation parameters. Following Fabry (1999), the apparent viscosity is replaced by the concept of the shear factor. Shear factor: Once the shear factor had been calculated for each recovered paper and set of agitation conditions, it was related to Xm, SEC, tD, power consumption, installed power and cellulosic fraction. The shear factor is a useful parameter for quantifying the global forces involved in disintegration.
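The SEC definition above (energy consumed per kg of recovered paper) can be sketched directly. The numbers below are purely illustrative, not the study's measurements; they only show how a roughly 2x increase in drawn power at high Xm can still yield an order-of-magnitude SEC reduction, because the disintegration time falls and far more paper is processed per batch.

```python
def specific_energy_consumption(net_power_w, disintegration_time_s,
                                suspension_mass_kg, mass_fraction):
    """SEC in kJ per kg of paper: net energy drawn during disintegration
    divided by the dry-paper mass contained in the suspension."""
    paper_mass_kg = mass_fraction * suspension_mass_kg
    return net_power_w * disintegration_time_s / paper_mass_kg / 1000.0

# Illustrative (assumed) operating points at low and high mass fraction Xm:
sec_low_xm = specific_energy_consumption(1000.0, 1800.0, 50.0, 0.06)
sec_high_xm = specific_energy_consumption(2000.0, 300.0, 50.0, 0.16)
```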
Abstract:
Deep Brain Stimulator devices are becoming widely used for therapeutic benefit in movement disorders such as Parkinson's disease. Prolonging the battery life span of such devices could dramatically reduce the risks and cumulative costs associated with surgical replacement. This paper demonstrates how an artificial neural network can be trained, using frequency-analysis pre-processing of deep brain electrode recordings, to detect the onset of tremor in Parkinsonian patients. Implementing this solution in an 'intelligent' neurostimulator device would remove the need for the continuous stimulation currently used and open up the possibility of demand-driven stimulation. Such a methodology could potentially decrease the power consumption of a deep brain pulse generator.
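As a toy illustration of the frequency-analysis pre-processing step described above, the sketch below extracts the power in an assumed 3-7 Hz Parkinsonian tremor band and applies a fixed threshold. This stands in for the paper's trained neural network; the sampling rate, band edges and threshold are all assumptions for demonstration only.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of the DFT bins falling inside [f_lo, f_hi] Hz (naive DFT,
    adequate for the short windows a low-power implant would process)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

def tremor_likely(signal, fs, threshold=1.0):
    """Toy stand-in for the trained network: flag tremor when power in the
    assumed 3-7 Hz tremor band exceeds a fixed threshold."""
    return band_power(signal, fs, 3.0, 7.0) > threshold

# Synthetic recordings: low-amplitude 1 Hz background vs a 5 Hz oscillation.
fs = 100.0  # assumed sampling rate, Hz
n = 200
rest = [0.05 * math.sin(2 * math.pi * 1.0 * t / fs) for t in range(n)]
tremor = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(n)]
```

In a demand-driven stimulator, a detector of this kind would gate the pulse generator on only when tremor onset is flagged.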
Abstract:
This paper presents a clocking pipeline technique referred to as a single-pulse pipeline (PP-Pipeline) and applies it to the problem of mapping pipelined circuits to a Field Programmable Gate Array (FPGA). A PP-pipeline replicates the operation of asynchronous micropipelined control mechanisms using synchronous-orientated logic resources commonly found in FPGA devices. Consequently, circuits with an asynchronous-like pipeline operation can be efficiently synthesized using a synchronous design methodology. The technique can be extended to include data-completion circuitry to take advantage of variable data-completion processing time in synchronous pipelined designs. It is also shown that the PP-pipeline reduces the clock tree power consumption of pipelined circuits. These potential applications are demonstrated by post-synthesis simulation of FPGA circuits. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a unique two-stage image restoration framework intended for a novel rectangular poor-pixels detector which, owing to its miniature size, light weight and low power consumption, is of great value in micro vision systems. To meet the demand for fast processing, only a few measured images, shifted at the subpixel level, are needed for the fusion operation, fewer than traditional approaches require. A preliminary restored image is linearly interpolated by maximum likelihood estimation with a least squares method. After noise removal via Canny-operator-based level set evolution, the final high-quality restored image is obtained. Experimental results demonstrate the effectiveness of the proposed framework, a sensible step towards subsequent image understanding and object identification.
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the inaccuracy of high-level, top-down modeling efforts and to increase result accuracy, the focus was placed on device details and data routes. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of server efficiency and utilization methods.
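The model components listed above (data-center share, per-hop network transfer, end-user browsing/downloading) suggest a simple first-order energy accounting. This is a hedged sketch of that structure only; the function name and every coefficient below are illustrative assumptions, not the study's measured values.

```python
def download_energy_wh(file_gb, hops, server_wh_per_gb,
                       router_wh_per_gb_per_hop, client_power_w,
                       download_time_h):
    """First-order ESD energy model: data-center share, plus per-hop
    network transfer, plus the user's device during browsing/download."""
    server = file_gb * server_wh_per_gb
    network = file_gb * router_wh_per_gb_per_hop * hops
    client = client_power_w * download_time_h
    return server + network + client

# Purely illustrative coefficients for a 1 GB download over 12 data hops:
e_total = download_energy_wh(file_gb=1.0, hops=12, server_wh_per_gb=5.0,
                             router_wh_per_gb_per_hop=1.5,
                             client_power_w=60.0, download_time_h=0.25)
```

Even in this toy form, the hop count and device efficiencies dominate the result, which mirrors the study's emphasis on data routes and server utilization.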
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
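The core idea behind the parallel data-analysis approaches mentioned above is data partitioning: mine each partition independently, then merge the partial results. A minimal sketch of that partition/merge pattern, run sequentially here for simplicity; in a Grid or Cloud deployment each partition would be mined on a separate worker node (the transaction data is invented for illustration).

```python
from functools import reduce

def count_items(chunk):
    """Local (per-node) mining step: count item occurrences in one
    partition of the transaction data."""
    counts = {}
    for transaction in chunk:
        for item in transaction:
            counts[item] = counts.get(item, 0) + 1
    return counts

def merge(acc, partial):
    """Global reduce step: combine the partial counts from every node."""
    for item, n in partial.items():
        acc[item] = acc.get(item, 0) + n
    return acc

data = [["bread", "milk"], ["bread"], ["milk", "eggs"], ["bread", "eggs"]]
partitions = [data[:2], data[2:]]  # stand-in for partitions on two nodes
totals = reduce(merge, map(count_items, partitions), {})
```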
Abstract:
The Tcl/Tk scripting language has become the de facto standard for EDA tools. This paper explains how to start working with Tcl/Tk using simple examples. Two complete applications are presented to show the capabilities of the language in more detail: one script automates the measurement of the average power consumption of a digital system, and a second creates a virtual display driven by the simulation of a graphics card.
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. 
We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
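The adaptive-sampling idea described above, shortening the sampling interval when a forecast or the local state estimate suggests an abrupt change is likely, can be sketched as a simple interpolation policy. This is a hypothetical illustration of the control logic, not the deployed node's algorithm; the function name, intervals and urgency rule are all assumptions.

```python
def next_sampling_interval_s(base_interval_s, min_interval_s,
                             rain_prob, level_trend):
    """Shorten the sampling interval as either the forecast probability of
    precipitation or the locally estimated rate of change of water level
    grows; otherwise fall back to the energy-saving base interval."""
    urgency = max(rain_prob, min(abs(level_trend), 1.0))
    interval = base_interval_s * (1.0 - urgency) + min_interval_s * urgency
    return max(min_interval_s, interval)

# Dry weather, stable level: sample hourly to conserve energy and reagents.
quiet = next_sampling_interval_s(3600.0, 60.0, rain_prob=0.0, level_trend=0.0)
# 90% rain forecast: sample every few minutes to capture the rising limb
# of the hydrograph.
storm = next_sampling_interval_s(3600.0, 60.0, rain_prob=0.9, level_trend=0.2)
```

A policy of this shape is what lets the node capture abrupt events while keeping the average sampling rate, and hence energy and reagent use, low.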
Abstract:
Since electric power is an essential element of modern society, this paper analyzes the historic and institutional factors that contributed to the formation and organization of the Brazilian electric sector, from the time electricity first came into use in the country until the end of 2002. The analysis is based on a linear description of historic facts, with emphasis on crucial events, or critical incidents, as they are called in this paper. For these events, the social actors who played an important role in the development of the Brazilian electric power sector were analyzed, using an analytical model based on the theoretical references of Institutional Theory. The study also highlights the elements that make up the development of the phenomenon in the face of the ambivalence existing in a developing country, which is the case of the Brazilian electric power sector. The organizational fields established at the times marked by the main critical incidents are presented throughout the period covered by the study. The resources that the main social actors in the electric power sector may draw on are also identified, as well as their main interests and the level of influence these actors may have. Several documents were analyzed using a qualitative methodology and, for reliability, many semi-structured in-depth interviews were conducted with the people who made the history of this sector. Finally, the study identifies the main elements that have shaped the institutional model of the Brazilian electric sector, and characterizes the external environment as the element that has most influenced the sector and guided it through its different developmental phases, especially with respect to funding. The growing rates of power consumption indicate the need for a constant increase in the supply of electric power to meet the needs of society and of economic development.
This requires constant investment. Lack of investment is a limiting factor: not only does it hinder the development of the country, but it may also result in very unfortunate mishaps such as the electric power rationing the country had to endure a while ago.
Abstract:
This work aims to study and analyze strategies and measures to improve the energy performance of residential and service buildings, in order to minimize energy losses and energy consumption. Owing to the high energy dependence of the European Union (EU), including Portugal and Slovenia, and to the high share of energy consumption attributable to the building sector, strategies with ambitious goals had to be adopted at the European level. This forced EU Member States to take measures to achieve the proposed targets for the reduction of energy consumption. To this end, EU Member States adapted their laws to their needs and formed specialized agencies and qualified experts in energy certification, who evaluate buildings according to their performance. This study examines the external characteristics of the building with respect to its thermal needs and, from there, surveys the existing and possible constructive solutions for the envelope, in order to increase comfort and reduce the need for mechanical air-conditioning systems. The possibility of passive heating and ventilation systems is also discussed; these techniques are developed in parallel with the siting and design of the building. In this manner, various techniques and technologies exploit natural resources to reduce energy consumption, and more sustainable and efficient buildings, the so-called Green Buildings, have appeared. The study ends with the identification of measures used in several buildings, demonstrating the economic return in the medium to long term, as well as the satisfaction of their users.