932 results for "Implementation cost"
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Realising memory-intensive applications such as image and video processing on FPGAs requires the creation of complex, multi-level memory hierarchies to achieve real-time performance; however, commercial High Level Synthesis tools are unable to derive such structures automatically and hence cannot meet the demanding bandwidth and capacity constraints of these applications. Current approaches to this problem derive either single-level memory structures or very deep, highly inefficient hierarchies, leading in either case to high implementation cost, low performance, or both. This paper presents an enhancement to an existing MC-HLS synthesis approach that solves this problem: it exploits and eliminates data duplication at multiple levels of the generated hierarchy, leading to a reduction in the number of levels and ultimately to higher-performance, lower-cost implementations. When applied to the synthesis of C-based Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications, this enables reductions in Block RAM and Look Up Table (LUT) cost of up to 25%, whilst simultaneously increasing throughput.
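The kind of on-chip data reuse such memory hierarchies provide can be illustrated in software. Below is a minimal sketch (not the paper's MC-HLS output) of a 3-row line buffer for the horizontal Sobel gradient: the sliding window reads from buffered rows, so each input row is fetched from external memory only once.

```python
def sobel_with_line_buffer(image):
    """Horizontal Sobel (Gx) using a 3-row line buffer: each input row
    is read exactly once, and the 3x3 window slides over buffered rows."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    buf = [image[0], image[0], image[0]]  # line buffer holds 3 rows (edge clamped)
    for y in range(h):
        # Shift one new row into the buffer; the two reused rows stay on-chip.
        buf = [buf[1], buf[2], image[min(y + 1, h - 1)]]
        for x in range(1, w - 1):
            gx = (buf[0][x - 1] + 2 * buf[1][x - 1] + buf[2][x - 1]
                  - buf[0][x + 1] - 2 * buf[1][x + 1] - buf[2][x + 1])
            out[y][x] = gx
    return out

# A simple horizontal ramp: the gradient magnitude is constant inside the image.
print(sobel_with_line_buffer([[0, 1, 2, 3]] * 3)[1][1])  # → -8
```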
Abstract:
This edition of the Bulletin is based on a document prepared by ECLAC and the Technical Coordination Committee of the presidential initiative for Regional Infrastructure Integration in South America (IIRSA), which is composed of the Inter-American Development Bank (IDB), the Andean Development Corporation (ADC) and the Financial Fund for the Development of the River Plate Basin (FONPLATA). The document was prepared as a joint activity on maritime and port security in South America in the context of the IIRSA sectoral integration process in relation to operational systems for maritime transport. It served as an input for the meeting on that subject held by representatives of the authorities of the South American countries in Montevideo, Uruguay, on 22 June 2004. This edition presents the results of the implementation cost assessment for the new compulsory regulations for maritime and port security of the International Maritime Organization (IMO) and also considers the costs of the voluntary measures.
Abstract:
Through the application of novel signal processing techniques we are able to measure physical measurands with both high accuracy and low noise susceptibility. The first interrogation scheme is based upon a CCD spectrometer. We compare different algorithms for resolving the Bragg wavelength from a low resolution discrete representation of the reflected spectrum, and present optimal processing methods for providing a high integrity measurement from the reflection image. Our second sensing scheme uses a novel network of sensors to measure the distributive strain response of a mechanical system. Using neural network processing methods we demonstrate the measurement capabilities of a scalable low-cost fibre Bragg grating sensor network. This network has been shown to be comparable with the performance of existing fibre Bragg grating sensing techniques, at a greatly reduced implementation cost.
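The abstract compares algorithms for resolving the Bragg wavelength from a low-resolution, discrete spectrum; one common baseline is the intensity-weighted centroid of the reflection peak. A minimal sketch, with illustrative (not measured) data:

```python
import numpy as np

def centroid_wavelength(wavelengths, intensities, threshold=0.5):
    """Estimate the Bragg wavelength as the intensity-weighted centroid
    of the samples above a fractional threshold of the peak."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    mask = intensities >= threshold * intensities.max()
    w = intensities[mask]
    return float(np.sum(wavelengths[mask] * w) / np.sum(w))

# Coarse samples of a synthetic reflection peak centred at 1550.2 nm
wl = np.linspace(1549.0, 1551.0, 9)
refl = np.exp(-((wl - 1550.2) / 0.3) ** 2)
print(centroid_wavelength(wl, refl))  # close to the true peak at 1550.2 nm
```

With only 9 samples across 2 nm, the centroid still recovers the peak to well within one sample spacing, which is why such sub-pixel methods suit low-resolution CCD spectrometers.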
Abstract:
Purpose of review: To describe articles published since January 2013 that include information on how costs change with infection prevention efforts. Recent findings: Three articles described only the costs imposed by nosocomial infection and so provided limited information about whether infection prevention efforts should be changed. One article described the costs of supplying alcohol-based hand rub in low-income countries. Eight articles showed the extra costs and cost savings from changing infection prevention programmes and discussed the health benefits; all concluded that the changes are economically worthwhile. There was also a systematic review of the costs of methicillin-resistant Staphylococcus aureus control programmes and a methods article on how to make cost estimates for infection prevention programmes. Summary: The balance has shifted away from studies that report the high cost of nosocomial infections toward articles that address the value for money of infection prevention. This is a welcome shift, as simply showing that a disease is costly does not inform decisions about reducing it. More well-conducted research on implementation costs, cost savings and changes to health benefits is needed, as many gaps remain in our knowledge.
Abstract:
Using the spatial modulation approach, where only one transmit antenna is active at a time, we propose two transmission schemes for the two-way relay channel using physical-layer network coding with space-time coding based on coordinate interleaved orthogonal designs (CIODs). It is shown that using two uncorrelated transmit antennas at the nodes, but only one RF transmit chain, with space-time coding across these antennas gives better performance without using any extra resources and without increasing hardware implementation cost or complexity. In the first transmission scheme, two antennas are used only at the relay; adaptive network coding (ANC) is employed at the relay, and the relay transmits a CIOD space-time block code (STBC). This gives better performance than an existing ANC scheme for the two-way relay channel which uses one antenna at each of the three nodes. It is shown that for this scheme, at high SNR, the average end-to-end symbol error probability (SEP) is upper bounded by twice the SEP of a point-to-point fading channel. In the second transmission scheme, two transmit antennas are used at all three nodes, and CIOD STBCs are transmitted in the multiple-access and broadcast phases. This scheme provides a diversity order of two for the average end-to-end SEP, with an increased decoding complexity of O(M^3) for an arbitrary signal set and O(M^2 √M) for a square QAM signal set. Simulation results show that the proposed schemes perform better than existing ANC schemes under both perfect and imperfect channel state information.
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies.
The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
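The classification step implied above can be sketched in a few lines. This is an illustrative model, not the paper's implementation, and the size threshold and names are assumptions: the access-router side counts per-flow bytes to decide the class, while the core-router side keeps only the two per-class queues.

```python
from collections import deque

SHORT_FLOW_BYTES = 100_000  # illustrative threshold, not taken from the paper

class ClassBasedRouter:
    """Two service queues, one for short flows and one for long flows.
    Per-flow state lives only at the access side; the core side needs
    just per-class state (the two queues)."""
    def __init__(self):
        self.flow_bytes = {}  # access-router side: bytes seen per flow
        self.queues = {"short": deque(), "long": deque()}  # core side

    def enqueue(self, flow_id, packet_len):
        sent = self.flow_bytes.get(flow_id, 0) + packet_len
        self.flow_bytes[flow_id] = sent
        cls = "short" if sent <= SHORT_FLOW_BYTES else "long"
        self.queues[cls].append((flow_id, packet_len))
        return cls

router = ClassBasedRouter()
print(router.enqueue("web-req", 1_500))    # → short
print(router.enqueue("bulk-dl", 200_000))  # → long
```

Because a flow's class depends only on its cumulative size, a long transfer naturally migrates from the short queue to the long queue once it crosses the threshold, which is what protects short interactive flows from bulk traffic.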
Abstract:
Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
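A minimal sketch of such an XOR-based index function, with hypothetical masks (the paper's generated masks are benchmark-specific): each bit of the set index is the parity of the address bits selected by one mask, here at most two bits per XOR as the paper recommends.

```python
def xor_index(address, masks):
    """Compute a randomized cache set index: bit i of the index is the
    XOR (parity) of the address bits selected by masks[i]."""
    index = 0
    for i, mask in enumerate(masks):
        parity = bin(address & mask).count("1") & 1  # XOR of selected bits
        index |= parity << i
    return index

# Hypothetical masks for a 16-set (m = 4) cache: each index bit XORs
# at most two address bits, matching the two-input constraint above.
masks = [0b000010001, 0b000100010, 0b001000100, 0b010001000]
print(xor_index(0b010001001, masks))  # → 1
```

In hardware, each index bit is then just one two-input XOR gate on the address lines, which is why limiting the inputs per XOR keeps the implementation cost low.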
Abstract:
The determination of the diameter of an interconnection network is essential in evaluating the performance of the network. The parallelogramic honeycomb torus is an attractive alternative to the classical torus network due to its smaller vertex degree and, hence, lower implementation cost. In this paper, we present an expression for the diameter of a parallelogramic honeycomb torus, which extends a known result about the rhombic honeycomb torus.
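For networks without a closed-form expression, the diameter is simply the largest shortest-path distance over all vertex pairs. A generic BFS sketch of that definition, using a 6-cycle as a toy stand-in (constructing the honeycomb torus adjacency itself is omitted here):

```python
from collections import deque

def diameter(adj):
    """Diameter of an unweighted graph: the maximum shortest-path
    distance over all vertex pairs, found by BFS from every vertex."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

# A 6-cycle: opposite vertices are 3 hops apart, so the diameter is 3.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(diameter(ring))  # → 3
```

This brute-force check runs in O(V·E) time, which is why closed-form diameter expressions like the one derived in the paper are valuable for large networks.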
Abstract:
Sleep has emerged in the past decades as a key process for memory consolidation and restructuring. Given the universality of sleep across cultures, the need to reduce educational inequality, the low implementation cost of a sleep-based pedagogy, and its global scalability, it is surprising that the potential of improved sleep as a means of enhancing school education has remained largely unexploited. Students of various socio-economic status often suffer from sleep deficits. In principle, the optimization of sleep schedules both before and after classes should produce large positive benefits for learning. Here we review the biological and psychological phenomena underlying the cognitive role of sleep, present the few published studies on sleep and learning that have been performed in schools, and discuss potential applications of sleep to the school setting. Translational research on sleep and learning has never seemed more appropriate.
Abstract:
Many companies are adopting ERP systems for various reasons, such as disappointment with incompatible systems, the inability of the Information Technology department to integrate the systems currently in place at the company, and other factors that directly affect the company's competitiveness. In this context, this article presents the main characteristics of ERP systems, their advantages and disadvantages, as well as the costs involved in their implementation. Finally, trends and the future of ERP systems are discussed.
Abstract:
The aim of this work was to develop a detailed econometric analysis comparing a constructed wetland system (combined model) and a waste stabilization pond system (facultative pond) as a function of six different sizes of finishing pig farms and two waste management systems (wet and dry). The constructed wetland system using dry waste management showed the best economic results, owing to its low annual implementation cost both per animal and per kilogram of meat. This system also required the smallest area for waste treatment. The stabilization pond with a wet waste management system showed a lower annual implementation cost per animal and per kilogram of meat, but it required large areas. The econometric analysis of both wastewater treatment systems revealed an economy of scale.
Abstract:
This work presents an alternative approach, based on a neural network method, to estimate the speed of induction motors using measurements of primary variables such as voltage and current. Induction motors are very common in many sectors of industry and play an important role in national energy policy. Current methodologies used in the diagnosis, condition monitoring and dimensioning of these motors are based on measurement of the speed variable. However, direct measurement of this variable compromises the control system and starting circuit of the electrical machinery, reducing its robustness and increasing implementation costs. Simulation results and experimental data are presented to validate the proposed approach.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)