991 results for Efficient implementation
Abstract:
Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed that significantly reduces this variation for a particular pyramidal blurring technique. Moreover, the pyramidal blur algorithm is generalized to allow for a continuous variation of the blur width. Furthermore, an efficient implementation for programmable graphics hardware is presented. The proposed method is named “quasi-convolution pyramidal blurring” since the resulting effect is very close to image blurring based on a convolution filter for many applications.
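A minimal NumPy/SciPy sketch of the general pyramidal-blurring idea described above (not the paper's quasi-convolution analysis filter): the image is repeatedly filtered and downsampled, and a continuous blur width is obtained by interpolating between the two pyramid levels that bracket the requested width. The 3-tap kernel, level count and function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def build_pyramid(img, levels, kernel=(0.25, 0.5, 0.25)):
    """Analysis stage: repeatedly filter with a small separable kernel and
    subsample by 2.  The 3-tap binomial kernel is an illustrative choice,
    not the quasi-convolution analysis filter designed in the paper."""
    k = np.asarray(kernel)
    pyr = [img.astype(float)]
    for _ in range(levels):
        low = ndimage.convolve1d(pyr[-1], k, axis=0, mode='reflect')
        low = ndimage.convolve1d(low, k, axis=1, mode='reflect')
        pyr.append(low[::2, ::2])
    return pyr

def pyramid_blur(img, blur_level, levels=6):
    """Continuous blur width: upsample the two pyramid levels bracketing the
    fractional level back to full resolution and blend between them."""
    pyr = build_pyramid(img, levels)
    lo, hi = int(np.floor(blur_level)), int(np.ceil(blur_level))
    t = blur_level - lo
    up = [ndimage.zoom(pyr[l], [s / d for s, d in zip(img.shape, pyr[l].shape)],
                       order=1) for l in (lo, hi)]
    return (1.0 - t) * up[0] + t * up[1]

# blurred = pyramid_blur(np.random.rand(256, 256), blur_level=2.4)
```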
Abstract:
This paper introduces an area- and power-efficient approach for the compressive recording of cortical signals in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The area cost of implementing compressive sensing in a multichannel system can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed that exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that with this method the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
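As a rough illustration of the compressive-sensing principle this design builds on (not the proposed multichannel, spatial-sparsity circuit), the sketch below compresses a sparse signal fourfold with a random measurement matrix and recovers it with orthogonal matching pursuit; all dimensions, the noise level and the sparsity level are arbitrary assumptions.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the k columns of Phi most
    correlated with the residual, then least-squares fit on that support."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                               # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix
y = Phi @ x_true + 0.005 * rng.standard_normal(m)  # fourfold-compressed, noisy samples
x_hat = omp(Phi, y, k)
snr = 10 * np.log10(np.sum(x_true**2) / np.sum((x_true - x_hat)**2))
print("reconstruction SNR (dB):", round(snr, 1))
```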
Abstract:
The analysis of complex nonlinear systems is often carried out using simpler piecewise linear representations of them. A principled and practical technique is proposed to linearize and evaluate arbitrary continuous nonlinear functions using polygonal (continuous piecewise linear) models under the L1 norm. A thorough error analysis is developed to guide an optimal design of two kinds of polygonal approximations in the asymptotic case of a large budget of evaluation subintervals N. The method allows the user to obtain the level of linearization (N) for a target approximation error and vice versa. It is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), allowing real-time performance of computationally demanding applications. The quality and efficiency of the technique have been measured in detail on two nonlinear functions that are widely used in many areas of scientific computing and are expensive to evaluate.
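A small numerical sketch of the basic idea (using a uniform partition rather than the optimized designs analysed in the paper): build the interpolating polygonal model of a function on N subintervals and estimate its L1 error, which for smooth functions decays roughly as 1/N². The example function and interval are illustrative assumptions.

```python
import numpy as np

def polygonal_l1_error(f, a, b, n_segments, n_probe=200_001):
    """L1 error of the continuous piecewise linear interpolant of f built on a
    uniform partition of [a, b] into n_segments subintervals."""
    knots = np.linspace(a, b, n_segments + 1)
    xs = np.linspace(a, b, n_probe)
    approx = np.interp(xs, knots, f(knots))            # polygonal model
    return np.mean(np.abs(f(xs) - approx)) * (b - a)   # numerical L1 norm

f = lambda x: np.exp(-x**2)             # an example of an expensive nonlinear function
for n in (16, 32, 64, 128):
    print(n, polygonal_l1_error(f, -4.0, 4.0, n))      # error shrinks ~4x per doubling
```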
Abstract:
Planning sustainable urban mobility is a complex task involving a high degree of uncertainty due to the long-term planning horizon, the wide spectrum of potential policy packages, the need for effective and efficient implementation, the large geographical scale, the necessity to consider economic, social, and environmental goals, and the traveller's response to the various courses of action and their political acceptability (Shiftan et al., 2003). Moreover, with the inevitable trends in motorisation and urbanisation, the demand for land and mobility in cities is growing dramatically. Consequently, the problems of traffic congestion, environmental deterioration, air pollution, energy consumption, community inequity, etc., are becoming more and more critical for society (EU, 2011). Certainly, this course is not sustainable in the long term. To address this challenge and achieve sustainable development, a long-term strategic urban plan, with its potentially important implications, should be established. This thesis contributes to the long-term assessment of urban mobility by establishing an innovative methodology for optimizing and evaluating two types of transport demand management (TDM) measures. The new methodology relaxes the utility-based decision-making assumption by embedding anticipated-regret and combined utility-regret decision mechanisms in an integrated transport planning framework. The proposed methodology comprises two major aspects: 1) Construction of policy scenarios with a single measure or combined TDM policy packages using a survey method that incorporates regret theory. The purpose of building the TDM scenarios in this work is to address the specific implementation of each TDM measure in terms of time frame and geographic scale. In total, 13 TDM scenarios are built in terms of the most desirable, the most expected and the least-regret choice by means of a two-round Delphi-based expert survey. 2) Development of a combined utility-regret assessment framework based on multicriteria decision analysis (MCDA). This framework is used to compare the contribution of the TDM scenarios towards sustainable mobility and to determine the best scenario, considering not only the objective utility value obtained from the utility-based MCDA, but also a regret value calculated via a regret-based MCDA. The objective function of the utility-based MCDA is integrated in a land use and transport interaction model and is used for optimizing and assessing the long-term impacts of the constructed TDM scenarios. A regret-based model, called the reference-dependent regret model (RDRM), is adapted to analyse the contribution of each TDM scenario from a subjective point of view. The proposed methodology is implemented and validated in a case study of Madrid. It defines a comprehensive technical procedure for assessing the strategic effects of transport demand management measures, and is considered a useful, transparent and flexible planning tool for both planners and decision-makers.
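The following toy sketch illustrates the kind of combined utility-regret scoring described above, using a plain weighted-sum utility and a simple weighted max-regret computed against the best attainable value per criterion; the impact table and weights are invented, and this schematic stand-in is not the thesis's reference-dependent regret model (RDRM) or its land-use/transport integration.

```python
import numpy as np

# Hypothetical impact table: rows = TDM scenarios, columns = sustainability
# criteria, already normalised to [0, 1] (higher is better).
impacts = np.array([
    [0.62, 0.40, 0.55],     # e.g. a pricing-oriented scenario
    [0.48, 0.70, 0.60],     # e.g. a parking-management scenario
    [0.55, 0.52, 0.75],     # e.g. a combined policy package
])
weights = np.array([0.4, 0.3, 0.3])            # assumed criterion weights

utility = impacts @ weights                    # utility-based MCDA score
best_per_criterion = impacts.max(axis=0)       # reference points per criterion
regret = ((best_per_criterion - impacts) * weights).max(axis=1)   # weighted max regret

print("utility:", np.round(utility, 3))
print("regret :", np.round(regret, 3))
print("best by utility:", int(np.argmax(utility)), "| least regret:", int(np.argmin(regret)))
```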
Abstract:
This paper analyzes issues which appear when supporting pruning operators in tabled LP. A version of the once/1 control predicate tailored for tabled predicates is presented, and an implementation analyzed and evaluated. Using once/1 with answer-on-demand strategies makes it possible to avoid computing unneeded solutions for problems which can benefit from tabled LP but in which only a single solution is needed, such as model checking and planning. The proposed version of once/1 is also directly applicable to the efficient implementation of other optimizations, such as early completion, cut-fail loops (to, e.g., prune at the top level), if-then-else, and constraint-based branch-and-bound optimization. Although once/1 still presents open issues such as dependencies of tabled solutions on program history, our experimental evaluation confirms that it provides an arbitrarily large efficiency improvement in several application areas.
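The effect of once/1 — committing to the first answer so that the remaining search is never performed — can be mimicked outside Prolog with lazy evaluation. The following purely illustrative Python analogue (it models neither tabling nor Prolog semantics) enumerates answers with a generator and takes only the first one:

```python
def paths(graph, node, goal, seen=()):
    """Lazily enumerate all simple paths from node to goal."""
    if node == goal:
        yield (*seen, node)
        return
    for nxt in graph.get(node, ()):
        if nxt not in seen:
            yield from paths(graph, nxt, goal, (*seen, node))

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}

first = next(paths(graph, "a", "e"), None)   # analogue of once/1: commit to the
print(first)                                 # first answer; the rest is never computed
```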
Abstract:
Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of such functions often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
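A minimal NumPy sketch of the underlying mechanism: the expensive function is sampled once into a table and then evaluated by piecewise linear interpolation, with np.interp standing in for the fixed-function interpolation of a GPU texture unit. Table size and range are illustrative assumptions.

```python
import numpy as np

# Precompute the expensive function once on a coarse grid (the "texture").
n_subintervals = 256
xs_table = np.linspace(-4.0, 4.0, n_subintervals + 1)
table = np.exp(-0.5 * xs_table**2)

def gaussian_lut(x):
    """Piecewise linear approximation: one table lookup plus linear interpolation,
    with np.interp mimicking the fixed-function filtering of a texture unit."""
    return np.interp(x, xs_table, table)

x = np.linspace(-4.0, 4.0, 1_000_001)
err = np.abs(gaussian_lut(x) - np.exp(-0.5 * x**2))
print("max abs error:", err.max())           # shrinks ~4x when the table size doubles
```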
Abstract:
The search for healthier, minimally processed products has led industry and researchers to study new ways of preserving food. The objectives of this work were: 1) to evaluate the effect of modified-atmosphere packaging (MAP) on the preservation of lamb loin stored under refrigeration, and 2) to evaluate the effect of high-pressure processing on the preservation of marinated, reduced-sodium beef. In both studies, Longissimus lumborum muscles were subjected to microbial counts and to evaluations of colour, pH, lipid oxidation (TBARS), cooking loss and shear force. For the study of modified-atmosphere packaging, samples were packed in five MAP systems, 15% O2 + 85% CO2; 30% O2 + 70% CO2; 45% O2 + 55% CO2; 60% O2 + 40% CO2 and vacuum (control), and stored at 1 °C for 21 days. Colour, pH, TBARS, cooking loss and shear force analyses were performed every seven days, and microbiological analyses twice a week. Different oxygen concentrations inside the package produced a significant difference in the red colour intensity of the meat stored under MAP. Up to the seventh day of storage, treatments with higher O2 levels showed better colour; after that period, vacuum packaging preserved myoglobin better. Different gas concentrations did not cause differences (p > 0.05) in meat pH between treatments. No significant difference between treatments was found for MAP-packed samples in cooking loss or shear force. Modified-atmosphere packaging was able to delay the growth of the microbiota present in the meat, preserving the samples for up to 18 days under refrigeration, whereas vacuum-packed samples had a shelf life of 11 days. For the study of the effect of high pressure on low-sodium marinated meat, the meat was inoculated with 10^6 CFU/g of E. faecium and Listeria innocua and then marinated for 18 hours at 4 °C in different solutions: 1% NaCl + 1% citric acid, 1% NaCl + 2% citric acid, 2% NaCl + 2% citric acid and 2% NaCl + 2% citric acid. After marination, the samples were subjected to treatment at the following pressures: zero (control), 300 MPa, 450 MPa and 600 MPa. Physico-chemical and microbiological analyses were performed immediately after treatment. High-pressure treatment reduced the microbial population by up to six log cycles when 600 MPa was applied, for all solutions studied. Without high-pressure treatment, only a one-log-cycle reduction of the E. faecium population was obtained when the meat was marinated with 2% NaCl + 2% citric acid. Neither high pressure nor the different salt and acid concentrations produced significant differences in sample colour. The higher citric acid content in the marinade, however, caused a greater (p < 0.05) reduction in meat pH compared with the samples marinated at the lower acid concentration. The experiments showed that both vacuum packaging and the application of citric acid were effective in delaying lipid oxidation. Pressures of 600 MPa made the meat significantly tougher than the other pressures applied.
The results demonstrated that the shelf life of refrigerated meat can be extended by applying different technologies: modified-atmosphere packaging for fresh meat, and high-pressure processing for marinated meat with reduced salt content.
Abstract:
Gradient-domain path tracing has recently been introduced as an efficient realistic image synthesis algorithm. This paper introduces a bidirectional gradient-domain sampler that outperforms traditional bidirectional path tracing, often by a factor of two to five in terms of squared error at equal render time. It also improves over unidirectional gradient-domain path tracing in challenging visibility conditions, much as conventional bidirectional path tracing improves over its unidirectional counterpart. Our algorithm leverages a novel multiple importance sampling technique and an efficient implementation of a high-quality shift mapping suitable for bidirectional path tracing. We demonstrate the versatility of our approach in several challenging light transport scenarios.
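For background, the sketch below shows the standard balance-heuristic multiple importance sampling combination that such samplers build on, in a generic one-dimensional setting; it is not the paper's novel gradient-domain MIS technique or shift mapping, and the integrand and densities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Integrate f over [0, 1] by combining two sampling strategies with the
# balance heuristic: w_i(x) = p_i(x) / (p_1(x) + p_2(x)).
f = lambda x: x**2 * (1.0 - x)

def pdf_uniform(x): return np.ones_like(x)    # strategy 1: uniform density
def pdf_linear(x):  return 2.0 * x            # strategy 2: density p(x) = 2x

n = 50_000
x1 = rng.random(n)                            # samples from the uniform density
x2 = np.sqrt(1.0 - rng.random(n))             # inverse-CDF samples from p(x) = 2x

def contribution(x, pdf_own):
    w = pdf_own(x) / (pdf_uniform(x) + pdf_linear(x))   # balance heuristic weight
    return w * f(x) / pdf_own(x)

estimate = contribution(x1, pdf_uniform).mean() + contribution(x2, pdf_linear).mean()
print(estimate, "reference:", 1/3 - 1/4)      # exact integral is 1/12
```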
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages, the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The performance of these algorithms on randomly generated separable and non-separable problems is also reported.
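A dense, purely illustrative sketch of the preconditioned conjugate gradient solve mentioned above, applied to the normal equations AᵀAx = Aᵀb with a simple Jacobi (diagonal) preconditioner; production interior point codes work with sparse matrices and far more elaborate preconditioners, and the problem data here are random.

```python
import numpy as np

def pcg_normal_equations(A, b, tol=1e-10, max_iter=500):
    """Solve (A^T A) x = A^T b with conjugate gradients, preconditioned by the
    diagonal of A^T A (Jacobi).  Dense and purely illustrative."""
    AtA, Atb = A.T @ A, A.T @ b
    M_inv = 1.0 / np.diag(AtA)                 # Jacobi preconditioner
    x = np.zeros_like(Atb)
    r = Atb - AtA @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = AtA @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)
x = pcg_normal_equations(A, b)
print(np.allclose(A.T @ A @ x, A.T @ b, atol=1e-6))
```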
Abstract:
This paper investigates whether AspectJ can be used for efficient profiling of Java programs. Profiling differs from other applications of AOP (e.g. tracing), since it necessitates efficient and often complex interactions with the target program. As such, it was uncertain whether AspectJ could achieve this goal. Therefore, we investigate four common profiling problems (heap usage, object lifetime, wasted time and time-spent) and report on how well AspectJ handles them. For each, we provide an efficient implementation, discuss any trade-offs or limitations and present the results of an experimental evaluation of the costs of using it. Our conclusions are mixed. On the one hand, we find that AspectJ is sufficiently expressive to describe the four profiling problems and reasonably efficient in most cases. On the other hand, we find several limitations with the current AspectJ implementation that severely hamper its suitability for profiling. Copyright © 2006 John Wiley & Sons, Ltd.
Abstract:
Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has been actively considered as a potential candidate for long-haul transmission and 400 Gb/s to 1 Tb/s Ethernet transport because of its high spectral efficiency, efficient implementation, flexibility and robustness against linear impairments such as chromatic dispersion and polarization mode dispersion. However, due to the long symbol duration and narrow subcarrier spacing, CO-OFDM systems are sensitive to penalties induced by laser phase noise and fibre nonlinearity. As a result, the development of CO-OFDM transmission technology crucially relies on efficient techniques to compensate for laser phase noise and fibre nonlinearity impairments. In this thesis, high-performance and low-complexity digital signal processing techniques for laser phase noise and fibre nonlinearity compensation in CO-OFDM transmission are demonstrated. For laser phase noise compensation, three novel techniques, namely quasi-pilot-aided, decision-directed-free blind and multiplier-free blind, are introduced. For fibre nonlinearity compensation, two novel techniques, referred to as phase-conjugated pilots and phase-conjugated subcarrier coding, are proposed. All of these digital signal processing techniques offer high performance and flexibility while requiring relatively low complexity in comparison with other existing phase noise and nonlinearity compensation techniques. As a result of the development of these digital signal processing techniques, CO-OFDM technology is expected to play a significant role in future ultra-high-capacity optical networks. In addition, this thesis presents a preliminary study on nonlinear Fourier transform based transmission schemes, in which OFDM is a highly suitable modulation format. The results obtained pave the way towards a truly flexible nonlinear wave-division multiplexing system that allows the current nonlinear transmission limitations to be exceeded.
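For context, the sketch below shows a conventional pilot-aided common-phase-error estimate for a single OFDM symbol (estimate the common rotation from known pilot subcarriers, then de-rotate all subcarriers); it illustrates the baseline that pilot-aided schemes start from, not the proposed quasi-pilot-aided or blind techniques. Pilot positions, constellation and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sc = 128                                    # subcarriers per OFDM symbol
pilot_idx = np.arange(0, n_sc, 16)            # assumed pilot positions

tx = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), n_sc)  # QPSK symbols
cpe = 0.3                                     # common phase error (radians)
noise = 0.01 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
rx = tx * np.exp(1j * cpe) + noise

# Pilot-aided CPE estimation: correlate the received pilots with the known ones.
cpe_hat = np.angle(np.sum(rx[pilot_idx] * np.conj(tx[pilot_idx])))
rx_corrected = rx * np.exp(-1j * cpe_hat)     # de-rotate every subcarrier

print("estimated common phase error:", round(float(cpe_hat), 3))   # close to 0.3
```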
Abstract:
Various multilateral agreements and programmes deal with the problems of waste generation and its transboundary impacts, and with the solutions that necessitate international co-operation. The most comprehensive approach is found in the global programmes dedicated to the drivers of these problems; however, the documents promoting the transition to sustainable production and consumption, or to the green economy, do not contain binding commitments. For the more concrete issues of waste generation and transboundary impacts there are much more specific international agreements and programmes, especially for hazardous waste streams and for waste transferred to and/or generated in areas outside national jurisdiction. These define regulatory, policy and technological goals and tasks for the participating countries and the relevant sectors. Significant progress has been achieved on some specific problems and in certain regions, but in general there are serious concerns about the efficient implementation of the international agreements in their entirety. Moreover, even if they were fully implemented, the existing set of international instruments would be unable to counterbalance the global growth of the waste problem. Consequently, further efforts are needed by all countries, the relevant non-governmental organisations and the sectors concerned, primarily to prevent the further global escalation of the problem.
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects provided that a robust efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A New Record data structure is proposed. Trade-offs of employing Fact, Record and Bitmap Data structures for storing information in a semantic database are analyzed. A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented. An improvement to the atomic update algorithm suitable for certain scenarios of database recovery is proposed. Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally-adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
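The reduction mentioned above can be seen directly in a tiny brute-force sketch: for a deterministic program with a uniformly distributed secret, the min-entropy leakage (and the capacity) equals log2 of the number of feasible outputs. The toy program and input width are illustrative; the dissertation bounds this count symbolically via two-bit patterns and an implication graph rather than by enumeration.

```python
from math import log2

def program(secret: int) -> int:
    """Toy deterministic program that reveals only part of its secret input."""
    return (secret & 0b11) | (((secret >> 6) & 0b1) << 2)   # exposes 3 bits' worth

BITS = 8
outputs = {program(s) for s in range(2 ** BITS)}            # brute-force output count
print("feasible outputs:", len(outputs))
print("min-entropy leakage (bits):", log2(len(outputs)))    # here log2(8) = 3
```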
Abstract:
Real-time embedded system design requires precise control of the passage of time both in the computation performed by the modules and in the communication between them. Generally, these systems consist of several modules, each designed for a specific task, with restricted communication with other modules in order to obtain the required timing. This strategy, called federated architecture, is becoming unviable in the face of current demands on the cost, performance and quality of embedded systems. To address this problem, integrated architectures have been proposed, consisting of one or a few circuits performing multiple tasks in parallel more efficiently and at reduced cost. However, one has to ensure that the integrated architecture has temporal composability, i.e., the ability to design each task in temporal isolation from the others in order to maintain the individual characteristics of each task. Precision Timed Machines are an integrated-architecture approach that makes use of multithreaded processors to ensure temporal composability. This work therefore presents the implementation of a Precision Timed Machine named Hivek-RT. This processor, a VLIW supporting simultaneous multithreading, is capable of executing real-time tasks efficiently when compared to a traditional processor. In addition to the efficient implementation, the proposed architecture facilitates the implementation of real-time tasks from a programming point of view.