17 results for EFFICIENT RED ELECTROLUMINESCENCE
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Motion-compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion-compensated errors in some frame regions and rather small errors in others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB with respect to a WZ-coding-mode-only solution.
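The block-level decision described above can be sketched in a few lines; the rate estimators below (gradient activity for Intra, SI residual energy for WZ) are illustrative proxies of our own, not the paper's actual estimators:

```python
def estimate_intra_rate(block):
    # Crude proxy for the Intra rate: spatial activity,
    # measured as the sum of absolute horizontal gradients.
    return sum(abs(a - b) for row in block for a, b in zip(row, row[1:]))

def estimate_wz_rate(block, si_block):
    # Crude proxy for the WZ rate: residual energy between
    # the block and its side-information estimate.
    return sum(abs(a - b) for row_b, row_s in zip(block, si_block)
               for a, b in zip(row_b, row_s))

def select_mode(block, si_block):
    """Pick the mode with the lower estimated rate (Intra for 'critical' blocks)."""
    if estimate_intra_rate(block) < estimate_wz_rate(block, si_block):
        return "intra"
    return "wz"
```

A flat block whose SI is badly wrong is tagged 'critical' (Intra), while a block well predicted by the SI stays in WZ mode.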
Abstract:
The use of iris recognition for human authentication has been spreading in the past years. Daugman has proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and for pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when dealing with non-infrared images and non-uniform illumination.
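A minimal sketch of the kind of exhaustive template matching used for segmentation, assuming a plain sum-of-squared-differences criterion (the paper's template shape and matching details are not reproduced here):

```python
def match_template_ssd(image, template):
    """Exhaustive sum-of-squared-differences search over a 2-D grayscale
    image (list of rows); returns the top-left (row, col) of the best fit."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

In practice the template would model the dark pupil disc; the exhaustive scan is what faster variants (coarse-to-fine, integral images) accelerate.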
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bioinformatics, with many available Java-based APIs (e.g., NeoBio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of the cause being a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost, and when the application is later re-initiated it has to restart all its work from scratch, wasting resources and time, while also being prone to another failure, which may delay its completion with no deadline guarantees. Our proposed solution to address these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, being able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
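The checkpoint/resume idea can be illustrated at application level; this Python sketch (the file name and checkpoint interval are arbitrary choices) only conveys the save-and-restart behavior that the paper builds into the JVM itself, transparently to the programmer:

```python
import os
import pickle
import tempfile

# Illustrative checkpoint file location (an assumption of this sketch).
CKPT = os.path.join(tempfile.gettempdir(), "state.ckpt")

def long_computation(n):
    """Sum 0..n-1, checkpointing progress so a restart resumes, not redoes."""
    i, total = 0, 0
    if os.path.exists(CKPT):              # a previous run failed: resume from it
        with open(CKPT, "rb") as f:
            i, total = pickle.load(f)
    while i < n:
        total += i
        i += 1
        if i % 1000 == 0:                 # periodic checkpoint of the live state
            with open(CKPT, "wb") as f:
                pickle.dump((i, total), f)
    if os.path.exists(CKPT):              # finished: discard the checkpoint
        os.remove(CKPT)
    return total
```

A JVM-level mechanism differs in that it captures the whole execution state (stack, heap, threads), so no such explicit save/load code is needed in the application.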
Abstract:
Solubilities of red 153 (3-[[4-[[5,6(or 6,7)-dichloro-2-benzothiazolyl]azo]phenyl]ethylamino]propanenitrile), an azo compound, and disperse blue 1 (1,4,5,8-tetraaminoanthraquinone) in supercritical carbon dioxide (SC CO2) were measured at T = (333.2 to 393.2) K over the pressure range (12.0 to 40.0) MPa with a flow-type apparatus. The solubility of red 153 (0.985·10^-6 to 37.2·10^-6) over the whole region of measurements is found to be significantly higher than that of disperse blue 1 (1.12·10^-7 to 4.89·10^-7). The solubility behavior of disperse red 153 follows the general trend displayed by disperse dyes, with a crossover pressure at about 20 MPa. On the other hand, blue 1, a disperse anthraquinone dye, exhibits unexpected behavior not recorded previously: there is no crossover pressure in the temperature and pressure ranges studied, and the dye's solubility at T = 333.2 K practically does not increase with pressure. To the best of our knowledge, there are no previous measurements of blue 1 solubility in SC CO2 reported in the literature. The experimental data were correlated using the Soave-Redlich-Kwong equation of state (EoS) with the one-fluid van der Waals mixing rule, and an acceptable correlation of the solubility data for both dyes was obtained.
Abstract:
The new hexanuclear mixed-valence vanadium complex [V3O3(OEt)(ashz)2(μ-OEt)]2 (1) with an N,O-donor ligand is reported. It acts as a highly efficient catalyst for alkane oxidation by aqueous H2O2. Remarkably high turnover numbers, up to 25000, with product yields of up to 27% (based on alkane) make it one of the most active systems for such reactions.
Abstract:
The hydrotris(pyrazol-1-yl)methane iron(II) complex [FeCl2{η3-HC(pz)3}] (Fe, pz = pyrazol-1-yl) immobilized on commercial (MOR) or desilicated (MOR-D) zeolite catalyses the oxidation of cyclohexane with hydrogen peroxide to cyclohexanol and cyclohexanone under mild conditions. MOR-D/Fe (the desilicated-zeolite-supported [FeCl2{η3-HC(pz)3}] complex) provides outstanding catalytic activity (TON up to 2.90 × 10^3) with a concomitant overall yield of 38%, and can be easily recovered and reused. The MOR- or MOR-D-supported hydrotris(pyrazol-1-yl)methane iron(II) complex (MOR/Fe and MOR-D/Fe, respectively) was characterized by X-ray powder diffraction, ICP-AES and TEM studies, as well as by IR spectroscopy and N2 adsorption at -196 °C. The catalytic operational conditions (e.g., reaction time, type and amount of oxidant, presence of acid, and type of solvent) were optimized. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage well, where the decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that the relative rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions, for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
This paper describes an implementation of a long-distance echo canceller, operating in full duplex, hands-free and in real time on a single Digital Signal Processor (DSP). The proposed solution is based on short-length adaptive filters centered on the positions of the most significant echoes, which are tracked by time delay estimators, for which we use a new approach. To deal with double-talk situations a speech detector is employed. The floating-point DSP TMS320C6713 from Texas Instruments is used, with software written in C++ and compiler optimizations for fast execution. The resulting algorithm enables long-distance echo cancellation with low computational requirements, suited for embedded systems. It achieves greater echo return loss enhancement and faster convergence than the conventional approach. The experimental results approach the CCITT G.165 recommendation levels.
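A minimal sketch of one short adaptive filter, using the standard normalized LMS (NLMS) update; the paper's delay tracking and filter placement around the strongest echoes are not modeled here, and the tap count and step size below are arbitrary:

```python
def nlms_cancel(far_end, mic, taps=8, mu=0.5, eps=1e-8):
    """Adapt a short FIR filter to the echo path and return the error
    (echo-cancelled) signal, sample by sample."""
    w = [0.0] * taps
    out = []
    for n in range(len(mic)):
        # Regressor: the last `taps` far-end samples (zero before the start).
        x = [far_end[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))      # echo estimate
        e = mic[n] - y                                # residual after cancellation
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]  # NLMS update
        out.append(e)
    return out
```

With a pure delayed-and-scaled echo inside the filter span, the residual decays toward zero as the taps converge to the echo path.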
Abstract:
Due to the significant increase in vehicles and pedestrians in large cities, it became necessary to resort to existing mechanisms to coordinate traffic. In this context, traffic lights were deployed with the aim of ordering traffic on the road network. Traffic management has seen innovations in equipment, software, centralized management, road monitoring and traffic-light synchronization, making it possible to create programs adjusted to the different traffic demands observed over the twenty-four hours of the day at distinct points of the city. Conceptually, studies were carried out to identify the relationship between speed, flow and headway over a given time interval, as well as the relationship between speed and accident rates. Until 1995, Portugal was one of the countries with the highest number of road accidents. Following this evolution, speed-control radars were installed at the end of 2006 with the aim of enforcing the speed limits imposed by the highway code and reducing road accidents in the city of Lisbon. Some years after that investment, we find that there is a need to implement new technologies for detecting infractions, whether speeding or red-light violation (VSV); to optimize the information made available to drivers and pedestrians; to coordinate the interaction between priority vehicles and the other vehicles on the road; to streamline the internal management of traffic offences; and to speed up procedures and computerize data collection so as to make the processes faster.
Abstract:
This paper proposes an efficient scalable Residue Number System (RNS) architecture supporting moduli sets with an arbitrary number of channels, enabling a larger dynamic range and a higher level of parallelism. The proposed architecture performs both forward and reverse RNS conversion by reusing the arithmetic channel units. The arithmetic operations supported at the channel level include addition, subtraction, and multiplication with accumulation capability. For the reverse conversion two algorithms are considered, one based on the Chinese Remainder Theorem and the other on Mixed-Radix Conversion, leading to implementations optimized for delay and for required circuit area. With the proposed architecture a complete and compact RNS platform is achieved. Experimental results suggest gains of 17% in the delay of the arithmetic operations, with an area reduction of 23% with respect to the RNS state of the art. Compared with a binary system, the proposed architecture performs the same computation 20 times faster while using only 10% of the circuit area resources.
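The CRT-based reverse conversion can be sketched as follows (a software illustration only; the paper implements it in hardware by reusing the channel units):

```python
def rns_forward(x, moduli):
    """Forward conversion: represent x by its residues modulo each channel."""
    return [x % m for m in moduli]

def rns_reverse_crt(residues, moduli):
    """Reverse conversion via the Chinese Remainder Theorem.
    Requires pairwise coprime moduli; recovers x modulo their product M."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M
```

With the classic set {3, 5, 7} the dynamic range is M = 105; channel operations (add, subtract, multiply) act independently on each residue, which is the source of the parallelism.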
Abstract:
This paper presents a single-precision floating-point arithmetic unit with support for multiplication, addition, fused multiply-add, reciprocal, square-root and inverse square-root, with high performance and low resource usage. The design uses a piecewise 2nd-order polynomial approximation to implement reciprocal, square-root and inverse square-root. The unit can be configured with any number of operations and can compute any supported function with a throughput of one operation per cycle. The floating-point multiplier of the unit is also used to implement the polynomial approximation and the fused multiply-add operation. We have compared our implementation with other state-of-the-art proposals, including the Xilinx CoreGen operators, and conclude that the approach has a high relative performance/area efficiency. © 2014 Technical University of Munich (TUM).
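As an illustration of piecewise 2nd-order approximation, the sketch below approximates 1/sqrt(x) on [1, 4) with one quadratic per segment; the segment count and the interpolation scheme are assumptions of this sketch, not the unit's actual table or coefficients:

```python
import math

SEGMENTS = 64  # assumed number of equal-width segments covering [1, 4)

def inv_sqrt(x):
    """Approximate 1/sqrt(x) for x in [1, 4) with a per-segment quadratic
    fitted through three nodes of the segment (2nd-order Lagrange form)."""
    assert 1.0 <= x < 4.0
    seg = min(int((x - 1.0) * SEGMENTS / 3.0), SEGMENTS - 1)
    x0 = 1.0 + 3.0 * seg / SEGMENTS          # segment left edge
    x2 = 1.0 + 3.0 * (seg + 1) / SEGMENTS    # segment right edge
    x1 = 0.5 * (x0 + x2)                     # segment midpoint
    y0, y1, y2 = (1.0 / math.sqrt(v) for v in (x0, x1, x2))
    # Quadratic interpolation through the three segment nodes.
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
```

In hardware, the per-segment coefficients would be stored in a ROM and the polynomial evaluated with the unit's multiplier, which is why the multiplier is shared with the other operations.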
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, which achieve lower generalization error than state-of-the-art techniques while being dramatically simpler and faster.
Abstract:
The aim of the present work is to provide insight into the mechanism of laccase reactions using syringyl-type mediators. We studied the pH dependence and the kinetics of oxidation of syringyl-type phenolics using the low redox potential CotA and the high redox potential TvL laccases. Additionally, the efficiency of these compounds as redox mediators for the oxidation of non-phenolic lignin units was tested at different pH values and increasing mediator/non-phenolic ratios. Finally, the intermediates and products of the reactions were identified by LC-MS and 1H NMR. These approaches allow conclusions on (1) the mechanism involved in the oxidation of phenolics by bacterial laccases, (2) the importance of the chemical nature and properties of phenolic mediators, (3) the apparent independence of the yields of non-phenolic conversion from the enzyme's properties, and (4) the competitive routes involved in the catalytic cycle of the laccase-mediator system, with several new C-O coupling-type structures being proposed.
Abstract:
Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms, considering the main architectural aspects, and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm^2 integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%.
Considering a 45 nm technology, a 100 mm^2 chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
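The style of analysis, relating execution cycles to architectural parameters, can be illustrated with a simple hypothetical cycle-count model for an n x n matrix multiplication; the parameter names and the equation below are our own assumptions in the spirit of the approach, not the paper's derived equation:

```python
def matmul_cycles(n, cores, flops_per_core_per_cycle, words_per_cycle, block):
    """Hypothetical cycle count for an n x n dense matrix multiplication:
    whichever resource (compute or external-memory traffic) is slower
    dominates, assuming the two overlap perfectly."""
    # 2*n^3 floating-point operations, spread over all cores.
    compute = 2 * n**3 / (cores * flops_per_core_per_cycle)
    # Blocked data movement: ~2*n^3/block words streamed in, n^2-scale
    # words written back (a standard blocking estimate, assumed here).
    traffic = 2 * n**3 / block + 2 * n**2
    memory = traffic / words_per_cycle
    return max(compute, memory)
```

Such a closed-form model lets the designer find, for example, the local memory (block size) at which the architecture switches from memory-bound to compute-bound, without simulating a benchmark.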
Abstract:
Mushroom strains contain complex nutritional biomolecules with a wide spectrum of therapeutic and prophylactic properties. Among these compounds, β-d-glucans play an important role in immuno-modulating and anti-tumor activities. The present work describes a novel colorimetric assay for β-1,3-d-glucans with a triple-helix tertiary structure, using Congo red. The specific interaction between Congo red and β-1,3-d-glucan was detected as a bathochromic shift from 488 to 516 nm (> 20 nm) in a UV-Vis spectrophotometer. A micro-scale, high-throughput method based on a 96-well microtiter plate was devised, which presents several advantages over published methods: it requires only 1.51 μg of polysaccharide per sample and offers greater sensitivity, speed, high sample throughput and low cost. β-d-Glucans of several mushrooms (i.e., Coriolus versicolor, Ganoderma lucidum, Pleurotus ostreatus, Ganoderma carnosum, Hericium erinaceus, Lentinula edodes, Inonotus obliquus, Auricularia auricula, Polyporus umbellatus, Cordyceps sinensis, Agaricus blazei, Poria cocos) were isolated using a sequence of extractions with cold and boiling water and under acidic and alkaline conditions, and were quantified by this microtiter plate method. FTIR spectroscopy was used to study the structural features of β-1,3-d-glucans in these mushroom samples as well as the specific interaction of these polysaccharides with Congo red. The effect of NaOH on the triple-helix conformation of β-1,3-d-glucans was investigated in several mushroom species.