872 results for discrete wavelet transform
Abstract:
The outer-sphere redox behaviour of a series of [LnCoIII-NC-FeII(CN)₅]⁻ (Ln = n-membered pentadentate aza-macrocycle) complexes has been studied as a function of pH and oxidising agent. All the dinuclear complexes show a double protonation process at pH ≈ 2 that produces a shift in their UV/Vis spectra. Oxidation of the different non-protonated and diprotonated complexes has been carried out with peroxodisulfate, and of the non-protonated complexes also with trisoxalatocobaltate(III). The results are in agreement with predictions from Marcus theory. Oxidation with [Fe(phen)₃]³⁺ and [IrCl₆]²⁻ is too fast to be measured, although for the latter the transient observation of the process has been achieved at pH = 0. The kinetics of the outer-sphere redox process with the S₂O₈²⁻ and [Co(ox)₃]³⁻ oxidants has been studied as a function of pH, temperature, and pressure. As a whole, the values found for the activation volumes, entropies, and enthalpies fall in the following ranges, for the diprotonated and non-protonated dinuclear complexes, respectively: ΔV‡ from 11 to 13 and 15 to 20 cm³ mol⁻¹; ΔS‡ from 110 to 30 and −60 to −90 J K⁻¹ mol⁻¹; ΔH‡ from 115 to 80 and 50 to 65 kJ mol⁻¹. The thermal activation parameters are clearly dominated by the electrostriction occurring on outer-sphere precursor formation, while the trends found for the volumes of activation indicate an important degree of tuning by the charge distribution during the electron transfer process. The special arrangement of the amine ligands in the isomer trans-[L14CoIII-NC-FeII(CN)₅]⁻ accounts for important differences in solvent-assisted hydrogen bonding within the outer-sphere redox process, as has been established in redox reactions of similar compounds. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2003)
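For reference, the activation parameters quoted above follow from the standard temperature and pressure dependence of the rate constant (the Eyring analysis; these are general defining relations, not equations specific to this paper):

$$
\ln\frac{k}{T} \;=\; \ln\frac{k_{\mathrm{B}}}{h} + \frac{\Delta S^{\ddagger}}{R} - \frac{\Delta H^{\ddagger}}{RT},
\qquad
\Delta V^{\ddagger} \;=\; -RT\left(\frac{\partial \ln k}{\partial P}\right)_{T}.
$$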
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder rather than at the encoder, as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs still does not reach that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement, with objective quality gains of up to 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
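As a rough illustration of the kind of edge-adaptive filtering described above, the sketch below smooths block boundaries only where the discontinuity looks like a coding artefact rather than real image detail. It is a heavily simplified cousin of the H.264/AVC in-loop filter, not the paper's codec: the function name, the thresholds `alpha` and `beta`, and the filter taps are placeholder choices, not the standard's QP-dependent tables.

```python
import numpy as np

def deblock_vertical_edges(frame, block=4, alpha=12, beta=4):
    """Toy edge-adaptive smoothing across vertical block boundaries
    of a 2-D (grayscale) frame. An edge is filtered only when the
    step across it is small (< alpha) and the local gradients on
    both sides are small (< beta), i.e. it looks like blocking."""
    out = frame.astype(np.int32).copy()
    h, w = frame.shape
    for x in range(block, w, block):           # each vertical block edge
        p1, p0 = out[:, x - 2], out[:, x - 1]  # pixels left of the edge
        q0, q1 = out[:, x], out[:, x + 1]      # pixels right of the edge
        mask = (np.abs(p0 - q0) < alpha) & \
               (np.abs(p1 - p0) < beta) & (np.abs(q1 - q0) < beta)
        # 4-tap smoothing of the two pixels adjacent to the edge
        new_p0 = (p1 + 2 * p0 + q0 + 2) >> 2
        new_q0 = (p0 + 2 * q0 + q1 + 2) >> 2
        out[:, x - 1] = np.where(mask, new_p0, p0)
        out[:, x] = np.where(mask, new_q0, q0)
    return out.astype(frame.dtype)

rng = np.random.default_rng(0)
noisy = rng.integers(0, 255, (64, 64), dtype=np.int32)
print(deblock_vertical_edges(noisy).shape)
```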
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder, as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to "pathological, meaningless situations", since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a "good deal". This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a "shadow riskless asset" (SRA) without short sales, and then the good deal is obtained by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
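For reference, the two risk measures under analysis are usually defined as follows (standard textbook definitions, not quoted from the paper), for a confidence level $\alpha \in (0,1)$ and a portfolio payoff $X$:

$$
\mathrm{VaR}_{\alpha}(X) \;=\; -\inf\{\,x \in \mathbb{R} : \mathbb{P}(X \le x) > \alpha\,\},
\qquad
\mathrm{CVaR}_{\alpha}(X) \;=\; \frac{1}{\alpha}\int_{0}^{\alpha} \mathrm{VaR}_{u}(X)\,\mathrm{d}u .
$$

In these terms, a "good deal" is a sequence of portfolios $(X_n)$ with $\mathrm{CVaR}_{\alpha}(X_n) \to -\infty$ while $\mathbb{E}[X_n] \to +\infty$.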
Abstract:
This paper studies human DNA from the perspective of signal processing. Six wavelets are tested for analyzing the information content of the human DNA code. By adopting the real Shannon wavelet, several fundamental properties of the code are revealed. A quantitative comparison of the chromosomes is developed, with visualization through multidimensional scaling and dendrograms.
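A minimal sketch of this kind of analysis is given below. The per-base numerical mapping `BASE_LEVEL` is a hypothetical encoding chosen only for illustration (the paper does not fix one here), and since the real Shannon wavelet is a continuous wavelet that PyWavelets exposes only through its CWT, the discrete Meyer wavelet `'dmey'` is used as a discrete stand-in with similarly sharp frequency localisation.

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical mapping of DNA bases to signal levels (illustrative only).
BASE_LEVEL = {"A": -2.0, "C": -1.0, "G": 1.0, "T": 2.0}

def dna_wavelet_energy(sequence, wavelet="dmey", level=5):
    """Per-band energy of a DNA sequence under a discrete wavelet
    decomposition: one energy value per approximation/detail band."""
    signal = np.array([BASE_LEVEL[b] for b in sequence.upper()])
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs]

# Toy periodic sequence; a real study would use chromosome data.
energies = dna_wavelet_energy("ACGT" * 512)
print(energies)  # [approx band, coarsest detail, ..., finest detail]
```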
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrate the superior performance and hardware efficiency levels provided by the proposed structure, which presents a throughput per unit of area higher than that of other similar recently published designs targeting the H.264/AVC standard. Such results also show that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x over pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (7,680 x 4,320 @ 30 fps).
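To make the computed operation concrete, the 4 x 4 forward core transform of H.264/AVC that such an accelerator implements is the integer matrix product Y = C X Cᵀ shown below. This is only a plain software reference for the arithmetic, not the hardware design itself.

```python
import numpy as np

# H.264/AVC 4x4 integer core transform matrix (standard-defined).
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int32)

def forward_4x4(block):
    """2-D transform as two 1-D passes: rows, then columns."""
    return C @ block @ C.T

X = np.arange(16, dtype=np.int32).reshape(4, 4)  # toy residual block
print(forward_4x4(X))
```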
Integration of an automatic storage and retrieval system (ASRS) in a discrete-part automation system
Abstract:
This technical report describes the work carried out in a project within the ERASMUS programme. The objective of this project was the integration of an automatic warehouse into a discrete-part automation system. The discrete-part automation system located at the LASCRI (Critical Systems) laboratory at ISEP was extended with automatic storage and retrieval of the manufacturing parts, through the integration of an automatic warehouse and an automated guided vehicle (AGV).
Abstract:
The goal of this study is to analyze the dynamical properties of financial data series from nineteen worldwide stock market indices (SMI) during the period 1995–2009. SMI reveal a complex behavior that can be explored given the considerable volume of available data. In this paper, the window Fourier transform and methods of fractional calculus are applied. The results reveal classification patterns typical of fractional-order systems.
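The window (short-time) Fourier transform step might look as follows in software. This is a sketch on synthetic data, not the paper's pipeline: the series, sampling convention (one sample per trading day), and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

# Synthetic stand-in for a daily index series (always positive).
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, 2048)))
log_returns = np.diff(np.log(prices))

# Windowed Fourier analysis: 256-day windows, 1 sample = 1 day.
f, t, Z = stft(log_returns, fs=1.0, nperseg=256)
power = np.abs(Z) ** 2   # spectrogram: energy per (frequency, window)
print(power.shape)       # (frequency bins, time windows)
```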
Abstract:
Optimization problems arise in science, engineering, economics, etc., and the best solution must be found for each specific problem. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are unknown or very difficult to calculate, fewer methods are available. Functions of this kind are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), direct search methods can be used; these require no derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solving of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
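A minimal sketch of one classical member of this family, the quadratic penalty method, is shown below: the constrained problem min f(x) s.t. g(x) ≤ 0 becomes a sequence of unconstrained problems solved by a derivative-free direct search (Nelder-Mead). The functions `f` and `g` and the penalty schedule are illustrative choices, not the chapter's specific methods.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # toy objective
g = lambda x: x[0] + x[1] - 0.5                       # feasible iff g(x) <= 0

def penalty_solve(x0, mu=1.0, growth=10.0, iters=6):
    """Solve a sequence of unconstrained penalized problems with an
    increasing penalty parameter mu, using a direct search method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        P = lambda x, mu=mu: f(x) + mu * max(g(x), 0.0) ** 2
        x = minimize(P, x, method="Nelder-Mead").x
        mu *= growth   # tighten the penalty each outer iteration
    return x

print(penalty_solve([0.0, 0.0]))   # approaches (1.75, -1.25)
```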
Abstract:
A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is based on a scalable, modular, and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g., order-4, order-8, order-16, and order-32). Consequently, it is not only highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1, and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 x 4,320 at 30 fps) in real time.
Abstract:
The goal of this study is the analysis of the dynamical properties of financial data series from worldwide stock market indexes during the period 2000–2009. We analyze, under a regional criterion, ten main indexes at a daily time horizon. The methods and algorithms that have been explored for the description of dynamical phenomena provide an effective background for the analysis of economic data. We start by applying the classical concepts of signal analysis, the fractional Fourier transform, and methods of fractional calculus. In a second phase we adopt the multidimensional scaling approach. Stock market indexes are examples of complex interacting systems for which a huge amount of data exists. Therefore, these indexes, viewed from different perspectives, lead to new classification patterns.
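The multidimensional scaling step might be sketched as below: build a dissimilarity matrix between the index series and embed each index as a point in the plane. The correlation-based distance and the synthetic `returns` array are illustrative assumptions, not the paper's exact dissimilarity measure or data.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder for the ten daily return series studied in the paper.
rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, size=(10, 2500))   # 10 indexes, ~10 years

corr = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - corr))               # correlation distance

# 2-D embedding: nearby points = similarly behaving indexes.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords)   # one 2-D point per stock market index
```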
Abstract:
With very few exceptions, M > 4 tectonic earthquakes in the Azores show normal-fault solutions and occur away from the islands. Exceptionally, the 1998 shock was pure strike-slip and occurred within the northern edge of the Pico-Faial Ridge. Fault plane solutions show two possible planes of rupture, striking ENE-WSW (dextral) and NNW-SSE (sinistral). The former has not been recognised in the Azores, but is parallel to the transform direction related to the relative motion between the Eurasia and Nubia plates. Therefore, the main question we address in the present study is: do transform faults related to the Eurasia/Nubia plate boundary exist in the Azores? Knowing that the main source of strain is related to plate kinematics, we conclude that the sinistral strike-slip NNW-SSE fault plane solution is not consistent with either the fault dip (ca. 65°, which is typical of a normal fault) or the ca. ENE-WSW direction of maximum extension; both are consistent with a normal fault, as observed in most major earthquakes on faults striking around NNW-SSE in the Azores. In contrast, the dextral strike-slip ENE-WSW fault plane solution is consistent with the transform direction related to the anticlockwise rotation of Nubia relative to Eurasia. Altogether, tectonic data, measured ground motion, observed destruction, and modelling are consistent with a dextral strike-slip source fault striking ENE-WSW. Furthermore, the bulk clockwise rotation measured by GPS is typical of bookshelf block rotations observed at the termination of such master strike-slip faults. Therefore, we suggest that the 1998 earthquake can be related to the WSW termination of a transform fault (ENE-WSW fault plane solution) associated with the Nubia-Eurasia diffuse plate boundary. (© 2014 Elsevier B.V. All rights reserved.)
Abstract:
This study addresses the optimization of rational fraction approximations for the discrete-time calculation of fractional derivatives. The article starts by analyzing the standard techniques based on Taylor series and Padé expansions. In a second phase, the paper re-evaluates the problem from an optimization perspective, taking advantage of the flexibility of genetic algorithms.
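For context, the reference expansion that such rational (Padé-type) approximations try to match with a short discrete-time filter is the Grünwald-Letnikov series. The sketch below computes it directly and is only the baseline definition, not the paper's optimized approximations.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov approximation of the alpha-order derivative
    of the sampled function f (step h), via the binomial recurrence
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for t in range(n):
        out[t] = np.dot(w[: t + 1], f[t::-1]) / h ** alpha
    return out

t = np.linspace(0, 1, 101)
# Half-derivative of f(t) = t at t = 1 is 2/sqrt(pi) ~ 1.128.
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
```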
Abstract:
We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions we give an estimate of the time of convergence to the equilibrium distribution using the second largest eigenvalue of some matrices associated with the system.
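The role of the second largest eigenvalue is easiest to see in the time-homogeneous case, sketched below: the distance to the equilibrium distribution decays roughly like |λ₂|ᵗ, where λ₂ is the second largest eigenvalue modulus of the transition matrix. The non-autonomous setting of the paper replaces the single matrix P by a time-varying product of such matrices; the toy walk here is only illustrative.

```python
import numpy as np

# Row-stochastic transition matrix of a lazy random walk on a
# 3-node path graph; equilibrium distribution is (0.25, 0.5, 0.25).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

eigvals = np.linalg.eigvals(P)
lam2 = sorted(np.abs(eigvals))[-2]   # second largest modulus (here 0.5)

mu = np.array([1.0, 0.0, 0.0])       # initial distribution
for _ in range(30):
    mu = mu @ P                      # one step of the walk
print(lam2, mu)                      # mu -> (0.25, 0.5, 0.25) at rate lam2^t
```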
Abstract:
A genetic algorithm used to design radio-frequency binary-weighted differential switched capacitor arrays (RFDSCAs) is presented in this article. The algorithm provides a set of circuits all having the same maximum performance. This article also describes the design, implementation, and measurement results of a 0.25 μm BiCMOS 3-bit RFDSCA. The experimental results show that the circuit presents the expected performance up to 40 GHz. The similarity between the evolutionary solutions, circuit simulations, and measured results indicates that the genetic synthesis method is a very useful tool for designing optimum-performance RFDSCAs.
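A bare-bones genetic algorithm of the kind used for such synthesis is sketched below: each individual encodes a candidate parameter vector, and selection, crossover, and mutation drive the population towards high fitness. The `fitness` function is a placeholder for the RF performance figure extracted from circuit simulation in the paper; all names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):                       # illustrative stand-in objective
    return -np.sum((x - 0.7) ** 2)    # higher is better

def ga(dim=6, pop_size=40, gens=100, mut=0.1):
    """Truncation selection + one-point crossover + Gaussian mutation."""
    pop = rng.uniform(0, 1, (pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]       # keep the best half
        cut = rng.integers(1, dim, pop_size // 2)   # crossover points
        kids = parents.copy()
        for i, c in enumerate(cut):
            kids[i, c:] = parents[(i + 1) % len(parents), c:]
        kids += rng.normal(0, mut, kids.shape)      # Gaussian mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(p) for p in pop])]

print(ga())   # converges towards the all-0.7 parameter vector
```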