893 results for Discrete Cosine Transform (DCT)
Abstract:
This study investigated fingermark residues using Fourier transform infrared microscopy (μ-FTIR) in order to obtain fundamental information about the marks' initial composition and aging kinetics. This knowledge would be an asset for fundamental research on fingermarks, such as for dating purposes. Attenuated Total Reflection (ATR) and single-point reflection modes were tested on fresh fingermarks. ATR proved to be better suited and this mode was subsequently selected for further aging studies. Eccrine and sebaceous material was found in fresh and aged fingermarks, and the spectral regions 1000-1850 cm-1 and 2700-3600 cm-1 were identified as the most informative. The impact of substrates (aluminium and glass slides) and storage conditions (storage in the light and in the dark) on fingermark aging was also studied. Chemometric analyses showed that fingermarks could be grouped according to their age regardless of the substrate when they were stored in an open box kept in an air-conditioned laboratory at around 20°C next to a window. By contrast, when fingermarks were stored in the dark, only specimens deposited on the same substrate could be grouped by age. Thus, the substrate appeared to influence the aging of fingermarks in the dark. Furthermore, PLS regression analyses were conducted in order to study the possibility of modelling fingermark aging for potential fingermark dating applications. The resulting models showed an overall precision of ±3 days and clearly demonstrated their capability to differentiate older fingermarks (20 and 34 days old) from newer ones (1, 3, 7 and 9 days old) regardless of the substrate and lighting conditions. These results are promising from a fingermark dating perspective. Further research is required to fully validate such models and assess their robustness and limitations in uncontrolled casework conditions.
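A minimal sketch of the chemometric step described above, assuming spectra arranged as rows of a matrix X and known ages y: PLS regression as provided by scikit-learn, evaluated by cross-validation. The data here are synthetic stand-ins, not the study's measurements.

```python
# Hypothetical example: X would hold absorbances over the informative regions
# (1000-1850 cm-1 and 2700-3600 cm-1), y the known fingermark ages in days.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
ages = rng.choice([1, 3, 7, 9, 20, 34], size=60)        # days since deposition
X = rng.normal(size=(60, 400)) + 0.05 * ages[:, None]   # synthetic "spectra"

pls = PLSRegression(n_components=5)
y_pred = cross_val_predict(pls, X, ages, cv=5).ravel()  # held-out age estimates
print("RMSE (days):", np.sqrt(np.mean((y_pred - ages) ** 2)).round(2))
```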
Abstract:
We study discrete-time models in which death benefits can depend on a stock price index, the logarithm of which is modeled as a random walk. Examples of such benefit payments include put and call options, barrier options, and lookback options. Because the distribution of the curtate-future-lifetime can be approximated by a linear combination of geometric distributions, it suffices to consider curtate-future-lifetimes with a geometric distribution. In binomial and trinomial tree models, closed-form expressions for the expectations of the discounted benefit payment are obtained for a series of options. They are based on results concerning geometric stopping of a random walk, in particular also on a version of the Wiener-Hopf factorization.
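The central quantity of the paper, the expected discounted benefit E[v^K b(S_K)] for a geometrically distributed curtate-future-lifetime K and a binomial tree for the index, can be checked numerically by truncating the geometric sum; the paper derives closed-form expressions instead. All parameters below are illustrative.

```python
# Numerical check of E[v^K b(S_K)] with geometric K and a binomial tree.
from math import comb

S0, u, d, p = 100.0, 1.05, 0.97, 0.55   # binomial tree for the index
v, q = 1 / 1.03, 0.10                   # discount factor, P(death in a period)
strike = 100.0

def call(s):                            # benefit payment: a call option
    return max(s - strike, 0.0)

total = 0.0
for k in range(1, 400):                 # P(K = k) = (1-q)^(k-1) q; tail negligible
    pk = (1 - q) ** (k - 1) * q
    ev = sum(comb(k, j) * p**j * (1 - p)**(k - j) * call(S0 * u**j * d**(k - j))
             for j in range(k + 1))     # E[b(S_k)] over the binomial lattice
    total += pk * v**k * ev
print("expected discounted benefit:", round(total, 4))
```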
Abstract:
In this paper we focus our attention on a particle that follows a unidirectional quantum walk, an alternative version of the currently widespread discrete-time quantum walk on a line. Here the walker at each time step can either remain in place or move in a fixed direction, e.g., rightward or upward. While both formulations are essentially equivalent, the present approach leads us to consider discrete Fourier transforms, which eventually results in obtaining explicit expressions for the wave functions in terms of finite sums and allows the use of efficient algorithms based on the fast Fourier transform. The wave functions here obtained govern the probability of finding the particle at any given location but determine as well the exit-time probability of the walker from a fixed interval, which is also analyzed.
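A numerical sketch of the walk just described, assuming a Hadamard coin and a circular lattice as a stand-in for the line: the coin mixes the "stay" and "move" amplitudes, and the conditional shift is applied in Fourier space, where it is diagonal, via the FFT.

```python
import numpy as np

N, steps = 256, 60                      # circular lattice; no wraparound in 60 steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin (illustrative choice)

psi = np.zeros((2, N), dtype=complex)
psi[0, 0] = 1.0                         # walker starts at site 0, coin state |0>

k = 2j * np.pi * np.arange(N) / N
for _ in range(steps):
    psi = H @ psi                       # coin toss mixes stay/move amplitudes
    phi = np.fft.fft(psi, axis=1)       # the conditional shift is diagonal in k-space:
    phi[1] *= np.exp(-k)                # the "move" component advances one site
    psi = np.fft.ifft(phi, axis=1)

prob = (np.abs(psi) ** 2).sum(axis=0)   # probability of finding the walker at x
print("total probability:", prob.sum().round(6))
print("mean position:", (prob * np.arange(N)).sum().round(3))
```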
Abstract:
Ordered weighted averaging (OWA) operators and their extensions are powerful tools used in numerous decision-making problems. This class of operators belongs to a more general family of aggregation operators, understood as discrete Choquet integrals. Aggregation operators are usually characterized by indicators. In this article, four indicators usually associated with the OWA operator are extended to discrete Choquet integrals: namely, the degree of balance, the divergence, the variance indicator and Rényi entropies. All of these indicators are considered from both a local and a global perspective. The linearity of indicators for linear combinations of capacities is investigated and, to illustrate the application of the results, indicators of the probabilistic ordered weighted averaging (POWA) operator are derived. Finally, an example is provided to show the application to a specific context.
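For concreteness, a small sketch of the discrete Choquet integral that this family of operators is built on, with the OWA operator recovered as the special case of a symmetric capacity, i.e. one that depends only on the coalition's size. The weights are illustrative.

```python
def choquet(values, capacity):
    """Discrete Choquet integral of nonnegative values w.r.t. a capacity.

    `capacity` maps a frozenset of indices to [0, 1], with the empty set
    mapped to 0 and the full index set to 1.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])   # ascending
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])      # indices holding the largest values
        total += (values[i] - prev) * capacity(coalition)
        prev = values[i]
    return total

# OWA is the special case of a symmetric capacity: with weights w, the
# largest value effectively receives weight w[0], the next w[1], and so on.
w = [0.5, 0.3, 0.2]
owa_capacity = lambda S: sum(w[:len(S)])

print(choquet([0.4, 0.9, 0.1], owa_capacity))   # 0.5*0.9 + 0.3*0.4 + 0.2*0.1 = 0.59
```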
Abstract:
Many European states apply score systems to evaluate the disability severity of non-fatal motor victims under the law of third-party liability. The score is a non-negative integer with an upper bound at 100 that increases with severity. It may be automatically converted into financial terms and thus also reflects the compensation cost for disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: an interpretation of the data generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
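A hedged sketch of the zero-altered (hurdle) idea the paper compares against standard count models: one equation decides whether the score is zero, and a second models the positive scores. Data are synthetic, and for brevity the positive part uses an ordinary Poisson fit rather than a zero-truncated one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))            # victim/accident covariates (illustrative)
p_zero = 1 / (1 + np.exp(-(0.5 - X[:, 0])))
lam = np.exp(1.2 + 0.4 * X[:, 1])
y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam) + 1)
y = np.minimum(y, 100)                 # severity score is capped at 100

hurdle = LogisticRegression().fit(X, (y > 0).astype(int))   # zero vs. positive
count = PoissonRegressor().fit(X[y > 0], y[y > 0])          # positive scores only

# Expected score: P(y > 0 | x) * E[y | y > 0, x]
expected = hurdle.predict_proba(X)[:, 1] * count.predict(X)
print("mean observed vs predicted:", y.mean().round(2), expected.mean().round(2))
```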
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All the other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map. The weights are not constant, but gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. Also a new morphological image decompression scheme is presented, the 8 kernels' method. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
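A compact sketch of the DTOCS idea as summarized above, assuming the simplest setting: the local cost between 8-neighbors is the gray-value difference plus one, propagated by forward and backward raster sweeps that may be iterated a few times. This illustrates the principle, not the thesis's implementation.

```python
import numpy as np

def dtocs(gray, region, n_iter=3):
    """Two-pass gray-level distance transform in the spirit of the DTOCS.

    The local cost between 8-neighbors is |gray difference| + 1, giving a
    chessboard distance weighted by gray-value differences. `region` is a
    boolean mask marking the zero-distance source pixels; complicated images
    may need the forward/backward sweep pair repeated a few times.
    """
    h, w = gray.shape
    g = gray.astype(float)
    dist = np.where(region, 0.0, np.inf)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]        # already-visited neighbors
    bwd = [(-dy, -dx) for dy, dx in fwd]
    for _ in range(n_iter):
        for neigh, ys, xs in ((fwd, range(h), range(w)),
                              (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in ys:
                for x in xs:
                    for dy, dx in neigh:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cand = dist[ny, nx] + abs(g[y, x] - g[ny, nx]) + 1
                            dist[y, x] = min(dist[y, x], cand)
    return dist

img = np.array([[0, 1, 2], [1, 5, 2], [2, 2, 2]])
seed = np.zeros(img.shape, bool); seed[0, 0] = True    # distances measured from (0,0)
print(dtocs(img, seed))
```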
Abstract:
This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover and a process-based model of sedimentation in a single framework. The integration of these two methods allows us to include the simulation of both sedimentation and deformation processes in a single and more effective model. The paper briefly describes the antecedents of the program, Simsafadim-Clastic, and a discrete element model, in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, an analysis of the evolution of syntectonic geometries in an extensional environment, and also associated with thrust fault propagation, is undertaken. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of the deformation within the sedimentary cover, which is seen to be affected by the presence of the new syntectonic sediments.
Abstract:
Describes a method to code a decimated model of an isosurface on an octree representation while maintaining volume data if it is needed. The proposed technique is based on grouping the marching cubes (MC) patterns into five configurations according to the topology and the number of planes of the surface that are contained in a cell. Moreover, the discrete number of planes on which the surface lies is fixed. Starting from a complete volume octree, with the isosurface codified at terminal nodes according to the new configuration, a bottom-up strategy is taken for merging cells. Such a strategy allows one to implicitly represent co-planar faces in the upper octree levels without introducing any error. At the end of this merging process, when it is required, a reconstruction strategy is applied to generate the surface contained in the intersected octree leaves. Some examples with medical data demonstrate that a reduction of up to 50% in the number of polygons can be achieved.
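A stripped-down illustration of the bottom-up merging strategy, using a hypothetical tree layout: children are collapsed into their parent whenever all eight encode the same co-planar configuration, so the plane is represented once at a higher level without error.

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: object = None     # plane configuration at a leaf; None if internal
    children: list = None    # eight subtrees for an internal node

def merge(node):
    """Collapse a subtree whose leaves all carry one co-planar label."""
    if node.children is None:
        return node.label
    labels = {merge(child) for child in node.children}
    if len(labels) == 1 and None not in labels:
        node.label, node.children = labels.pop(), None   # lift the plane upward
        return node.label
    return None              # mixed content: keep this level subdivided

# One octant is itself subdivided, but every leaf encodes the same plane, so
# the whole cube merges into a single node.
root = Node(children=[Node(children=[Node(label="plane-A") for _ in range(8)])]
            + [Node(label="plane-A") for _ in range(7)])
merge(root)
print(root.children is None, root.label)   # -> True plane-A
```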
Abstract:
We present an environment for analyzing signals of all kinds with LDB (Local Discriminant Bases) and MLDB (Modified Local Discriminant Bases). This environment uses functions developed within the framework of a thesis still in progress. Understanding some of these functions requires an advanced level of knowledge of signal processing. They were extracted from the work of Naoki Saito [3], which was taken as the starting point for the algorithm of the unfinished doctoral thesis of Jose Antonio Soria. The developed interface accepts the incorporation of new packages and functions. We have left a menu prepared for integrating the Sinus IV packet transform and the Cosine IV packet transform, although others can also be added. The application consists of two interfaces, a Wizard and a main interface. We have also created a window for importing and exporting the desired variables to different environments. To build this application, all the window elements were programmed by hand instead of using MATLAB's GUIDE (Graphical User Interface Development Environment), so that the application remains compatible across different versions of the program. In total we wrote 73 functions for the main interface (10 of which belong to the import/export window) and 23 for the Wizard. In this work we explain only 6 functions, plus the 3 that create these interfaces, so as not to make it excessively long. The functions explained are the most important ones, either because they are used often, because they are the most complicated according to McCabe complexity, or because they are needed for signal processing. Every piece of data entered by the user is passed through functions that detect input errors, such as removing zeros or non-numeric characters, and that check that values are integers and lie within the applicable maximum and minimum limits.
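The input-validation step mentioned at the end can be pictured with a small helper of the kind described (the original tool is written in MATLAB; this Python analogue and its name are hypothetical):

```python
def parse_bounded_int(text, lo, hi):
    """Strip non-numeric characters, then check integer-ness and bounds."""
    digits = "".join(ch for ch in text if ch.isdigit())
    if not digits:
        raise ValueError(f"no number found in {text!r}")
    value = int(digits)          # leading zeros are dropped by int()
    if not lo <= value <= hi:
        raise ValueError(f"{value} outside [{lo}, {hi}]")
    return value

print(parse_bounded_int(" level = 07 ", 0, 12))   # -> 7
```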
Abstract:
Signal processing methods based on the combined use of the continuous wavelet transform (CWT) and the zero-crossing technique were applied to the simultaneous spectrophotometric determination of perindopril (PER) and indapamide (IND) in tablets. These signal processing methods do not require any prior separation step. Initially, various wavelet families were tested to identify the optimal signal processing giving the best recovery results. From this procedure, the Haar and Biorthogonal1.5 continuous wavelet transforms (HAAR-CWT and BIOR1.5-CWT, respectively) were found suitable for the analysis of the related compounds. After transformation of the absorbance vectors using HAAR-CWT and BIOR1.5-CWT, the CWT coefficients were plotted against wavelength to obtain the HAAR-CWT and BIOR1.5-CWT spectra. Calibration graphs for PER and IND were obtained by measuring the CWT amplitudes at 231.1 and 291.0 nm in the HAAR-CWT spectra and at 228.5 and 246.8 nm in the BIOR1.5-CWT spectra, respectively. In order to compare the performance of the HAAR-CWT and BIOR1.5-CWT approaches, a derivative spectrophotometric (DS) method and HPLC were applied to the PER-IND samples as comparison methods. In the DS method, first-derivative absorbance values at 221.6 nm for PER and 282.7 nm for IND were used to obtain the calibration graphs. The validation of the CWT and DS signal processing methods was carried out using a recovery study and the standard addition technique. In the following step, these methods were successfully applied to commercial tablets containing PER and IND, and good accuracy and precision were reported for the experimental results obtained by all proposed signal processing methods.
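A schematic of the signal-processing chain, assuming synthetic spectra and pywt's Mexican-hat wavelet as a stand-in for the Haar and Biorthogonal1.5 families used in the paper (pywt's cwt does not provide those two): the CWT coefficient amplitude at a fixed wavelength serves as the calibration signal.

```python
import numpy as np
import pywt

wavelengths = np.linspace(200, 320, 600)                 # nm
def spectrum(c_per, c_ind):                              # two overlapping bands
    return (c_per * np.exp(-((wavelengths - 231) / 12) ** 2)
            + c_ind * np.exp(-((wavelengths - 285) / 15) ** 2))

def cwt_amplitude(absorbance, target_nm, scale=20.0):
    coefs, _ = pywt.cwt(absorbance, [scale], "mexh")     # one scale -> one row
    return coefs[0, np.argmin(np.abs(wavelengths - target_nm))]

# With the interferent held fixed, the amplitude at the measuring wavelength
# grows linearly with the analyte concentration, giving a calibration graph.
for c in (0.2, 0.4, 0.8):
    print(c, round(cwt_amplitude(spectrum(c, 0.5), 231.1), 4))
```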
Abstract:
We propose an analytical method based on Fourier transform infrared-attenuated total reflectance (FTIR-ATR) spectroscopy to detect the adulteration of petrodiesel and petrodiesel/palm-biodiesel blends with African crude palm oil. The infrared spectral fingerprints from the sample analysis were used to perform principal component analysis (PCA) and to construct a prediction model using partial least squares (PLS) regression. The PCA results separated the samples into three groups, allowing identification of those subjected to adulteration with palm oil. The obtained model shows a good predictive capacity for determining the concentration of palm oil in petrodiesel/biodiesel blends. Advantages of the proposed method include cost-effectiveness and speed; it is also environmentally friendly.
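A minimal sketch of the screening step, assuming synthetic spectra: PCA scores separate unadulterated samples from those spiked with increasing palm-oil fractions. In the paper the spectra likewise feed a PLS model that quantifies the content.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
channels = np.arange(300)                          # wavenumber channels
base = rng.normal(size=300)                        # shared spectral background
oil_band = np.exp(-((channels - 150) / 10) ** 2)   # band that tracks palm oil

levels = np.repeat([0.0, 0.05, 0.15], 10)          # palm-oil fraction per sample
X = base + levels[:, None] * oil_band + rng.normal(scale=0.01, size=(30, 300))

scores = PCA(n_components=2).fit_transform(X)
for lvl in np.unique(levels):
    print(lvl, scores[levels == lvl, 0].mean().round(3))   # groups split along PC1
```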
Abstract:
Agroindustrial waste generally presents significant levels of nutrients and organic matter and has therefore frequently been put to agricultural use. In this context, the objective of this study was to determine the chemical composition (nitrogen, phosphorus, potassium, calcium, magnesium and carbon content), as well as the qualitative characteristics through Fourier transform infrared spectroscopy, of four samples of poultry litter and one sample of cattle manure from the southwestern region of Paraná, Brazil. Results revealed that, in general, the poultry litter presented a higher amount of nutrients and carbon than the cattle manure. The infrared spectra allowed identification of the functional groups present and of the differences in the degree of sample humification. The statistical treatment confirmed the quantitative and qualitative differences observed.
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or a set of optimal solutions) of the considered problems. Within the framework of the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable for calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create good and relevant grounds for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
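The interactive idea of the last chapter can be pictured with a short sketch: a parameterized achievement scalarizing function (the augmented Chebyshev form, a common choice and an assumption here) scores candidate solutions against a reference point, and changing its weights redirects the selection. Candidates and parameters are illustrative, not taken from the thesis.

```python
def achievement(objectives, reference, weights, rho=1e-4):
    """Augmented Chebyshev achievement scalarizing function (to be minimized)."""
    diffs = [w * (f - r) for f, r, w in zip(objectives, reference, weights)]
    return max(diffs) + rho * sum(diffs)

candidates = {                              # objective vectors to be minimized
    "A": (2.0, 9.0, 4.0),
    "B": (5.0, 5.0, 5.0),
    "C": (8.0, 2.0, 6.0),
}
reference = (1.0, 1.0, 1.0)                 # decision maker's aspiration levels

for weights in [(1, 1, 1), (3, 1, 1)]:      # re-weighting steers the choice
    best = min(candidates,
               key=lambda k: achievement(candidates[k], reference, weights))
    print(weights, "->", best)              # (1,1,1) -> B, (3,1,1) -> A
```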