857 results for Discrete wavelet packet transform
Abstract:
Image processing has been widely used for two tasks. One is image enhancement for later visualization; the other is the extraction of information for image analysis. This work presents a study of two multi-scale theories, scale space and the wavelet transform, which are used to extract information from images. One aspect of scale space that has been widely discussed by several authors is its basis (originally the Gaussian). Researchers have sought to determine whether the Gaussian basis is the best one, or for which cases it is the best. In addition, authors have tried to develop new bases with characteristics different from those of the Gaussian. With these new bases in hand, one can compare them with the Gaussian basis and verify where each basis performs best. In this work, (i) scale-space theory, (ii) wavelet transform theory and (iii) the relations between them were used in order to derive a method for creating new scale-space bases from wavelet functions. Scale space is a particular case of the wavelet transform when the derivatives of the Gaussian are used to generate the scale-space operators, and it is on this property that the proposed method is based. Moreover, the proposed method uses the frequency response of the analyzed functions. Scale-space basis functions have a low-pass frequency response, whereas wavelet functions have a band-pass response. To obtain basis functions from wavelets, these functions are numerically integrated until their frequency response becomes low-pass. Some of the wavelet functions studied have no definition for the two-dimensional case, so three ways of generating two-dimensional functions from one-dimensional ones were studied. Using this method, it was possible to generate ten new bases for scale space. Some of these new bases behaved similarly to the Gaussian basis, while others did not. For the functions that did not show the expected behavior when used with the original definitions of the scale-space operators, new definitions of those operators (edge and blob detectors) were proposed. Two applications of scale space were also developed: an algorithm for segmenting cardiac cavities and an algorithm for segmenting and counting blood cells.
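A minimal sketch of the integration idea described above: a band-pass wavelet is numerically integrated until its frequency response peaks at DC, yielding a candidate low-pass scale-space kernel. The choice of the Mexican-hat wavelet, the stopping rule and the normalisation are illustrative assumptions, not the thesis' exact procedure.

import numpy as np

def mexican_hat(t, sigma=1.0):
    """Mexican-hat (Ricker) wavelet, proportional to -d^2/dt^2 of a Gaussian."""
    x = t / sigma
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def integrate_until_lowpass(psi, dt, max_steps=8):
    """Cumulatively integrate psi until its |FFT| peaks at DC (low-pass response)."""
    g = np.asarray(psi, dtype=float)
    for _ in range(max_steps):
        if np.argmax(np.abs(np.fft.rfft(g))) == 0:   # spectrum already peaks at DC
            break
        g = np.cumsum(g) * dt                        # one step of numerical integration
    g = g * np.sign(g[np.argmax(np.abs(g))])         # fix the overall sign
    return g / (np.sum(g) * dt)                      # normalise to unit area

t = np.linspace(-8.0, 8.0, 1024)
kernel = integrate_until_lowpass(mexican_hat(t), dt=t[1] - t[0])
# For the Mexican hat, two integrations recover (up to sign and scale) a Gaussian,
# consistent with the scale-space/wavelet relationship described above.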
Abstract:
We analyze simultaneous discrete public good games with incomplete information and continuous contributions. To use the terminology of Admati and Perry (1991), we consider contribution and subscription games. In the former, contributions are not refunded if the project is not completed, while in the latter they are. For the special case where provision by a single player is possible, we show the existence of an equilibrium in both contribution and subscription games where a player decides to provide the good by himself. For the case where it is not feasible for a single player to provide the good by himself, we show that any equilibrium of both games is inefficient. We also provide a sufficient condition for "contributing zero" to be the unique equilibrium of the contribution game with n players and characterize e
Abstract:
We analyze simultaneous discrete public good games with incomplete information and continuous contributions. To use the terminology of Admati and Perry (1991), we consider contribution and subscription games. In the former, contributions are not refunded if the project is not completed, while in the latter they are. For the special case where provision by a single player is possible, we show the existence of an equilibrium in both contribution and subscription games where a player decides to provide the good by himself. For the case where it is not feasible for a single player to provide the good by himself, we show that there exist equilibria of the subscription game where each participant pays the same amount. Moreover, using the technical apparatus from Myerson (1981), we show that neither the subscription nor the contribution games admit ex-post efficient equilibria. In addition, we provide a sufficient condition for "contributing zero" to be the unique equilibrium of the contribution game with n players.
Abstract:
Economists and policymakers have long been concerned with increasing the supply of health professionals in rural and remote areas. This work seeks to understand which factors influence physicians' choice of practice location right after completing residency. Unlike previous papers, we analyse the Brazilian misallocation and assess the particularities of developing countries. We use a discrete choice model approach with a multinomial logit specification. Two rich databases are employed, containing the location and wage of formally employed physicians as well as details of their postgraduate training. Our main findings are that amenities matter, physicians have a strong tendency to remain in the region where they completed residency, and salaries are significant in the choice of urban, but not rural, communities. We conjecture this is due to attachments built during training and infrastructure concerns.
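A minimal sketch of the kind of multinomial (conditional) logit estimation described above, on synthetic data. The variable names (wage, amenity), the data-generating process and the optimizer settings are illustrative assumptions, not the databases or the specification used in the study.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_doctors, n_locations = 2000, 5

# Alternative-specific attributes: log wage and an amenity index per location.
wage = rng.normal(1.0, 0.3, size=(n_doctors, n_locations))
amenity = rng.normal(0.0, 1.0, size=(n_doctors, n_locations))
true_beta = np.array([1.2, 0.8])                     # taste for wages and amenities

utility = true_beta[0] * wage + true_beta[1] * amenity + rng.gumbel(size=(n_doctors, n_locations))
choice = utility.argmax(axis=1)                      # each physician picks the best location

def neg_log_likelihood(beta):
    v = beta[0] * wage + beta[1] * amenity           # deterministic utilities
    v -= v.max(axis=1, keepdims=True)                # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_doctors), choice].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print("estimated taste parameters:", fit.x)          # should be close to true_beta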
Abstract:
When estimating policy parameters, also known as treatment effects, the assignment-to-treatment mechanism almost always causes endogeneity and thus biases many of these policy parameter estimates. Additionally, heterogeneity in program impacts is more likely to be the norm than the exception for most social programs. In situations where these issues are present, estimation of the Marginal Treatment Effect (MTE) parameter makes use of an instrument to avoid assignment bias and, simultaneously, to account for heterogeneous effects across individuals. Although this parameter is point identified in the literature, the assumptions required for identification may be strong. Given that, we use weaker assumptions in order to partially identify the MTE, i.e., to establish a methodology for estimating MTE bounds, implementing it computationally and showing results from Monte Carlo simulations. The partial identification we perform requires the MTE to be a monotone function of the propensity score, which is a reasonable assumption in several economic examples, and the simulation results show that it is possible to obtain informative bounds even in restricted cases where point identification is lost. Additionally, in situations where the estimated bounds are not informative and traditional point identification is lost, we suggest a more generic method to point estimate the MTE using the Moore-Penrose pseudo-inverse matrix, achieving better results than traditional methods.
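A rough sketch of the pseudo-inverse idea: the MTE is discretised on a grid of the unobserved resistance u, each pair of instrument-induced propensity scores identifies an average of MTE(u) over an interval, and the resulting (possibly underdetermined) linear system is solved with the Moore-Penrose pseudo-inverse. The grid, the propensity-score values and the "observed" averages below are synthetic placeholders, not the thesis' estimator.

import numpy as np

u_grid = np.linspace(0.0, 1.0, 101)                  # grid for MTE(u)
true_mte = 2.0 - 3.0 * u_grid                        # a monotone MTE, for illustration only

p_values = np.array([0.2, 0.4, 0.6, 0.8])            # propensity scores induced by the instrument
rows, targets = [], []
for p_lo, p_hi in zip(p_values[:-1], p_values[1:]):
    mask = (u_grid >= p_lo) & (u_grid < p_hi)
    row = mask / mask.sum()                          # averaging weights over (p_lo, p_hi)
    rows.append(row)
    targets.append(row @ true_mte)                   # the LATE-type moment this pair identifies

A, b = np.vstack(rows), np.array(targets)
mte_hat = np.linalg.pinv(A) @ b                      # minimum-norm solution of A @ mte = b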
Abstract:
In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution towards the full IP architecture paradigm. Therefore, mobile wireless communications will be of paramount importance to the development of the information society of the near future. In particular, a research topic of particular relevance in telecommunications nowadays is the design and implementation of 4th generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications to be available in the near future. The approach followed in the design and implementation of current-generation mobile wireless networks (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not utilize information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance in achieving the levels of QoS expected by those multimedia applications. In order to match application requirements and the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands the implementation of efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided from lower as well as upper layers in the protocol stack, fully compliant with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity, taking into account the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. Also, the connection-oriented approach of its medium access layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources, in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer and QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
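An illustrative sketch of a cross-layer scheduling rule: for each frequency sub-channel, the scheduler weighs the instantaneous rate reported by the physical layer against a QoS term (head-of-line delay) from the upper layers. The metric below is a generic proportional-fair/delay rule, used only to make the cross-layer idea concrete; it is not the scheduler proposed in the thesis.

import numpy as np

rng = np.random.default_rng(1)
n_users, n_subchannels = 8, 16

inst_rate = rng.rayleigh(1.0, size=(n_users, n_subchannels))   # PHY-layer channel quality
avg_rate = np.full(n_users, 1e-3)                              # throughput history per user
hol_delay = rng.uniform(0.0, 50.0, size=n_users)               # head-of-line delay (ms) from MAC queues
delay_budget = 40.0                                            # QoS requirement from the application

assignment = np.empty(n_subchannels, dtype=int)
for k in range(n_subchannels):
    urgency = np.minimum(hol_delay / delay_budget, 2.0)        # upper-layer pressure term
    metric = (inst_rate[:, k] / avg_rate) * (1.0 + urgency)    # cross-layer scheduling metric
    assignment[k] = np.argmax(metric)
    avg_rate[assignment[k]] += inst_rate[assignment[k], k]     # crude throughput update

print("sub-channel -> user:", assignment)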
Abstract:
In recent decades, neural networks have been established as a major tool for the identification of nonlinear systems. Among the various types of networks used in identification, one that stands out is the wavelet neural network (WNN). This network combines the characteristics of wavelet multiresolution theory with the learning ability and generalization of neural networks, usually providing more accurate models than those obtained with traditional networks. An extension of WNN networks is to combine the neuro-fuzzy ANFIS (Adaptive Network Based Fuzzy Inference System) structure with wavelets, generating the Fuzzy Wavelet Neural Network (FWNN) structure. This network is very similar to ANFIS networks, with the difference that the traditional polynomials present in the consequent of that network are replaced by WNN networks. This work proposes the identification of nonlinear dynamical systems using a modified FWNN network. In the proposed structure, only wavelet functions are used in the consequent. Thus, it is possible to simplify the structure, reducing the number of adjustable parameters of the network. To evaluate the FWNN network with this modification, an analysis of network performance is carried out, examining advantages, disadvantages and cost-effectiveness when compared with other FWNN structures in the literature. The evaluations are carried out via the identification of two simulated systems traditionally found in the literature and a real nonlinear system consisting of a nonlinear multi-section tank. Finally, the network is used to infer temperature and humidity values inside a neonatal incubator. These analyses are based on various criteria, such as mean squared error, number of training epochs, number of adjustable parameters, the variation of the mean squared error, among others. The results show the generalization ability of the modified structure, despite the simplification performed.
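A minimal forward pass of a simplified fuzzy wavelet network in the spirit of the modified FWNN described above: ANFIS-style Gaussian memberships give normalised rule firing strengths, and each rule's consequent is a sum of wavelet (Mexican hat) terms instead of a polynomial. The shapes, parameter initialisation and wavelet choice are illustrative assumptions.

import numpy as np

def mexican_hat(z):
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

rng = np.random.default_rng(0)
n_inputs, n_rules = 2, 4

centers = rng.normal(size=(n_rules, n_inputs))       # Gaussian membership centres
widths = np.ones((n_rules, n_inputs))                # Gaussian membership widths
translations = rng.normal(size=(n_rules, n_inputs))  # wavelet translations
dilations = np.ones((n_rules, n_inputs))             # wavelet dilations
weights = rng.normal(size=(n_rules, n_inputs))       # linear weights on the wavelet terms

def fwnn_forward(x):
    """x: (n_inputs,) -> scalar network output."""
    mu = np.exp(-0.5 * ((x - centers) / widths) ** 2)        # memberships per rule and input
    firing = mu.prod(axis=1)                                 # rule firing strengths
    firing = firing / firing.sum()                           # normalisation layer
    consequents = (weights * mexican_hat((x - translations) / dilations)).sum(axis=1)
    return float(firing @ consequents)                       # weighted sum of wavelet consequents

print(fwnn_forward(np.array([0.3, -0.7])))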
Abstract:
This work proposes the development of a Computer System for Analysis of Mammograms (SCAM), which aids the medical specialist in the identification and analysis of lesions present in digital mammograms. The computer system for digital mammogram processing makes use of a group of Digital Image Processing (DIP) techniques, with the purpose of helping the medical professional extract the information contained in the mammogram. The system has a user-friendly interface and offers, starting from the supplied mammogram, a set of processing operations such as image enhancement through filtering techniques, segmentation of regions of the mammogram, calculation of the area of the lesions, thresholding of the lesion, and other tools important for the medical professional's diagnosis. The Wavelet Transform is used and integrated into the computer system, with the objective of allowing a multiresolution analysis, thus providing a method for identifying and analyzing microcalcifications.
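A sketch of the multiresolution step: a 2-D wavelet decomposition of the mammogram followed by simple thresholding of the detail sub-bands to highlight small, high-frequency structures such as microcalcification candidates. PyWavelets is assumed; the wavelet choice, decomposition level and threshold rule are illustrative, not the system's actual parameters.

import numpy as np
import pywt

def microcalcification_map(image, wavelet="db4", level=3, k=3.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    kept = [np.zeros_like(approx)]                       # discard the low-pass band
    for (cH, cV, cD) in details:
        bands = []
        for band in (cH, cV, cD):
            thr = k * np.median(np.abs(band)) / 0.6745   # robust noise-level estimate
            bands.append(pywt.threshold(band, thr, mode="hard"))
        kept.append(tuple(bands))
    detail_image = pywt.waverec2(kept, wavelet=wavelet)
    return detail_image[: image.shape[0], : image.shape[1]]  # crop possible padding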
Abstract:
Embedded systems are widespread nowadays. An example is the Digital Signal Processor (DSP), a device with high processing power. This work's contribution consists of presenting a DSP implementation of the system logic for detecting leaks in real time. Among the various leak detection methods available today, this work uses a technique based on pipeline pressure analysis employing the Wavelet Transform and Neural Networks. In this context, the DSP, in addition to performing the digital processing of the pressure signal, also communicates with a Global Positioning System (GPS), which helps locate the leak, and with a SCADA system, sharing information. To ensure robustness and reliability in the communication between the DSP and the SCADA system, the Modbus protocol is used. As this is a real-time application, special attention is given to the response time of each of the tasks performed by the DSP. Tests and leak simulations were performed using the facilities of the Laboratory of Evaluation of Measurement in Oil (LAMP) at the Federal University of Rio Grande do Norte (UFRN).
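An illustrative sketch of the detection pipeline described above: wavelet decomposition of pressure-signal windows to build a feature vector, and a small neural network classifier for "leak" vs. "no leak". The synthetic signals, feature choice and classifier settings are assumptions for demonstration only; they do not reproduce the DSP implementation or its real-time constraints.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(window, wavelet="db4", level=4):
    coeffs = pywt.wavedec(window, wavelet=wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])      # energy per decomposition band

rng = np.random.default_rng(0)

def synthetic_window(leak):
    t = np.arange(512)
    signal = np.sin(2 * np.pi * t / 64) + 0.1 * rng.normal(size=t.size)
    if leak:
        signal[256:] -= 0.5 * (1 - np.exp(-(t[256:] - 256) / 20.0))   # pressure-drop transient
    return signal

X = np.array([wavelet_features(synthetic_window(leak)) for leak in (0, 1) * 200])
y = np.array([0, 1] * 200)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))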
Abstract:
Wavelet coding is an efficient technique to overcome multipath fading effects, which are characterized by fluctuations in the intensity of signals transmitted over wireless channels. Since the wavelet symbols are non-equiprobable, modulation schemes play a significant role in the overall performance of wavelet systems. Thus, the development of an efficient design method is crucial to obtaining modulation schemes suitable for wavelet systems, principally when these systems employ wavelet encoding matrices of large dimensions. In this work, a design methodology is proposed to obtain sub-optimum modulation schemes for wavelet systems over Rayleigh fading channels. In this context, novel signal constellations and quantization schemes are obtained via genetic algorithms and mathematical tools. Numerical results obtained from simulations show that the wavelet-coded systems derived here have very good performance characteristics over fading channels.
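A toy genetic algorithm in the spirit of the design method above: it searches for a 2-D signal constellation, under an average-energy constraint, that reduces a probability-weighted pairwise-error proxy for non-equiprobable symbols. The fitness function (a union-bound-style AWGN proxy), the symbol probabilities and the GA settings are illustrative assumptions, not the methodology or the fading-channel criterion used in the work.

import numpy as np

rng = np.random.default_rng(0)
M = 8                                                  # constellation size
probs = rng.dirichlet(np.ones(M))                      # non-equiprobable symbol probabilities

def normalise(points):
    energy = np.sum(probs * np.sum(points**2, axis=1))
    return points / np.sqrt(energy)                    # unit average symbol energy

def fitness(points):
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    w = probs[:, None] * probs[None, :]
    np.fill_diagonal(d2, np.inf)
    return -np.sum(w * np.exp(-d2 / 4.0))              # union-bound-style pairwise error proxy

pop = [normalise(rng.normal(size=(M, 2))) for _ in range(60)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                                 # elitist selection
    children = []
    while len(children) < 40:
        a, b = rng.choice(20, size=2, replace=False)
        mask = rng.random((M, 1)) < 0.5                # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + 0.05 * rng.normal(size=child.shape)   # mutation
        children.append(normalise(child))
    pop = parents + children

best = max(pop, key=fitness)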
Abstract:
This work proposes a method based on the theory of reflected electromagnetic waves to evaluate the behavior of these waves and the level of attenuation caused by bone tissue. For this purpose, two microstrip antennas with a resonance frequency of 2.44 GHz were built. The problem is relevant because osteometabolic diseases affect a large portion of the population, both men and women. With this method, the signal is classified into two groups: bone tissue with normal bone mass and bone tissue with low bone mass. For this, feature extraction techniques (Wavelet Transform) and pattern recognition techniques (KNN and ANN) were used. The tests were performed on bovine bone and tissue with chemical substances; the methodology and results are described in this work.
Abstract:
Image compression consists in representing images with a small amount of data without loss of visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time. Compression of image data matters because many applications depend on images: medical imaging, satellite imaging, sensors, etc. In this work, a new method for compressing color images is proposed. The method is based on a measure of the information content of each band. The technique is called Self-Adaptive Compression (SAC), and each band of the image is compressed with a different threshold in order to better preserve information. SAC applies strong compression to highly redundant bands, that is, bands carrying less information, and soft compression to bands with a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into new, decorrelated bands using PCA. The DCT is then applied to each band. The lossy step occurs when a threshold discards some coefficients. This threshold is calculated from two elements: the PCA result and a user parameter. The user parameter defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (Mean Squared Error). Tests showed that SAC achieves better quality at high compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, that is, very little human intervention is required.
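A compact sketch of the pipeline as described: decorrelate the RGB bands with PCA, apply a DCT to each principal-component band, discard coefficients below a per-band threshold that shrinks for more informative bands (larger explained variance), then invert both transforms. The exact threshold rule and the role of the single user parameter are illustrative assumptions, not the SAC formulas.

import numpy as np
from scipy.fft import dctn, idctn

def sac_compress_reconstruct(rgb, user_tax=0.05):
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    centred = pixels - mean
    eigval, eigvec = np.linalg.eigh(np.cov(centred, rowvar=False))   # PCA of the colour bands
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    bands = (centred @ eigvec).T.reshape(3, h, w)                    # decorrelated bands

    info = eigval / eigval.sum()                                     # relative information per band
    recon_bands = []
    for band, share in zip(bands, info):
        coeffs = dctn(band, norm="ortho")
        thr = user_tax * np.max(np.abs(coeffs)) * (1.0 - share)      # softer threshold for informative bands
        coeffs[np.abs(coeffs) < thr] = 0.0                           # lossy step: discard small coefficients
        recon_bands.append(idctn(coeffs, norm="ortho"))

    recon = np.stack(recon_bands, axis=-1).reshape(-1, 3) @ eigvec.T + mean
    return recon.reshape(h, w, 3)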
Abstract:
Wavelet coding has emerged as an alternative coding technique to minimize the fading effects of wireless channels. This work evaluates the performance of wavelet coding, in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST 207 norm, the main international reference standard for GSM, UMTS and EDGE applications. The results show the efficiency of wavelet coding against the intersymbol interference that characterizes these communication scenarios. This robustness enables the use of the presented technique in different environments, bringing it one step closer to application in practical wireless communication systems.
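A simplified Monte Carlo baseline for the kind of evaluation described above: bit error rate of uncoded BPSK over a flat Rayleigh fading channel with coherent detection. The thesis evaluates wavelet coding on time-varying, frequency-selective COST 207 channels; this sketch only illustrates the BER simulation methodology, not the coded system or the channel model.

import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_rayleigh(snr_db, n_bits=200_000):
    snr = 10 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                         # BPSK mapping: 0 -> +1, 1 -> -1
    h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)   # Rayleigh gains
    noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2 * snr)
    received = h * symbols + noise
    detected = (np.real(np.conj(h) * received) < 0).astype(int)   # coherent detection
    return np.mean(detected != bits)

for snr_db in (0, 5, 10, 15, 20):
    print(snr_db, "dB ->", ber_bpsk_rayleigh(snr_db))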
Abstract:
There has been an increasing tendency towards the use of selective image compression, since several applications make use of digital images and, in some cases, the loss of information in certain regions is not allowed. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed in a lossless manner. A possible solution to this problem would be the automatic selection of these regions, a very difficult problem to solve in general. Nevertheless, it is possible to use intelligent techniques to detect these regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed in a lossless manner. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. In addition to manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method in which the region of the face is losslessly compressed. The results show that both can be successfully used with the compression method, given the map of the region of interest as an input.
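A sketch of the selective-compression idea: the region of interest (given by a binary mask, chosen manually or by one of the automatic detectors) is kept exactly, while the rest of the image goes through a lossy wavelet thresholding step. The competitive-network vector quantization, the morphology step and the adaptive Huffman coder are omitted; this only illustrates how the region-of-interest map drives the lossless/lossy split, with illustrative wavelet and threshold settings.

import numpy as np
import pywt

def selective_compress_reconstruct(image, roi_mask, wavelet="db4", level=3, threshold=20.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    lossy = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="hard") for band in detail)
        for detail in coeffs[1:]
    ]
    background = pywt.waverec2(lossy, wavelet)[: image.shape[0], : image.shape[1]]
    result = np.where(roi_mask, image.astype(float), background)   # ROI stays lossless
    return np.clip(np.rint(result), 0, 255).astype(np.uint8)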