995 results for Digital algorithms
Abstract:
Colour segmentation is the most commonly used method in road sign detection. Road signs contain several basic colours, such as red, yellow, blue and white, which vary by country. The objective of this thesis is to evaluate four colour segmentation algorithms: the Dynamic Threshold Algorithm, a modification of de la Escalera's algorithm, the Fuzzy Colour Segmentation Algorithm, and the Shadow and Highlight Invariant Algorithm. Processing time and segmentation success rate are used as the criteria for comparing the performance of the four algorithms, with red selected as the target colour for the comparison. All test images are selected randomly by category from the Traffic Signs Database of Dalarna University [1]; these road sign images were taken with a digital camera mounted in a moving car in Sweden. Experiments show that the Fuzzy Colour Segmentation Algorithm and the Shadow and Highlight Invariant Algorithm are more accurate and stable in detecting the red colour of road signs, and the method could also be applied to the analysis of other colours. For an evaluation of the four algorithms using yellow as the target colour, see the Master's thesis of Yumei Liu.
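None of the four evaluated algorithms is specified in the abstract; as a minimal sketch of the general idea behind colour segmentation for red signs, the following applies fixed HSV thresholds. The threshold values and the helper name are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of red-colour segmentation by fixed HSV thresholds.
# The threshold values are illustrative assumptions, not those of the
# four algorithms evaluated in the thesis.
import numpy as np

def segment_red_hsv(hsv_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of 'red' pixels in an HSV image.

    hsv_image: H in [0, 180), S and V in [0, 255] (OpenCV convention).
    Red wraps around hue 0, so two hue bands are combined.
    """
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    hue_red = (h < 10) | (h > 170)   # red sits at both ends of the hue circle
    saturated = s > 100              # reject washed-out pixels
    bright = v > 50                  # reject very dark pixels
    return hue_red & saturated & bright

# Example: one saturated red pixel and one blue pixel.
img = np.array([[[5, 200, 150], [120, 200, 150]]], dtype=np.uint8)
print(segment_red_hsv(img))  # [[ True False]]
```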
Abstract:
This paper describes a speech enhancement system (SES) based on a TMS320C31 digital signal processor (DSP) for real-time application. The SES algorithm is based on a modified spectral subtraction method, and a new speech activity detector (SAD) is used. The system presents a medium computational load, and a sampling rate of up to 18 kHz can be used. The goal is to use it to reduce noise in an analog telephone line.
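The modified spectral subtraction method itself is not given in the abstract; the sketch below shows the textbook magnitude-subtraction form, with noise estimated from assumed noise-only leading frames standing in for the paper's speech activity detector.

```python
# Minimal sketch of classic magnitude spectral subtraction.
# Assumes the first few frames are noise-only; this stands in for the
# paper's speech activity detector, which is not described here.
import numpy as np

def spectral_subtraction(x, frame_len=256, noise_frames=5, floor=0.01):
    """Denoise x by subtracting an average noise magnitude spectrum."""
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)  # noise estimate
    mag = np.abs(spectra) - noise_mag                        # subtract magnitudes
    mag = np.maximum(mag, floor * noise_mag)                 # spectral floor
    clean = mag * np.exp(1j * np.angle(spectra))             # keep noisy phase
    return np.fft.irfft(clean, n=frame_len, axis=1).ravel()

rng = np.random.default_rng(0)
t = np.arange(4096) / 18000.0            # 18 kHz sampling rate, as in the paper
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(t.size)
print(spectral_subtraction(noisy).shape)  # (4096,)
```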
Abstract:
This paper presents a dynamic programming approach for semi-automated road extraction from medium- and high-resolution images. The method is a modified version of a pre-existing dynamic programming method for road extraction from low-resolution images. The basic assumption of the pre-existing method is that roads manifest as lines in low-resolution images (pixel footprint > 2 m) and as such can be modeled and extracted as linear features. In contrast, roads manifest as ribbon features in medium- and high-resolution images (pixel footprint ≤ 2 m) and, as a result, the focus of road extraction becomes the road centerlines. The original method cannot accurately extract road centerlines from medium- and high-resolution images. In view of this, we propose a modification of the merit function of the original approach, carried out through a constraint function embedding road edge properties. Experimental results demonstrate the modified algorithm's potential for extracting road centerlines from medium- and high-resolution images.
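The paper's merit and constraint functions are not reproduced in the abstract; as a generic illustration of the dynamic programming family the method builds on, the sketch below finds the maximum-merit top-to-bottom path through a per-pixel merit image, with smoothness enforced by allowing at most one column of movement per row. The function name and the toy merit image are assumptions.

```python
# Generic sketch of dynamic programming over a per-pixel merit image:
# find the maximum-merit path from the top row to the bottom row, moving
# at most one column per row. The actual road-extraction merit function
# (with its road-edge constraint term) is not reproduced here.
import numpy as np

def best_path(merit: np.ndarray) -> list[int]:
    rows, cols = merit.shape
    score = merit.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            prev = lo + int(np.argmax(score[r - 1, lo:hi]))
            back[r, c] = prev
            score[r, c] += score[r - 1, prev]
    # Backtrack from the best cell in the last row.
    path = [int(np.argmax(score[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

merit = np.array([[0, 9, 0, 0],
                  [0, 0, 9, 0],
                  [0, 0, 9, 0],
                  [0, 9, 0, 0]])
print(best_path(merit))  # [1, 2, 2, 1]
```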
Abstract:
This paper discusses two pitch detection algorithms (PDAs) for simple audio signals, based on the zero-cross rate (ZCR) and the autocorrelation function (ACF). As is well known, pitch detection methods based on ZCR and ACF are widely used in signal processing. This work shows some features of, and problems in, using these methods, as well as some improvements developed to increase their performance. © 2008 IEEE.
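For concreteness, the following sketch estimates the pitch of a synthetic tone with both approaches; the sampling rate, search range and test signal are illustrative assumptions, not the paper's settings.

```python
# Minimal sketches of the two pitch detectors discussed: zero-cross rate
# (ZCR) and autocorrelation (ACF). All parameters are illustrative only.
import numpy as np

FS = 8000  # sampling rate in Hz (assumption)

def pitch_zcr(x):
    """Estimate pitch from the zero-crossing count: two crossings per period."""
    crossings = np.count_nonzero(np.diff(np.signbit(x)))
    return crossings * FS / (2.0 * len(x))

def pitch_acf(x, fmin=50, fmax=500):
    """Estimate pitch from the highest autocorrelation peak in the lag range."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = FS // fmax, FS // fmin
    lag = lo + int(np.argmax(acf[lo:hi]))
    return FS / lag

t = np.arange(2048) / FS
tone = np.sin(2 * np.pi * 220 * t)                     # 220 Hz test tone
print(round(pitch_zcr(tone)), round(pitch_acf(tone)))  # both ≈ 220
```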
Abstract:
This work addresses the filtering and reconstruction of intermediate-frequency signals using an FPGA. Algorithms based on digital signal processing are developed and implemented, covering everything from the design of the printed circuit board to its assembly and testing. The text presents a brief study of signal sampling and reconstruction in general. Special attention is given to the sampling of bandpass signals and to the analysis of practical issues in the reconstruction of intermediate-frequency signals. Two signal reconstruction systems based on digital signal processing, more specifically resampling in the discrete domain, are presented and analysed. Theories of electronic board assembly and soldering processes are also described, with the goal of defining a methodology for the design, assembly and soldering of electronic boards. This methodology is applied to the design and manufacture of a prototype digital filtering module for cellular telephony repeaters. The design, implemented on an FPGA, is based on the two systems mentioned above. At the end of the text, results obtained with the developed prototype in experiments on digital filtering and reconstruction of intermediate-frequency signals are presented.
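As a pointer to the bandpass-sampling theory the dissertation reviews, the sketch below enumerates the alias-free sampling-rate intervals given by the classic bandpass sampling condition; the example band is an assumption, not taken from the text.

```python
# Sketch of the classic bandpass sampling condition: a band [f_L, f_H]
# can be sampled without aliasing at rates f_s satisfying
#   2*f_H / n  <=  f_s  <=  2*f_L / (n - 1)
# for some integer n in 1..floor(f_H / (f_H - f_L)).
# The frequencies below are illustrative, not taken from the dissertation.
from math import floor

def valid_bandpass_rates(f_low: float, f_high: float):
    """Yield the (min, max) valid sampling-rate intervals for a bandpass signal."""
    bandwidth = f_high - f_low
    for n in range(1, floor(f_high / bandwidth) + 1):
        fs_min = 2.0 * f_high / n
        fs_max = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if fs_min <= fs_max:
            yield fs_min, fs_max

# Example: a 5 MHz-wide IF band centred at 70 MHz.
for fs_min, fs_max in valid_bandpass_rates(67.5e6, 72.5e6):
    print(f"{fs_min / 1e6:.2f} MHz .. {fs_max / 1e6:.2f} MHz")
```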
Abstract:
This work presents a method for finding a set of diverse, Pareto-optimal operating points for digital subscriber lines (DSL). Several works in the literature have proposed algorithms for optimizing data transmission over DSL lines that return only a single operating point for the modems. These works generally use spectrum balancing algorithms to solve a power allocation problem, which differs from the approach presented here. The proposed method, called diverseSB, uses a hybrid process composed of a multi-objective evolutionary algorithm (MOEA), more precisely the Non-Dominated Sorting Genetic Algorithm II (NSGA-II), together with a spectrum balancing algorithm. Simulation results show that, for a given diversity, the computational cost of determining diverse operating points with the proposed diverseSB algorithm is much lower than that of "brute-force" search methods. In the proposed method, NSGA-II makes calls to the adopted spectrum balancing algorithm; therefore, several tests involving the same number of calls to that algorithm were carried out with both the proposed diverseSB method and brute-force search, and the results obtained by diverseSB were far superior. For example, a brute-force search making 1600 calls to the spectrum balancing algorithm obtains a set of operating points whose diversity is similar to that achieved by the proposed diverseSB method with 535 calls.
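As a minimal sketch of the non-dominated sorting at the heart of NSGA-II, the code below implements the Pareto-dominance test and extracts the first front from toy operating points; it does not model the actual DSL objectives or the spectrum balancing calls.

```python
# Minimal sketch of the Pareto-dominance test underlying NSGA-II's
# non-dominated sorting. Objectives are assumed to be minimized; the
# actual DSL objectives (e.g. per-modem data rates) are not modelled here.
def dominates(a, b):
    """True if a is no worse than b in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated subset (the first Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

ops = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]  # toy operating points
print(first_front(ops))  # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```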
Abstract:
Most electronic systems can be described, in a very simplified way, as an assemblage of analog and digital components put together to perform a certain function. Nowadays, there is an increasing tendency to reduce the analog components and to replace them with operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper and more robust. However, no matter how much digital processing is implemented, there will always be an analog part to be dealt with, and thus the step of converting digital signals into analog signals and vice versa cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion from digital to analog signals and vice versa. In the present thesis, we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this aim, we have proposed the use of very coarsely quantized signals, i.e. 1-bit signals, for the acquisition and generation of particular classes of signals.
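The thesis' specific acquisition and generation strategies are not described in the abstract; as one representative (assumed, not the author's) way of obtaining a useful 1-bit signal, the sketch below encodes a signal with a first-order sigma-delta loop and recovers it with a moving average.

```python
# Toy illustration of 1-bit signal representation via a first-order
# sigma-delta modulator, a representative (not the thesis') technique.
import numpy as np

def sigma_delta_1bit(x):
    """Encode x (values in [-1, 1]) as a +/-1 bitstream with noise shaping."""
    bits = np.empty_like(x)
    integrator = 0.0
    for i, sample in enumerate(x):
        integrator += sample - (bits[i - 1] if i else 0.0)  # feedback of last bit
        bits[i] = 1.0 if integrator >= 0 else -1.0
    return bits

osr = 64                                  # oversampling ratio (assumption)
t = np.arange(osr * 200)
x = 0.5 * np.sin(2 * np.pi * t / (osr * 20))
bits = sigma_delta_1bit(x)
# A simple moving average recovers the slow signal from the bitstream.
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")
print(float(np.abs(recovered[osr:-osr] - x[osr:-osr]).max()) < 0.1)  # True
```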
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around it within a limited angular range. The problem of reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function containing a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least-squares data-fitting problem with the minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) that involves the 1-norm. We performed experiments and analysed the performance of the two methods, comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images. In conclusion, we observed that the 1-norm and Total Variation are valid tools in formulating the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reaches a relative error comparable to that of a version of the classic SGP algorithm while proving better in speed and in the early appearance of the structures representing the masses.
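In symbols, a plausible form of the resulting objective, with notation assumed rather than copied from the thesis (A the projection operator, b the measured projections, x the reconstructed volume):

```latex
% Plausible form of the reconstruction objective (notation assumed):
% an l1 data-fitting term replaces the usual least squares, plus TV regularization.
\min_{x \ge 0} \; \|Ax - b\|_1 \;+\; \lambda \, \mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_{i} \|(\nabla x)_i\|_2
```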
Abstract:
This paper presents parallel recursive algorithms for the computation of the inverse discrete Legendre transform and the inverse discrete Laguerre transform (IDLT). These recursive algorithms are derived using Clenshaw's recurrence formula, and they are implemented with a set of parallel digital filters with time-varying coefficients.
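Clenshaw's recurrence is the only ingredient named in the abstract; a minimal sketch of it applied to a Legendre series is shown below (the parallel time-varying filter structure itself is not reproduced).

```python
# Sketch of Clenshaw's recurrence applied to a Legendre series
# S(x) = sum_k c_k * P_k(x), the formula the paper's recursive
# algorithms are derived from.
def legendre_clenshaw(c, x):
    """Evaluate sum(c[k] * P_k(x)) via Clenshaw's backward recurrence.

    Legendre recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1},
    i.e. alpha_k = (2k+1) x / (k+1) and beta_k = -k / (k+1).
    """
    b1 = b2 = 0.0
    for k in range(len(c) - 1, -1, -1):
        alpha = (2 * k + 1) * x / (k + 1)     # alpha_k
        beta_next = -(k + 1) / (k + 2)        # beta_{k+1}
        b1, b2 = c[k] + alpha * b1 + beta_next * b2, b1
    return b1  # for Legendre, P_1 = alpha_0 * P_0, so S(x) = b_0

# P_2(x) = (3x^2 - 1) / 2; at x = 0.5 this is -0.125.
print(legendre_clenshaw([0.0, 0.0, 1.0], 0.5))  # -0.125
```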
Abstract:
The performance of parallel vector implementations of one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
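As one concrete instance of computing an orthogonal transform with an FFT kernel, the discrete Hartley transform follows from the real and imaginary parts of the DFT via a standard identity; this is a textbook illustration, not the paper's vectorized kernels.

```python
# Computing the discrete Hartley transform (DHT) from an FFT kernel,
# using the standard identity H_k = Re(F_k) - Im(F_k) for the
# cas(2*pi*k*n/N) = cos + sin kernel.
import numpy as np

def dht_via_fft(x: np.ndarray) -> np.ndarray:
    F = np.fft.fft(x)
    return F.real - F.imag

def dht_direct(x: np.ndarray) -> np.ndarray:
    n = np.arange(len(x))
    theta = 2 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(theta) + np.sin(theta)) @ x

x = np.random.default_rng(1).standard_normal(8)
print(np.allclose(dht_via_fft(x), dht_direct(x)))  # True
```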
Digital signal processing and digital system design using discrete cosine transform [student course]
Abstract:
The discrete cosine transform (DCT) is an important functional block for image processing applications. The implementation of a DCT has traditionally been viewed as a specialized research task. We apply a micro-architecture-based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also gain insight into how other signal processing algorithms can be translated into hardware implementations. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the learning experience in both the digital signal processing and digital design areas.
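The course's RTL design is not shown here; as a reference point, the following is a minimal software model of the direct-form 8-point DCT-II that such image-processing hardware typically implements, before any hardware-oriented optimization.

```python
# Minimal software model of the 8-point DCT-II commonly implemented in
# image-processing hardware (direct form; the course's optimized RTL
# micro-architecture is not reproduced here).
import math

def dct_8(x):
    """Orthonormal 8-point DCT-II of a length-8 sequence."""
    N = 8
    out = []
    for k in range(N):
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                               for n in range(N)))
    return out

# A constant input concentrates all energy in the DC coefficient:
# the first output is 8 / sqrt(8) = 2.828..., the rest are ~0.
print([round(v, 3) for v in dct_8([1.0] * 8)])
```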
Abstract:
The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires, using means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means 1) that the privacy issues we face online are not radically different from those of the pre-Internet era, and 2) that we need to reconceive of close reading as an activity of which both humans and computer algorithms are capable.