859 results for Analog-to-digital converters


Relevance:

100.00%

Publisher:

Abstract:

This work involved the development of a smart system dedicated to detecting surface burn in the grinding process through constant monitoring of acoustic emission and electrical power signals. A program in Visual Basic® for Windows® was developed which collects the signals through an analog-to-digital converter and processes them with previously established burn-detection algorithms. Three further parameters are proposed here and a comparative study is carried out. When burning occurs, the newly developed software sends a control signal warning the operator or interrupting the process, and delivers process information via the Internet. In parallel, the user can also intervene in the process via the Internet, changing parameters and/or monitoring the grinding process. The findings of the comparative study of the various parameters are also discussed here. Copyright © 2006 by ABCM.
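As a rough illustration of the block-wise monitoring loop described above, the following Python sketch reads simulated acoustic-emission and power blocks, combines them into a single burn index, and raises a warning when a threshold is exceeded. The index, the threshold value and the simulated data are assumptions for illustration only; they are not the specific detection parameters evaluated in the paper.

```python
import numpy as np

def burn_indicator(ae_block, power_block, threshold):
    """Generic per-block burn check: combine the acoustic-emission RMS with the
    peak electrical power into a single index and compare it to a threshold.
    (This only illustrates the block-wise monitoring structure; the paper's own
    parameters are not reproduced here.)"""
    ae_rms = np.sqrt(np.mean(ae_block ** 2))
    index = ae_rms * np.max(power_block)
    return index, index > threshold

# Hypothetical acquisition loop: each iteration stands for one block of samples
# read from the analog-to-digital converter during a grinding pass.
rng = np.random.default_rng(0)
for k in range(5):
    ae = 0.1 * rng.standard_normal(1000) + (0.5 if k == 3 else 0.0)       # simulated AE burst on pass 3
    power = 1.0 + 0.05 * rng.standard_normal(1000) + (0.4 if k == 3 else 0.0)
    index, burned = burn_indicator(ae, power, threshold=0.3)
    if burned:
        print(f"block {k}: burn suspected (index = {index:.2f}) -> warn operator / stop process")
```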

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

The relocation of newsroom work emerged in urban centers through investment in technology and has spread to major newspapers around the world. With the advent of the internet and continued technological development, newsroom production moved from analog to digital. Journalists no longer need to be physically present in the newsroom to capture, edit and distribute their notes, news items and reports online, which changes the management structures of editorial routines and of the profession itself. This monograph seeks to understand and characterize this broad shift toward virtualization in journalism by examining the production stages of Le Monde Diplomatique, a newspaper whose newsroom is undergoing deterritorialization, the people involved in this activity and the resources used, thereby contributing to the debate on new forms of organizing an editorial environment.

Relevance:

100.00%

Publisher:

Abstract:

Synthetic-heterodyne demodulation is a useful technique for dynamic displacement and velocity detection in interferometric sensors, as it can provide an output signal that is immune to interferometric drift. With the advent of cost-effective, high-speed real-time signal-processing systems and software, processing the complex signals encountered in interferometry has become more feasible. In synthetic-heterodyne demodulation, obtaining the actual dynamic displacement or vibration of the object under test requires knowledge of the interferometer visibility and of the argument of two Bessel functions. In this paper, a method is described for determining the former and for setting the Bessel-function argument to a value that ensures maximum sensitivity. Conventional synthetic-heterodyne demodulation requires two in-phase local oscillators; however, the phase of these oscillators relative to the interferometric signal is unknown. It is shown that, by using two additional quadrature local oscillators, a demodulated signal can be obtained that is independent of this phase difference. The experimental interferometer is a Michelson configuration using a visible single-mode laser whose current is sinusoidally modulated at a frequency of 20 kHz. The detected interferometer output is acquired using a 250 kHz analog-to-digital converter and processed in real time. The system is used to measure the displacement sensitivity, frequency response and linearity of a piezoelectric mirror shifter over a range of 500 Hz to 10 kHz. The experimental results show good agreement with two independent techniques: the signal-coincidence method and the so-called n-commuted Pernick method.
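A minimal numerical sketch of conventional synthetic-heterodyne demodulation is given below, assuming (unlike the method in the paper) that the local oscillators are already phase-aligned with the laser-current modulation. The detector model, modulation depth and test vibration are invented for illustration, and the quadrature-oscillator extension described above is not included.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.special import jv

# --- Simulated interferometer output (illustration only) ---
fs = 250e3             # ADC sample rate, as in the abstract (250 kHz)
f_mod = 20e3           # laser-current modulation frequency
t = np.arange(0, 0.02, 1 / fs)
C = 2.63               # modulation depth chosen so that J1(C) ~ J2(C)
phi = 0.5 * np.sin(2 * np.pi * 1e3 * t) + 0.3    # 1 kHz test vibration plus a drift term
v = 0.8                                           # interferometer visibility
det = 1 + v * np.cos(C * np.cos(2 * np.pi * f_mod * t) + phi)

# --- Conventional synthetic-heterodyne demodulation (in-phase LOs only) ---
def lowpass(x, fc, fs, order=4):
    b, a = butter(order, fc / (fs / 2))
    return filtfilt(b, a, x)

lo1 = np.cos(2 * np.pi * f_mod * t)        # LO at the modulation frequency
lo2 = np.cos(2 * np.pi * 2 * f_mod * t)    # LO at twice the modulation frequency
s1 = lowpass(det * lo1, 5e3, fs)           # ~ -v * J1(C) * sin(phi)
s2 = lowpass(det * lo2, 5e3, fs)           # ~ -v * J2(C) * cos(phi)

# Equalize the two channels by their Bessel weights; the arctangent then cancels
# the common -v factor (a constant pi offset remains, which does not affect the
# dynamic displacement).
phi_rec = np.unwrap(np.arctan2(s1 / jv(1, C), s2 / jv(2, C)))

core = slice(1000, -1000)                  # discard filter edge effects
print("AC phase std, injected vs recovered:",
      round(float(np.std(phi[core] - phi[core].mean())), 3),
      round(float(np.std(phi_rec[core] - phi_rec[core].mean())), 3))
```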

Relevance:

100.00%

Publisher:

Abstract:

“Cartographic heritage” is different from “cartographic history”. The second term refers to the study of the development of surveying and drawing techniques related to maps through time, i.e. across the different kinds of cultural environment that formed the background for the creation of maps. The first term concerns the whole body of ancient maps, together with those cultural environments, which history has handed down to us and which we perceive as cultural assets to be preserved and made available to many users (the public, institutions, experts). Unfortunately, ancient maps often suffer preservation problems of their analog support, mostly due to aging. Today, metric recovery in digital form and digital processing of historical cartography make it possible to preserve this map heritage. Moreover, modern geomatic techniques offer new ways of using historical information that would be unachievable on analog supports. In this PhD thesis, the whole digital workflow for the recovery and elaboration of ancient cartography is reported, with special emphasis on the use of digital tools in the preservation and elaboration of cartographic heritage. The workflow can be divided into three main steps, which reflect the chapter structure of the thesis itself: • map acquisition: conversion of the ancient map support from analog to digital by means of high-resolution scanning or 3D surveying (digital photogrammetry or laser scanning techniques); this process must be performed carefully, with special instruments, in order to reduce deformation as much as possible; • map georeferencing: reproducing in the digital image the native metric content of the map, or even improving it, by selecting a large number of still-existing ground control points (a minimal sketch of this step is given after this entry); in this way it is possible to understand the projection features of the historical map, as well as to evaluate and represent the deformation induced by the old type of cartographic transformation (which may be unknown to us), by surveying errors or by support deformation, errors that are usually large by modern standards; • data elaboration and management in a digital environment by means of modern software tools: vectorization, giving the map a new and more attractive graphic presentation (for instance, by creating a 3D model), superimposing it on current base maps, comparing it with other maps, and finally inserting it as a specific layer in a GIS or WebGIS environment. The study is supported by several case histories, each of them relevant to at least one step of the digital cartographic workflow.
The ancient maps taken into account are the following: • three maps of the Po river delta, made at the end of the 16th century by a famous land surveyor, Ottavio Fabri (sole author of the first map, co-author with Gerolamo Pontara of the second, and co-author with Bonajuto Lorini and others of the third), who wrote a methodological textbook explaining a new topographical instrument, the squadra mobile (mobile square), which he invented and used himself; today all three maps are preserved in the State Archive of Venice; • the Ichnoscenografia of Bologna by Filippo de’ Gnudi, made in 1702 and today preserved in the Archiginnasio Library of Bologna; it is a scenographic, bird's-eye view of the city, but it also has an ichnographic value, as the author himself declares; • the map of Bologna by the periti Gregorio Monari and Antonio Laghi, the first map of the city derived from a systematic survey, even though it was made only ten years later (1711–1712) than the map by de’ Gnudi; in this map the scenographic view was abandoned in favor of a more correct representation by orthogonal projection; today the map is preserved in the State Archive of Bologna; • the Gregorian Cadastre of Bologna, made in 1831 and updated until 1927, now preserved in the State Archive of Bologna; it is composed of 140 maps and 12 brogliardi (register volumes). In particular, the three maps of the Po river delta and the Cadastre were studied with respect to their acquisition procedure. Moreover, the first maps were analyzed from the georeferencing point of view, and the Cadastre was analyzed with respect to a possible GIS insertion. Finally, the Ichnoscenografia was used to illustrate a possible application of digital elaboration, namely 3D modeling. Last but not least, the study of an ancient map should start, whenever possible, from the consultation of the precious original analog document; analysis by means of current digital techniques then opens up new research opportunities in a rich and modern multidisciplinary context.
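As a minimal sketch of the georeferencing step, the following Python snippet estimates a six-parameter affine transformation from a handful of ground control points by least squares and reports the residual error. The control-point coordinates are invented for illustration, and the thesis may well rely on other transformation models (polynomial, projective) and dedicated GIS tools.

```python
import numpy as np

# Hypothetical ground control points: pixel coordinates measured on the scanned
# map (src) and the corresponding present-day map coordinates (dst).
src = np.array([[120.0,  80.0], [950.0, 110.0], [930.0, 760.0],
                [140.0, 740.0], [530.0, 420.0]])
dst = np.array([[1684200.0, 4989300.0], [1689100.0, 4989500.0],
                [1689000.0, 4985600.0], [1684300.0, 4985700.0],
                [1686600.0, 4987500.0]])

def fit_affine(src, dst):
    """Least-squares affine transform: dst ~= [x, y, 1] @ params (params is 3x2)."""
    G = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)
    return params

def apply_affine(params, pts):
    G = np.hstack([pts, np.ones((len(pts), 1))])
    return G @ params

params = fit_affine(src, dst)
residuals = apply_affine(params, src) - dst
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"RMSE over the control points: {rmse:.1f} map units")
```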

Relevance:

100.00%

Publisher:

Abstract:

We present a technique for online compression of ECG signals using the Golomb-Rice encoding algorithm. This is facilitated by a novel time-encoding asynchronous analog-to-digital converter targeted at low-power, implantable, long-term biomedical sensing applications. In contrast to capturing the actual signal (voltage) values, the asynchronous time encoder captures and encodes the times at which predefined changes occur in the signal, thereby minimizing the sensor's energy use and the number of bits stored, since unnecessary samples are never captured. The time encoder transforms the ECG signal into pure timing information with a geometric distribution, so that the Golomb-Rice encoding algorithm can be used to compress the data further. An overall online compression ratio of about 6:1 is achievable without the computations usually associated with most compression methods.
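A Golomb-Rice code is well matched to geometrically distributed integers such as the inter-event intervals produced by a time encoder: each value is split into a unary-coded quotient and a fixed-width remainder. The sketch below is a plain Python illustration of an encoder/decoder pair; the interval values and the Rice parameter k are made up, and the bitstream framing used in the paper's hardware is not reproduced.

```python
def golomb_rice_encode(values, k):
    """Encode non-negative integers with a Golomb-Rice code of parameter k (M = 2**k).
    Each value v is split into a quotient q = v >> k (unary: q ones then a zero)
    and a remainder r = v & (2**k - 1) (k plain binary bits, MSB first)."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q)                                      # unary quotient
        bits.append(0)                                            # terminator
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))   # binary remainder
    return bits

def golomb_rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:            # read the unary quotient
            q += 1
            i += 1
        i += 1                         # skip the 0 terminator
        r = 0
        for _ in range(k):             # read k remainder bits
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

# Hypothetical inter-event intervals (in clock ticks) from a time encoder;
# geometrically distributed values compress well with a suitable k.
intervals = [3, 5, 2, 9, 4, 1, 6, 2, 3, 7]
encoded = golomb_rice_encode(intervals, k=2)
assert golomb_rice_decode(encoded, k=2, count=len(intervals)) == intervals
print(len(encoded), "bits vs", 8 * len(intervals), "bits of raw 8-bit samples")
```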

Relevance:

100.00%

Publisher:

Abstract:

The constant development of digital systems in radio communications demands that current receiving equipment be adapted to the new technologies. In this context, a new Software Defined Radio based receiver is being implemented with the aim of carrying out different experiments to analyze the propagation of signals through the atmosphere from a satellite beacon. The receiver selected for this task is the PERSEUS SDR from the Italian company Microtelecom s.r.l. It is a software-defined VLF-LF-MF-HF receiver based on a direct-sampling digital architecture that features a 14-bit 80 MSamples/s analog-to-digital converter, a high-performance FPGA-based digital down-converter and a high-speed 480 Mbit/s USB 2.0 PC interface. The main goal is to implement the related software and adapt the new receiver to the current working environment. In this paper, SDR technology guidelines are given, and the digital signal processing performed by the PERSEUS receiver is presented together with the most notable results.
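To illustrate what a digital down-converter of this kind does, the following Python sketch mixes a directly sampled signal to baseband with a numerically controlled oscillator, low-pass filters it and decimates it to a low-rate complex (I/Q) stream. The beacon frequency, decimation factor and single-stage FIR filter are assumptions for illustration; the FPGA inside the PERSEUS implements a considerably more elaborate multi-stage down-converter.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 80e6            # ADC sample rate (80 MS/s, as in the abstract)
f_beacon = 10.368e6  # hypothetical beacon frequency within the HF range
decim = 100          # decimation factor (illustrative; real DDCs decimate in stages)

t = np.arange(200_000) / fs
rf = np.cos(2 * np.pi * f_beacon * t) + 0.01 * np.random.randn(t.size)  # stand-in for ADC samples

nco = np.exp(-2j * np.pi * f_beacon * t)       # numerically controlled oscillator
baseband = rf * nco                            # complex mix of the band of interest to DC

taps = firwin(255, (fs / decim) / 2, fs=fs)    # anti-alias low-pass before decimation
filtered = lfilter(taps, 1.0, baseband)
iq = filtered[::decim]                         # decimated complex (I/Q) output

print("output rate:", fs / decim, "Hz,", iq.size, "I/Q samples")
```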

Relevance:

100.00%

Publisher:

Abstract:

This letter presents a temperature-sensing technique based on the temperature dependence of MOSFET leakage currents. To mitigate the effects of process variation, the ratio of two different leakage-current measurements is calculated. Simulations show that this ratio is robust to process spread. The resulting sensor is quite small (0.0016 mm², including analog-to-digital conversion) and very energy efficient, consuming less than 640 pJ per conversion. After a two-point calibration, the error over a range of 40 °C to 110 °C is less than 1.5 °C, which makes the technique suitable for thermal management applications.
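The sketch below only illustrates the two-point calibration idea: two readings of the ratio at known temperatures define a mapping that is then applied to subsequent readings, assuming the reading varies roughly linearly with temperature over the range of interest. The numerical values are invented, and the actual ratio-to-temperature relation used in the letter may differ.

```python
def two_point_calibration(x1, t1, x2, t2):
    """Return a function mapping a sensor reading x (here, the ratio of two
    leakage-current measurements) to temperature, assuming the reading varies
    approximately linearly with temperature between the two calibration points."""
    slope = (t2 - t1) / (x2 - x1)
    return lambda x: t1 + slope * (x - x1)

# Hypothetical calibration readings at 40 degC and 110 degC (illustrative numbers only).
to_temp = two_point_calibration(x1=2.1, t1=40.0, x2=6.8, t2=110.0)
print(round(to_temp(4.0), 1), "degC for a mid-range ratio reading")
```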

Relevance:

100.00%

Publisher:

Abstract:

Total Ionization Dose (TID) is traditionally measured with radiation-sensitive FETs (RADFETs), which require a radiation-hardened Analog-to-Digital Converter (ADC) stage. This work introduces a TID sensor based on a delay path whose propagation time is sensitive to the absorbed radiation. It presents the following advantages: it is a digital sensor that can be integrated in CMOS circuits and in programmable systems such as FPGAs; it has a configurable sensitivity that allows the device to be used for radiation doses ranging from very low to relatively high levels; its interface makes it easy to integrate the sensor in a multidisciplinary sensor network; and it is self-timed, so it does not need a clock signal that could degrade its accuracy. The sensor has been prototyped in a 0.35 µm technology, has an area of 0.047 mm², of which 22% is dedicated to measuring radiation, and an energy per conversion of 463 pJ. Experimental irradiation tests have validated the correct response of the proposed TID sensor.

Relevance:

100.00%

Publisher:

Abstract:

In today's world, applications based on biometric systems, that is, systems that measure the electrical signals of our body, are growing rapidly. All of these systems incorporate biomedical sensors, which help users keep better track of different aspects of their daily routine, such as detailed monitoring of an exercise regime or of the quality of the food we eat. Among these biometric systems, those based on interpreting brain signals through electroencephalography (EEG) recordings are gaining momentum for the future, although they are still at an early stage owing to the great complexity of the human brain, largely unknown to scientists until the 21st century. For these reasons, devices that use a brain-computer interface (BCI) are becoming increasingly popular. A BCI system works by capturing a subject's brain waves and then processing them in an attempt to obtain a representation of an action or a thought of the individual. These thoughts, correctly interpreted, are then used to carry out an action. Examples of BCI applications include driving the motor of an electric wheelchair when the subject performs, for instance, the action of closing a fist, or opening the lock of one's own house using a personal brain pattern. Data-processing systems are evolving very quickly. The main reasons are the high processing speed and low power consumption of FPGAs (Field Programmable Gate Arrays). In addition, FPGAs have a reconfigurable architecture, which makes them more versatile and powerful than other processing units such as CPUs or GPUs. The CEI (Centro de Electrónica Industrial), where this final-year project (TFG) was carried out, has experience in the design of reconfigurable systems on FPGAs. This TFG is the second in a line of projects aiming to obtain a system capable of correctly processing brain signals, in order to arrive at a common pattern that allows us to act accordingly. More specifically, the goal is to detect when a person is falling asleep by capturing a particular kind of brain wave, known as alpha waves, whose frequency lies between 8 and 13 Hz. These waves, which appear when we close our eyes and clear our mind, represent a state of mental relaxation. This project therefore marks the start of a global BCI system and serves as a first contact with brain-wave processing, prior to the later use of reconfigurable hardware on which evolutionary algorithms will be implemented. It thus becomes necessary to develop a data-processing system on an FPGA. The data are processed following digital signal processing methodology, in this case a frequency analysis using the fast Fourier transform (FFT). Once the data-processing system has been developed, it is integrated with another system in charge of acquiring the data collected by an ADC (Analog-to-Digital Converter), the ADS1299. This ADC is specifically designed to capture potentials of the human brain.
In this way, the final system captures the data with the ADS1299 and sends them to the FPGA, which processes them. Interpretation is carried out by the users, who later analyze the processed data. For the development of the data-processing system, two study platforms were initially available, from which the data are captured for subsequent processing: 1. The first is a commercial tool developed and distributed by OpenBCI, a project that sells hardware for EEG and other recordings. This tool comprises a microprocessor, an SD memory module for data storage and a wireless communication module that transmits the data over Bluetooth. It also includes the aforementioned ADS1299 ADC. This platform offers a graphical interface that was used for the research prior to designing the processing system, providing a first contact with the system. 2. The second platform is an evaluation kit for the ADS1299, from which the different control ports can be accessed through the ADC's communication pins. This platform is connected to the FPGA in the integrated system. To understand how the simplest brain waves behave, and to determine the minimum requirements for EEG wave analysis, several consultations were held with Dr. Ceferino Maestú, a neurophysiologist at the Centro de Tecnología Biomédica (CTB) of the UPM. He introduced us to the procedures used in the analysis of electroencephalogram waves, as well as to how the electrodes should be placed on the skull. To conclude the preliminary research, a first data-processing model was built in MATLAB. A very important characteristic of brain waves is their randomness, which makes time-domain analysis very complex. The most important processing step is therefore the change from the time domain to the frequency domain, by applying the fast Fourier transform (FFT), where the captured data can be analyzed with greater precision. The model developed in MATLAB is used to obtain the first results of the processing system, which follows these steps: 1. The data are captured from the electrodes and written to a data table. 2. The data are read from the table. 3. The temporal size of the sample to be processed is chosen. 4. A window is applied to avoid discontinuities at the beginning and end of the analyzed block. 5. The sample to be transformed is completed with zero-padding in the time domain. 6. The FFT is applied to the windowed, zero-padded block. 7. The results are plotted for analysis (a minimal sketch of steps 3-7 is given below). At this point, the capture of alpha waves proves to be very feasible. Although some problems arise when interpreting the data because of the low temporal resolution of the OpenBCI platform, this problem is solved in the developed model, since the evaluation kit (the data-acquisition system) allows the data-capture rate, i.e. the sampling frequency, to be adjusted, which directly affects this precision.
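Sketched here in Python (rather than the MATLAB of the original model), the snippet below mirrors the windowing, zero-padding and FFT steps listed above and checks how much energy falls in the 8-13 Hz alpha band. The sampling rate, block length and synthetic "alpha-like" test signal are assumptions for illustration only.

```python
import numpy as np

def block_spectrum(samples, fs, nfft=1024):
    """Window an EEG block, zero-pad it and return the magnitude spectrum
    (mirrors steps 3-7 of the MATLAB model described above)."""
    windowed = samples * np.hanning(len(samples))   # step 4: window against edge discontinuities
    padded = np.zeros(nfft)                          # step 5: zero-padding in the time domain
    padded[:len(windowed)] = windowed
    spectrum = np.abs(np.fft.rfft(padded))           # step 6: FFT of the prepared block
    freqs = np.fft.rfftfreq(nfft, d=1 / fs)
    return freqs, spectrum

# Hypothetical test: a 10 Hz "alpha-like" tone sampled at 250 Hz for 1 second.
fs = 250.0
t = np.arange(0, 1.0, 1 / fs)
eeg_block = np.sin(2 * np.pi * 10.0 * t) + 0.2 * np.random.randn(t.size)

freqs, spec = block_spectrum(eeg_block, fs)
alpha = (freqs >= 8) & (freqs <= 13)                 # alpha band: 8-13 Hz
print("dominant frequency:", freqs[np.argmax(spec)], "Hz")
print("alpha-band energy fraction:", spec[alpha].sum() / spec.sum())
```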
Once the first processing run and the analysis of its results were complete, a hardware model was built following the same steps as the MATLAB model, insofar as this was useful and feasible. The XPS (Xilinx Platform Studio) tool, part of the EDK (Embedded Development Kit), was used to design an embedded system. This system includes: a soft-core microprocessor called MicroBlaze, which manages and controls the whole system; an FFT block that performs the fast Fourier transform; four BRAM memory blocks, in which the input and output data of the FFT block are stored, plus a multiplier to apply the window to the FFT input data; and a PLB bus, a control bus that connects the MicroBlaze with the different elements of the system. After the hardware design, the software was designed using the SDK (Software Development Kit). The data-acquisition system, which is controlled mostly from the MicroBlaze, is also integrated at this stage. From this environment the MicroBlaze is programmed to manage the generated hardware. The software handles the communication between the two systems, data acquisition and data processing, and also loads the window data into the corresponding memory. In the first development stages, the FFT block was tested on its own to verify its operation in hardware. For this first test, the FFT input data are loaded into one BRAM and the window data into another; the processed data are written to two BRAMs, one for the real values of the transform and one for the imaginary values. After verifying the correct operation of the FFT block, it was integrated with the data-acquisition system, and a real EEG recording was then made to capture alpha waves. In addition, to validate the use of FPGAs as ideal processing units, the time the FFT block takes to perform the transform was measured and compared with the time MATLAB takes to perform the same transform on the same data. The hardware system performs the fast Fourier transform 27 times faster than MATLAB, which shows the great competitive advantage of hardware in terms of execution time. On the educational side, this TFG covers several fields. In electronics: knowledge of MATLAB was improved, including tools it offers such as FDATool (Filter Design and Analysis Tool); signal-processing techniques, in particular spectral analysis, were learned; knowledge of VHDL and of its use in the Xilinx ISE environment was improved; C programming skills were reinforced by programming the MicroBlaze to control the system; and embedded systems were built using the Xilinx development environment with the EDK (Embedded Development Kit) tool. In the field of neurology, we learned how to perform EEG recordings and how to analyze and interpret their results.
In terms of social impact, BCI systems affect many sectors, most notably the large number of people with physical disabilities, for whom such a system offers an opportunity to increase their day-to-day autonomy. Another important sector is medical research, where BCI systems are applicable in many contexts, for example the detection and study of cognitive diseases.

Relevance:

100.00%

Publisher:

Abstract:

Long-term recording of biomedical signals such as ECG, EMG, respiration and other information (e.g. body motion) can improve diagnosis and potentially monitor the evolution of many widespread diseases. However, long-term monitoring requires specific solutions: portable and wearable equipment that is particularly comfortable for patients. The key issues of portable biomedical instrumentation are power consumption, long-term sensor stability, comfortable wearing and wireless connectivity. In this scenario, it is valuable to realize prototypes using available technologies to assess long-term personal monitoring and foster new ways of providing healthcare services. The aim of this work is to discuss the advantages and drawbacks of long-term monitoring of biopotentials and body movements using textile electrodes embedded in clothes. The textile electrodes were embedded into garments; a tight-fitting shirt and shorts were used to acquire electrocardiographic and electromyographic signals. The garment was equipped with low-power electronics for signal acquisition and wireless data transmission via Bluetooth. A small, battery-powered biopotential amplifier and three-axis acceleration body monitor was realized. The patient monitor incorporates a microcontroller and analog-to-digital signal conversion at programmable sampling frequencies. The system was able to acquire and transmit real-time signals, within a 10 m range, to any Bluetooth device (including a PDA or mobile phone). The electronics were embedded in the shirt and proved comfortable for patients to wear. Small MEMS three-axis accelerometers were also integrated. © 2011 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise shaping for any centre frequency from DC to Nyquist. The proposed topologies are well suited to Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, the bandwidth and the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop filters, not only overcomes the limitations associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means of increasing the conversion bandwidth, thereby relaxing the need for faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators were designed using this technique. The mathematical modelling and evaluation of tones caused by the finite word lengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 development kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines included the decimation process as well. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate in either single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, number of paths and oversampling ratio.
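For readers unfamiliar with sigma-delta conversion, the short Python sketch below shows the basic noise-shaping mechanism with a generic first-order lowpass modulator and a crude averaging decimator. It is only a baseline illustration under those assumptions; it does not reproduce the generalized time-interleaved variable bandpass topologies developed in the thesis.

```python
import numpy as np

def first_order_sdm(x):
    """Generic first-order lowpass sigma-delta modulator: an integrator and a
    1-bit quantizer in a feedback loop, so quantization noise is pushed away
    from DC. (Baseline illustration only, not the thesis' bandpass topologies.)"""
    v, y_prev = 0.0, 0.0
    y = np.empty_like(x)
    for n, sample in enumerate(x):
        v += sample - y_prev               # accumulate the input/feedback error
        y[n] = 1.0 if v >= 0.0 else -1.0   # 1-bit quantizer
        y_prev = y[n]
    return y

osr = 128                                   # oversampling ratio
n = np.arange(1 << 15)
x = 0.5 * np.sin(2 * np.pi * 0.001 * n)     # slow (in-band) test tone
bits = first_order_sdm(x)

# Crude decimation filter: moving average over one OSR window, then downsample.
kernel = np.ones(osr) / osr
recovered = np.convolve(bits, kernel, mode="same")[::osr]
err = recovered - x[::osr]
print("RMS reconstruction error:", np.sqrt(np.mean(err[10:-10] ** 2)))
```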

Relevance:

100.00%

Publisher:

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time and are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering that occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only had an impact on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.

Relevance:

100.00%

Publisher:

Abstract:

Much has been written about transferring class materials and teaching techniques to digital platforms, but less has been written about applying heuristic organizing constructs in the same manner. With the transformation of learning ecologies over the past decades, as well as the requirement to adjust to constantly shifting digital tools and environments, the challenge for learning facilitators is to adapt and change readily, and to engage a changing learner demographic; most important, however, is to engage effectively with learners in these online environments. This article reviews the existing literature on the heuristic construct of academagogy [1] and applies a case-study methodology to discuss the first application of academagogy to the online delivery of an undergraduate design unit. Through a focus on effective teaching and learning techniques, the transfer from face-to-face (f2f) to the digital realm is explored through four main focal points: tools for teaching, teaching and learning, communicating with students, and effective teaching methods. These four focal points are then used to discuss ways of meeting the challenges of teaching online, including how they create new dimensions in teaching practice and how the digital experience changes learning experiences. The article concludes by reflecting on and consolidating the similarities and differences between the face-to-face and digital deliveries, and by suggesting changes to the academagogic heuristic to enable its easier use in a digital space.

Relevance:

100.00%

Publisher:

Abstract:

Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of both the specificities of each form, but also, and importantly, the ongoing dialogues and interrelations between them as communication forms.