899 results for digital signal processor


Relevance:

30.00%

Publisher:

Abstract:

Red blood cell (RBC) parameters such as morphology, volume, refractive index, and hemoglobin content are of great importance for diagnostic purposes. Existing approaches require complicated calibration procedures and involve substantial cell perturbation. As a result, reference values for normal RBCs differ depending on the method used. We present a method for measuring parameters of intact individual RBCs using digital holographic microscopy (DHM), a new interferometric and label-free technique with nanometric axial sensitivity. The results are compared with values obtained by conventional techniques for RBCs of the same donor and with previously published figures. A DHM equipped with a laser diode (lambda = 663 nm) was used to record holograms in an off-axis geometry. Measurements of both RBC refractive index and volume were achieved by monitoring the quantitative phase map of RBCs during sequential perfusion of two isotonic solutions with different refractive indices, obtained by the use of Nycodenz (decoupling procedure). The volume of RBCs labeled with the membrane dye DiI was analyzed by confocal microscopy. The mean cell volume (MCV), red blood cell distribution width (RDW), and mean cell hemoglobin concentration (MCHC) were also measured with an impedance volume analyzer. DHM yielded RBC refractive index n = 1.418 +/- 0.012, volume 83 +/- 14 fl, MCH = 29.9 pg, and MCHC = 362 +/- 40 g/l. Erythrocyte MCV, MCH, and MCHC obtained with the impedance volume analyzer were 82 fl, 28.6 pg, and 349 g/l, respectively. Confocal microscopy yielded 91 +/- 17 fl for RBC volume. In conclusion, DHM combined with a decoupling procedure allows noninvasive measurement of the volume, refractive index, and hemoglobin content of single living RBCs with high accuracy.
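The decoupling procedure described above admits a simple closed form: with two phase maps phi_1 and phi_2 recorded in media of known indices n1 and n2, the cell index and thickness follow from phi_i = (2*pi/lambda)*(n_cell - n_i)*d. A minimal sketch (the function name and the numerical values in the usage note are illustrative, not taken from the paper):

```python
import numpy as np

def decouple(phi1, phi2, n1, n2, wavelength=663e-9):
    """Recover cell refractive index and thickness from two phase maps
    measured in perfusion media of known refractive indices n1 and n2,
    using phi_i = (2*pi/wavelength) * (n_cell - n_i) * d for i = 1, 2."""
    k = 2 * np.pi / wavelength
    d = (phi1 - phi2) / (k * (n2 - n1))            # cell thickness
    n_cell = n1 + phi1 * (n2 - n1) / (phi1 - phi2) # cell refractive index
    return n_cell, d
```

Applied pixel-wise to the two quantitative phase maps, this yields the refractive-index and thickness maps from which volume and hemoglobin content can be integrated.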

Relevance:

30.00%

Publisher:

Abstract:

Digital holographic microscopy (DHM) allows optical-path-difference (OPD) measurements with nanometric accuracy. OPD induced by transparent cells depends on both the refractive index (RI) of cells and their morphology. This Letter presents a dual-wavelength DHM that allows us to separately measure both the RI and the cellular thickness by exploiting an enhanced dispersion of the perfusion medium achieved by the utilization of an extracellular dye. The two wavelengths are chosen in the vicinity of the absorption peak of the dye, where the absorption is accompanied by a significant variation of the RI as a function of the wavelength.
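Writing OPD_j = (n_c - n_m(lambda_j)) * t for the two wavelengths and treating x = (n_c*t, t) as the unknowns turns the measurement into a 2x2 linear system. A hedged sketch (names and values are ours; it assumes the cell's own dispersion between the two wavelengths is negligible, while the dyed medium's indices nm1 and nm2 differ):

```python
import numpy as np

def ri_and_thickness(opd1, opd2, nm1, nm2):
    """Separate cell refractive index n_c and thickness t from OPDs at
    two wavelengths, given the dye-enhanced medium indices nm1, nm2.
    OPD_j = (n_c - nm_j) * t is linear in x = (n_c * t, t)."""
    A = np.array([[1.0, -nm1],
                  [1.0, -nm2]])
    x = np.linalg.solve(A, np.array([opd1, opd2]))
    t = x[1]
    n_c = x[0] / t
    return n_c, t
```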

Relevance:

30.00%

Publisher:

Abstract:

Interfacing of various subjects generates new fields of study and research that help advance human knowledge. One of the latest such fields is neurotechnology, an effective amalgamation of neuroscience, physics, biomedical engineering, and computational methods. Neurotechnology provides a platform for physicists, neurologists, and engineers to interact and to break down methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics. However, there is still a need for an effective dialogue among physicists, neurologists, and engineers. Applications of computer-based technology in medicine, such as signal and image processing and the creation of clinical databases to help clinicians, are widely acknowledged. Such synergy between widely separated disciplines may help enhance the effectiveness of existing diagnostic methods. One recent method in this direction is the analysis of the electroencephalogram with methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal. Ultimately, this can be used as a diagnostic tool.

Relevance:

30.00%

Publisher:

Abstract:

In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancers. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of useful features to distinguish benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammogram. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method to extract the boundaries of calcifications from manually selected seed pixels. Taking into account that the shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were addressed to obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms whose real diagnoses were known in advance from biopsies. This allowed identifying the following shape-based parameters as the relevant ones: number of clusters, number of holes, area, Feret elongation, roughness, and elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which supports considering the proposed method for assisting diagnosis and encourages continuing the investigation by adding new features not only related to shape.
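A fixed-tolerance region growing step of the kind described can be sketched as follows. This is a generic illustration, not the authors' implementation; the 4-connectivity and the absolute intensity tolerance around the seed value are assumptions:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Fixed-tolerance region growing: starting from a manually selected
    seed pixel, include 4-connected neighbours whose intensity lies
    within +/- tol of the seed value; returns a boolean mask."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

Shape features such as area, number of holes, and elongation would then be computed from the resulting boundary masks.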

Relevance:

30.00%

Publisher:

Abstract:

This paper reviews a study to determine the usefulness of signal processing along with lipreading in improving speech perception of profoundly hearing-impaired persons.

Relevance:

30.00%

Publisher:

Abstract:

Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
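The mapping from object class and LiDAR-derived vegetation height to a spatially-distributed friction coefficient might look like the following sketch. The class codes, the Manning's n values, and the linear height-to-friction rule are illustrative assumptions, not values from the paper:

```python
import numpy as np

def friction_map(dsm, dtm, object_class):
    """Sketch: derive a Manning's n friction map from LiDAR heights
    (DSM - DTM) and a map-derived object classification.
    Class codes: 0 bare floodplain, 1 road, 2 building, 3 vegetation.
    All n values are illustrative assumptions."""
    height = dsm - dtm                    # feature height above ground
    n = np.full(dsm.shape, 0.035)         # default floodplain value
    n[object_class == 1] = 0.015          # roads: smooth surface
    n[object_class == 2] = np.inf         # buildings: blocked cells
    veg = (object_class == 3)
    # short vegetation (< ~1 m) still contributes friction
    n[veg] = 0.04 + 0.05 * np.minimum(height[veg], 1.0)
    return n
```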

Relevance:

30.00%

Publisher:

Abstract:

A highly stable microvolt amplifier for use with atmospheric broadband thermopile radiometers is described. The amplifier has a nominal gain of 500 for bipolar input signals in the range ±10 mV from a floating source. The noise level at the input is less than 5 µV (at 100 kΩ input impedance), permitting instantaneous diffuse solar radiation measurements to 0.5 W m⁻² resolution with 12-bit analog-to-digital conversion. The temperature stability of the gain is better than 5 ppm/°C (−4 to 20 °C). Averaged over a decade of use, the long-term drift of the amplifier gain is less than ~0.02%/yr. As well as radiometers measuring solar and terrestrial radiation, the amplifier has also been successfully used with low-level signals from thermocouples and ground heat flux plates.
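The quoted figures are self-consistent: if the 12-bit converter's full scale matches the amplified ±10 mV span (an assumption on our part), the input-referred step size is close to the stated sub-5 µV noise floor:

```python
def lsb_input_referred(span_v=20e-3, bits=12):
    """Input-referred ADC step: the +/-10 mV (20 mV) input span divided
    into 2**12 codes, i.e. the smallest resolvable input change."""
    return span_v / 2 ** bits  # ~4.9 microvolts per count
```

At ~4.9 µV per count, one ADC step at the input is comparable to the < 5 µV noise level, so the digitisation does not waste the amplifier's resolution.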

Relevance:

30.00%

Publisher:

Abstract:

An information processor for rendering input data compatible with standard video recording and/or display equipment, comprising means for digitizing the input data over periods which are synchronous with the fields of a standard video signal; a store adapted to store the digitized data and release stored digitized data in correspondence with the line scan of a standard video monitor, the store having two halves which correspond to the interlaced fields of a standard video signal and being so arranged that one half is filled while the other is emptied; and means for converting the released stored digitized data into video luminance signals. The input signals may be in digital or analogue form. A second stage which reconstitutes the recorded data is also described.
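The two-half store is a classic double (ping-pong) buffer: the digitiser fills one half while the video side drains the other, and the halves swap at each field boundary. A behavioural sketch (class and method names are ours, not from the patent text):

```python
class PingPongStore:
    """Sketch of the two-half store: one half is filled with digitised
    samples while the other is released in step with the video line
    scan; the halves swap roles at every field boundary."""
    def __init__(self, size):
        self.halves = [[0] * size, [0] * size]
        self.write_half = 0               # half currently being filled

    def swap(self):                       # called at each field boundary
        self.write_half ^= 1

    def write(self, i, sample):           # digitiser side
        self.halves[self.write_half][i] = sample

    def read(self, i):                    # video line-scan side
        return self.halves[self.write_half ^ 1][i]
```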

Relevance:

30.00%

Publisher:

Abstract:

In this paper a new nonlinear digital baseband predistorter design is introduced based on direct learning, together with a new Wiener system modeling approach for high power amplifiers (HPAs) based on the B-spline neural network. The contribution is twofold. Firstly, by assuming that the nonlinearity in the HPA depends mainly on the input signal amplitude, the complex-valued nonlinear static function is represented by two real-valued B-spline neural networks, one for the amplitude distortion and another for the phase shift. The Gauss-Newton algorithm is applied for the parameter estimation, in which the De Boor recursion is employed to calculate both the B-spline curve and the first-order derivatives. Secondly, we derive the predistorter algorithm by calculating the inverse of the complex-valued nonlinear static function of the B-spline neural network based Wiener model. The inverses of the amplitude and phase shift distortions are then computed and compensated using the identified phase shift model. Numerical examples have been employed to demonstrate the efficacy of the proposed approaches.
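The De Boor machinery rests on the Cox-de Boor recursion for the B-spline basis functions. A textbook sketch of that recursion (not the authors' implementation, which also propagates first-order derivatives for the Gauss-Newton step):

```python
def bspline_basis(knots, degree, i, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis
    function of the given degree over the given knot vector."""
    if degree == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    den1 = knots[i + degree] - knots[i]
    if den1 > 0:
        left = ((x - knots[i]) / den1
                * bspline_basis(knots, degree - 1, i, x))
    den2 = knots[i + degree + 1] - knots[i + 1]
    if den2 > 0:
        right = ((knots[i + degree + 1] - x) / den2
                 * bspline_basis(knots, degree - 1, i + 1, x))
    return left + right
```

A B-spline curve is then a weighted sum of such basis functions, with the weights playing the role of the network parameters estimated by Gauss-Newton.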

Relevance:

30.00%

Publisher:

Abstract:

Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and a decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g. for playback) through decoding. The decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality low-bit-rate audio coding, i.e. algorithms that reduce the original bit-rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among these implementations. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. A further purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.

Relevance:

30.00%

Publisher:

Abstract:

The focus of this thesis is the development and modeling of an interface architecture for interfacing analog signals in mixed-signal SoCs. We claim that the approach presented here achieves a wide frequency range and covers a large range of applications with constant performance, allied to digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to promote application configurability in the digital domain, which leads to a mixed-signal interface. The use of a fixed analog block avoids the performance loss common to configurable analog blocks. Configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation, and synthesis to implement the target application, with very good performance prediction. The proposed approach uses the concept of frequency translation (mixing) of the input signal followed by its conversion to the ΣΔ domain, which allows the use of a fairly constant analog block and a uniform treatment of input signals from DC to high frequencies. The programmability is performed in the ΣΔ digital domain, where the performance required by the application specification can be closely achieved. Theoretical and simulation models of the interface performance are developed for design space exploration and for physical design support. Two prototypes are built and characterized to validate the proposed model and to implement some application examples. The use of this interface as a multi-band parametric ADC and as a two-channel analog multiplier and adder is shown. A multi-channel analog interface architecture is also presented. The characterization measurements support the main advantages of the proposed approach.
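The conversion to the ΣΔ domain can be illustrated with a textbook first-order modulator, which encodes the input as a 1-bit stream whose local average tracks the signal. This is a generic behavioural sketch, not the thesis's circuit:

```python
def sigma_delta(samples):
    """First-order sigma-delta modulator sketch: integrate the error
    between the input and the fed-back 1-bit output, then quantise the
    integrator state to +/-1. Inputs are assumed to lie in [-1, 1]."""
    integ = 0.0   # integrator state
    fb = 0.0      # fed-back quantiser output
    bits = []
    for x in samples:
        integ += x - fb
        fb = 1.0 if integ >= 0 else -1.0
        bits.append(fb)
    return bits
```

The downstream digital processing (filtering, decimation, channel programmability) then operates entirely on this bitstream, which is what keeps the analog front end fixed.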

Relevance:

30.00%

Publisher:

Abstract:

Modern wireless systems employ adaptive techniques to provide high throughput while observing desired coverage, Quality of Service (QoS), and capacity requirements. An alternative for further enhancing data rate is to apply cognitive radio concepts, where a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques like Automatic Modulation Classification (AMC) could help or even be vital in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal which may not hold (e.g. Gaussianity of the noise). This work proposes a new method for AMC which uses a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It is capable of extracting similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
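One common sample estimator of the centred correntropy coefficient with a Gaussian kernel can be sketched as follows; this is a generic ITL formulation (the kernel width sigma is a free parameter), not necessarily the exact estimator used in the paper:

```python
import math

def correntropy_coeff(x, y, sigma=1.0):
    """Centred correntropy coefficient between sequences x and y with a
    Gaussian kernel: cross-correntropy minus its mean, normalised by
    the auto terms. Equals 1 for identical non-constant sequences."""
    n = len(x)
    k = lambda a, b: math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
    vxy = sum(k(a, b) for a, b in zip(x, y)) / n            # correntropy
    mxy = sum(k(a, b) for a in x for b in y) / n ** 2       # mean term
    cxy = vxy - mxy
    cxx = 1.0 - sum(k(a, b) for a in x for b in x) / n ** 2
    cyy = 1.0 - sum(k(a, b) for a in y for b in y) / n ** 2
    return cxy / math.sqrt(cxx * cyy)
```

Because the Gaussian kernel brings in all even-order moments of the difference, the measure captures higher-order statistics that the plain correlation coefficient misses.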