989 results for digital signal processor
Abstract:
The multidimensional process of physical, psychological, and social change produced by population ageing affects not only the quality of life of elderly people but also that of our societies. Some dimensions of ageing grow and expand over time (e.g. knowledge of world events, or experience in particular situations), while others decline (e.g. reaction time, physical and psychological strength, or other functional abilities, with reduced speed and increased tiredness). Information and Communication Technologies (ICTs) can help the elderly overcome possible limitations due to ageing. As a particular case, biometrics can allow the development of new algorithms for early detection of cognitive impairments by processing continuous speech, handwriting or other challenged abilities. Among all possibilities, digital applications (Apps) for mobile phones or tablets can allow the dissemination of such tools. In this article, after presenting and discussing the process of population ageing and its social implications, we explore how ICTs, through different Apps, can lead to new solutions for facing this major demographic challenge.
Abstract:
NlmCategory="UNASSIGNED">A version of cascaded systems analysis was developed specifically with the aim of studying quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled in four types of x-ray detectors used for digital mammography: four flat panel systems, one computed radiography and one slot-scan silicon wafer based photon counting device. As required inputs to the model, the two dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for six mammography systems that utilized these different detectors. A new method to reconstruct anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector is described; an image of a sharp, circular disc was used for this purpose. The effective pixel fill factor for the FP systems was determined from the axial 1D presampling MTFs measured with a square sharp edge along the two orthogonal directions of the pixel lattice. Expectation MTFs were then calculated by averaging the radial MTFs over all possible phases and the 2D EMTF formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogenous images acquired as a function of detector air kerma. This was further decomposed into the correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the radially averaged EMTF(2). This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on NPS and detector DQE. The influence of noise statistics, pixel fill factor and additional electronic and fixed pattern noises on the DQE was also studied. The 2D cascaded model and decompositions performed on the acquired images also enlightened the observed quantum NPS and DQE anisotropy.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
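To make the firing model concrete, here is a minimal sketch (in Python, not RVC-CAL) of dataflow nodes that communicate only through queues and fire only when enough tokens are available; the class and method names are illustrative, not part of the thesis.

```python
from collections import deque

# Minimal dataflow-node sketch (illustrative, not RVC-CAL): a node fires only
# when its firing rule is satisfied, i.e. enough tokens are queued on its input.
class Node:
    def __init__(self, needed, action):
        self.inbox = deque()     # the only allowed communication channel
        self.needed = needed     # tokens required per firing
        self.action = action     # calculation performed on consumed tokens

    def can_fire(self):
        return len(self.inbox) >= self.needed

    def fire(self, downstream=None):
        tokens = [self.inbox.popleft() for _ in range(self.needed)]
        out = self.action(tokens)
        if downstream is not None:
            downstream.inbox.append(out)   # edge = queue to the next node
        return out

# Example: a two-tap averaging node feeding a scaling node.
avg = Node(2, lambda t: sum(t) / 2)
scale = Node(1, lambda t: 10 * t[0])
avg.inbox.extend([1.0, 3.0])
if avg.can_fire():
    avg.fire(downstream=scale)
if scale.can_fire():
    print(scale.fire())   # 20.0
```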
Abstract:
Ventricular late potentials are low-amplitude signals originating from damaged myocardium and detected on the body surface by ECG filtering and averaging. The digital filters present in commercial equipment may interfere with the ability to stratify arrhythmia risk. We compared the 40-Hz BiSpec (BI) filter and the classical 40- to 250-Hz band-pass bidirectional Butterworth (BD) filter in terms of their impact on time-domain variables and diagnostic properties. In a transverse retrospective age-adjusted case-control study, 221 subjects in sinus rhythm without bundle branch block were divided into three groups after signal-averaged ECG acquisition: GI (N = 40), clinically normal controls; GII (N = 158), subjects with coronary heart disease without sustained monomorphic ventricular tachycardia (SMVT); and GIII (N = 23), subjects with heart disease and documented SMVT. Conventional variables analyzed from vector magnitude data, after averaging to a final noise level of 0.3 µV, were obtained by applying each filter to the averaged signal and evaluated in pairs by numerical comparison and by diagnostic agreement assessment, using conventional and optimized thresholds of normality. Significant differences were found between BI and BD variables in all groups, with diagnostic results showing significant disagreement between the two filters [kappa value of 0.61 (P<0.05) for GII and 0.31 for GIII (P = NS)]. Sensitivity for SMVT was lower with BI than with BD (65.2 vs 91.3%, respectively, P<0.05). The filters provided significantly different numerical and diagnostic results, and the BI filter showed only limited clinical applicability to risk stratification of ventricular arrhythmia.
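As an illustration of the classical reference filter, the sketch below applies a 40- to 250-Hz band-pass Butterworth filter bidirectionally (forward-backward, i.e. zero-phase) to a toy signal; the sampling rate, filter order and signal are assumptions, not parameters reported in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative sketch of a bidirectional (forward-backward) Butterworth
# band-pass filter, as used for late-potential analysis. The sampling rate,
# filter order and toy signal are assumptions, not values from the study.
fs = 1000.0                      # assumed sampling rate (Hz)
order = 4                        # assumed filter order
low, high = 40.0, 250.0          # pass band (Hz)

b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")

t = np.arange(0, 1.0, 1 / fs)
ecg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 80 * t)  # toy signal

filtered = filtfilt(b, a, ecg)   # zero-phase (bidirectional) filtering
print(filtered[:5])
```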
Abstract:
Interfacing various subjects generates new fields of study and research that help advance human knowledge. One of the latest such fields is Neurotechnology, which is an effective amalgamation of neuroscience, physics, biomedical engineering and computational methods. Neurotechnology provides a platform for physicists, neurologists and engineers to interact and to break down methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics. However, there is a need for an effective dialogue among physicists, neurologists and engineers. Applications of computer-based technology in the field of medicine, such as signal and image processing and the creation of clinical databases for helping clinicians, are widely acknowledged. Such synergic effects between widely separated disciplines may help enhance the effectiveness of existing diagnostic methods. One of the recent methods in this direction is the analysis of the electroencephalogram with the help of methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and other related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal. Ultimately this can be used as a diagnostic tool.
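A common first step in nonlinear-dynamics analysis of EEG is time-delay (phase-space) embedding; the short sketch below reconstructs delay vectors from a single-channel signal, with the delay and embedding dimension chosen purely for illustration (the abstract does not specify these values).

```python
import numpy as np

# Illustrative time-delay embedding of a single-channel EEG-like signal.
# tau (delay) and m (embedding dimension) are illustrative choices; nonlinear
# measures such as the correlation dimension are then computed on the vectors.
def delay_embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # toy signal

vectors = delay_embed(eeg, m=5, tau=8)    # points in a 5-dimensional phase space
print(vectors.shape)
```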
Abstract:
In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancer. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of useful features to distinguish benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammography. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method to extract boundaries of calcifications from manually selected seed pixels. Taking into account that shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were addressed to obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms with their real diagnosis known in advance from biopsies. This allowed identifying the following shape-based parameters as the relevant ones: Number of clusters, Number of holes, Area, Feret elongation, Roughness, and Elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which allows the proposed method to be considered for assisting diagnosis and encourages continuing the investigation by adding new features not only related to shape.
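As a rough illustration of the segmentation step, the sketch below grows a region from a manually chosen seed pixel, adding 4-connected neighbours whose intensity stays within a fixed tolerance of the seed value; the tolerance, connectivity and toy image are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from collections import deque

# Illustrative fixed-tolerance region growing from a seed pixel (4-connectivity).
# The tolerance and the toy image are assumptions, not parameters from the paper.
def region_grow(image, seed, tol):
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

img = np.zeros((64, 64))
img[20:30, 20:30] = 200.0            # bright blob standing in for a calcification
grown = region_grow(img, seed=(25, 25), tol=50.0)
print(grown.sum())                   # number of pixels in the grown region
```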
Abstract:
This paper discusses the Nucleus 22 cochlear implant.
Abstract:
This paper reviews a study to determine the usefulness of signal processing along with lipreading in improving speech perception of profoundly hearing impaired persons.
Abstract:
Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
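A simple way to picture the friction parameterization is sketched below: vegetation height is taken as the difference between the DSM and the bare-earth DTM and then mapped to a Manning's n value; the height thresholds and n values are illustrative assumptions, not those used by the post-processor.

```python
import numpy as np

# Illustrative friction-map sketch: vegetation height = DSM - DTM, then a
# height-dependent Manning's n. Thresholds and n values are assumptions,
# not the post-processor's calibrated parameters.
dsm = np.array([[12.0, 12.4, 15.0],
                [12.1, 12.9, 18.2],
                [12.0, 12.2, 12.1]])      # surface elevations (m)
dtm = np.full_like(dsm, 12.0)             # bare-earth elevations (m)

veg_height = dsm - dtm                    # vegetation height (m)

manning_n = np.where(veg_height < 0.2, 0.03,      # paved ground / short grass
              np.where(veg_height < 1.0, 0.06,    # short vegetation (< ~1 m)
                       0.10))                     # tall vegetation (trees, hedges)
print(manning_n)
```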
Abstract:
A highly stable microvolt amplifier for use with atmospheric broadband thermopile radiometers is described. The amplifier has a nominal gain of 500, for bipolar input signals in the range ±10 mV from a floating source. The noise level at the input is less than 5 µV (at 100 kΩ input impedance), permitting instantaneous diffuse solar radiation measurements to 0.5 W m⁻² resolution with 12-bit analog-to-digital conversion. The temperature stability of the gain is better than 5 ppm/°C (−4 to 20 °C). Averaged over a decade of use, the long-term drift of the amplifier gain is less than ~0.02%/yr. As well as with radiometers measuring solar and terrestrial radiation, the amplifier has also been successfully used with low-level signals from thermocouples and ground heat flux plates.
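To see how the quoted noise level relates to the digitization, a back-of-the-envelope check is sketched below, assuming a ±5 V ADC span after the gain of 500; the ADC range is an assumption, not a figure from the paper.

```python
# Back-of-the-envelope check (illustrative; the ADC span is an assumption).
gain = 500.0              # nominal amplifier gain
adc_bits = 12             # 12-bit analog-to-digital conversion
adc_span_volts = 10.0     # assumed ADC input span, e.g. -5 V to +5 V

lsb_at_adc = adc_span_volts / 2**adc_bits        # ~2.44 mV per count
lsb_at_input = lsb_at_adc / gain                 # input-referred resolution
print(f"input-referred LSB ~= {lsb_at_input * 1e6:.1f} uV")  # ~4.9 uV,
# comparable to the quoted <5 uV input noise level
```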
Abstract:
An information processor for rendering input data compatible with standard video recording and/or display equipment, comprising means for digitizing the input data over periods which are synchronous with the fields of a standard video signal, a store adapted to store the digitized data and release stored digitized data in correspondence with the line scan of a standard video monitor, the store having two halves which correspond to the interlaced fields of a standard video signal and being so arranged that one half is filled while the other is emptied, and means for converting the released stored digitized data into video luminance signals. The input signals may be in digital or analogue form. A second stage which reconstitutes the recorded data is also described.
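The two-half store is essentially a ping-pong (double) buffer synchronized to the interlaced fields; a minimal sketch of that idea follows, with buffer sizes and sample data chosen purely for illustration.

```python
# Minimal ping-pong (double) buffer sketch, illustrating the two-half store:
# one half is filled with newly digitized samples while the other is drained
# in step with the video line scan. Sizes and data are illustrative.
class PingPongStore:
    def __init__(self, half_size):
        self.halves = [[], []]
        self.half_size = half_size
        self.fill_idx = 0                 # half currently being filled

    def write(self, sample):
        self.halves[self.fill_idx].append(sample)
        if len(self.halves[self.fill_idx]) == self.half_size:
            self.fill_idx ^= 1            # swap halves at the field boundary

    def read_other_half(self):
        drain = self.halves[self.fill_idx ^ 1]
        out, drain[:] = list(drain), []   # release data for the video field
        return out

store = PingPongStore(half_size=4)
for s in range(8):
    store.write(s)
print(store.read_other_half())            # samples released for one field
```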
Abstract:
In this paper a new nonlinear digital baseband predistorter design is introduced based on direct learning, together with a new Wiener system modelling approach for high power amplifiers (HPAs) based on the B-spline neural network. The contribution is twofold. Firstly, by assuming that the nonlinearity in the HPA depends mainly on the input signal amplitude, the complex-valued nonlinear static function is represented by two real-valued B-spline neural networks, one for the amplitude distortion and another for the phase shift. The Gauss-Newton algorithm is applied for the parameter estimation, in which the De Boor recursion is employed to calculate both the B-spline curve and its first-order derivatives. Secondly, we derive the predistorter algorithm by calculating the inverse of the complex-valued nonlinear static function of the B-spline neural network based Wiener models. The inverses of the amplitude and phase shift distortions are then computed and compensated for using the identified phase shift model. Numerical examples are employed to demonstrate the efficacy of the proposed approaches.
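Since the De Boor recursion is central to evaluating the B-spline model, a compact sketch of the standard algorithm is given below; the degree, knot vector and control points are illustrative, not the identified HPA model.

```python
# Standard De Boor recursion for evaluating a B-spline curve S(x) (illustrative;
# the degree, knots and control points below are not the identified HPA model).
def de_boor(k, x, t, c, p):
    # k: knot span with t[k] <= x < t[k+1]; t: knots; c: control points; p: degree
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

p = 3                                       # cubic B-spline
t = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]          # clamped knot vector
c = [0.0, 1.0, 0.5, 2.0, 1.5, 0.8]          # control points (e.g. an amplitude gain curve)
x = 1.5
k = 4                                       # span index with t[k] <= x < t[k+1]
print(de_boor(k, x, t, c, p))
```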
Abstract:
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and a decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g. for playback) through decoding. The decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e. algorithms that reduce the original bit-rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among these implementations. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
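For a rough sense of the bit-rate reduction involved, the sketch below compares uncompressed CD-quality stereo audio with an assumed typical AAC target bit-rate of 128 kbit/s (an illustrative figure, not one taken from this work).

```python
# Rough bit-rate comparison (illustrative): uncompressed CD-quality stereo
# versus an assumed AAC target of 128 kbit/s (not a figure from this work).
sample_rate = 44_100        # samples per second
bits_per_sample = 16
channels = 2

pcm_bitrate = sample_rate * bits_per_sample * channels      # bits per second
aac_bitrate = 128_000                                        # assumed coded rate

print(f"PCM: {pcm_bitrate / 1000:.1f} kbit/s")               # ~1411.2 kbit/s
print(f"compression ratio ~ {pcm_bitrate / aac_bitrate:.1f}:1")
```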