952 results for signal analysis
Abstract:
Noise analysis is a well-established method for sensor performance surveillance. In particular, monitoring the response time of a sensor is an efficient way to anticipate failures and to have the opportunity to prevent them. In this work, the response times of several sensors of Trillo NPP are estimated by means of noise analysis. The procedure consists of modelling each sensor with autoregressive methods and obtaining the sought parameter by analyzing the response of the model when a ramp is simulated as the input signal. Core-exit thermocouples and in-core self-powered neutron detectors are the main sensors analyzed, but other plant sensors are studied as well. Since several measurement campaigns have been carried out, it has also been possible to analyze the evolution of the estimated parameters over more than one fuel cycle. Some sensitivity studies on the sampling frequency of the signals and its influence on the response time are also included. Calculations and analysis have been done within the framework of a collaboration agreement between the Trillo NPP operator (CNAT) and the School of Mines of Madrid.
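A minimal sketch of this procedure, under assumptions not taken from the paper: a hypothetical noise record `x` with sampling period `dt` is fitted with an AR model of assumed order `p`, the sensor is modelled as H(z) = 1/A(z), and the response time is read off as the asymptotic delay of the model's response to a unit-slope ramp.

```python
# Sketch only: `x`, `dt` and `p` are illustrative, not the campaign data.
import numpy as np
from scipy.signal import lfilter
from statsmodels.regression.linear_model import yule_walker

def ramp_response_time(x, dt, p=20, n_steps=5000):
    rho, _ = yule_walker(x, order=p)      # AR coefficients via Yule-Walker
    a = np.concatenate(([1.0], -rho))     # denominator of H(z) = 1/A(z)
    t = np.arange(n_steps) * dt           # unit-slope ramp input
    y = lfilter([1.0], a, t)              # model response to the ramp
    gain = 1.0 / a.sum()                  # DC gain of the AR filter
    return t[-1] - y[-1] / gain           # asymptotic ramp delay, in seconds
```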
Abstract:
Atrial fibrillation (AF) is a common heart disorder. One of the most prominent hypotheses about its initiation and maintenance considers multiple uncoordinated activation foci inside the atrium. However, the implicit assumption behind all the signal processing techniques used for AF, such as dominant frequency and organization analysis, is the existence of a single regular component in the observed signals. In this paper we take into account the existence of multiple foci, performing a spectral analysis to detect their number and frequencies. In order to obtain a cleaner signal on which the spectral analysis can be performed, we introduce sparsity-aware learning techniques to infer the spike trains corresponding to the activations. The good performance of the proposed algorithm is demonstrated on both synthetic and real data. Summary: an algorithm based on sparse regression techniques for the extraction of the cardiac signals in patients with atrial fibrillation (AF).
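A minimal sketch of the sparsity-aware idea, not the paper's algorithm: the observed atrial signal `obs` is modelled as the convolution of an assumed activation waveform `w` with a sparse spike train, recovered here with an l1-penalized (Lasso) regression, after which the dominant focal frequencies are read from the spike-train spectrum. `obs`, `w`, `fs` and `alpha` are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

def recover_spikes(obs, w, fs, alpha=0.05):
    col = np.concatenate((w, np.zeros(len(obs) - len(w))))
    D = toeplitz(col, np.zeros(len(obs)))         # convolution dictionary
    spikes = Lasso(alpha=alpha, positive=True).fit(D, obs).coef_
    spec = np.abs(np.fft.rfft(spikes)) ** 2       # spectrum of the clean spike train
    freqs = np.fft.rfftfreq(len(spikes), 1.0 / fs)
    return spikes, freqs, spec
```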
Abstract:
The tools used for chaos analysis have been conditioned by the type of signals obtained, which in almost every case have analogue characteristics. In certain cases, however, a chaotic digital signal is obtained, and these signals need a different approach than conventional analogue ones. The main objective of this paper is to present some possible approaches to the study of these signals and how information about their characteristics may be obtained in the most straightforward way possible. We have obtained digital chaotic signals from an Optical Logic Cell with feedback between the output and one of the possible control gates. This chaos has been reported in several papers, and its characteristics have been employed as a possible method for secure communications and as a way to perform encryption. In both cases, the influence of perturbations in the transmission medium caused problems both for the synchronization of the chaotic generators at emitter and receiver and for the recovery of the information data. A proposed way to analyze the presence of a perturbation is to study the noise content of the transmitted signal and to implement a way to eliminate it. In the present case, the digital signal is converted to a multilevel one by grouping bits into packets of 8 bits and applying conventional methods of time-frequency analysis to them. The results give information about the change in signal characteristics and hence some information about the noise or perturbations present. Representations equivalent to the phase and Feigenbaum diagrams for digital signals are employed in this case.
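A sketch of the bit-packing step described above, with hypothetical inputs: `bits` is a 0/1 array of the digital chaotic stream and `fs` its bit rate. Packets of 8 bits become one multilevel (0-255) sample, to which a conventional time-frequency analysis (here, a spectrogram) is applied.

```python
import numpy as np
from scipy.signal import spectrogram

def multilevel_time_frequency(bits, fs):
    n = (len(bits) // 8) * 8
    packets = np.asarray(bits[:n]).reshape(-1, 8)
    levels = packets @ (1 << np.arange(7, -1, -1))    # MSB-first 8-bit packing
    f, t, Sxx = spectrogram(levels.astype(float), fs=fs / 8)
    return levels, f, t, Sxx
```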
Abstract:
Whole-brain connectivity analysis of diffusion MRI data suffers from distortions caused by standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal dropout, an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a “theoretically correct” and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (namely fieldmap-based, reversed-encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
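A hedged sketch of one possible geometrical-accuracy criterion, under assumptions not from the paper: `true_disp` is the phantom's known applied displacement field and `recovered_disp` the field estimated by a correction method, both of shape (X, Y, Z, 3) in millimetres; a lower mean residual ranks better.

```python
import numpy as np

def mean_residual_error(true_disp, recovered_disp):
    # Mean Euclidean distance between applied and recovered displacements.
    return np.linalg.norm(true_disp - recovered_disp, axis=-1).mean()
```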
Abstract:
Background: Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method: The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results: Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered as first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components with origins inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods: No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions: The application of either SSS or tSSS is mandatory in Elekta systems; no significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and therefore their use seems advisable.
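A minimal sketch of the SNR definition quoted above, for a single channel: the mean amplitude of the evoked field divided by the standard error of the mean over trials. `epochs` is a hypothetical (n_trials, n_samples) array.

```python
import numpy as np

def evoked_snr(epochs):
    evoked = epochs.mean(axis=0)                               # evoked field
    sem = epochs.std(axis=0, ddof=1) / np.sqrt(len(epochs))    # SEM over trials
    return np.abs(evoked).mean() / sem.mean()
```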
Abstract:
We extend in this paper some previous results concerning the differential-algebraic index of hybrid models of electrical and electronic circuits. Specifically, we present a comprehensive index characterization which holds without passivity requirements, in contrast to previous approaches, and which applies to nonlinear circuits composed of uncoupled, one-port devices. The index conditions, which are stated in terms of the forest structure of certain digraph minors, do not depend on the specific tree chosen in the formulation of the hybrid equations. Additionally, we show how to include memristors in hybrid circuit models; in this direction, we extend the index analysis to circuits including active memristors, which have been recently used in the design of nonlinear oscillators and chaotic circuits. We also discuss the extension of these results to circuits with controlled sources, making our framework of interest in the analysis of circuits with transistors, amplifiers, and other multiterminal devices.
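For reference, a hedged sketch of the device law that such a hybrid model must accommodate: the classical charge-controlled memristor (Chua's textbook definition, not the paper's specific formulation) is governed by

$$ v(t) = M\big(q(t)\big)\, i(t), \qquad \frac{dq}{dt} = i(t), $$

where $M(q) = d\varphi/dq$ is the memristance derived from the flux-charge constitutive relation $\varphi = \varphi(q)$.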
Abstract:
Conventional dual-rail precharge logic suffers from the difficulty of implementing a dual-rail structure that achieves strict compensation between the counterpart rails. As a lightweight and high-speed dual-rail style, balanced cell-based dual-rail logic (BCDL) uses synchronised compound gates with a global precharge signal to provide high resistance against differential power or electromagnetic analyses. BCDL can be realised from generic field programmable gate array (FPGA) design flows with constraints. However, routing remains a concern because of the limited flexibility of routing control, which unfavourably results in bias between complementary nets in security-sensitive parts. In this article, novel verifications of the routing effect, based on a routing-repair technique, are presented. An 8-bit simplified Advanced Encryption Standard (AES) co-processor, constructed on block-RAM-based BCDL, is implemented in Xilinx Virtex-5 FPGAs. Since imbalanced routings are the major defect in BCDL, the authors can rule out other influences and fairly quantify the security variations. A series of asymptotic correlation electromagnetic (EM) analyses is launched against a group of circuits with consecutive routing schemes in order to verify the impact of routing on side-channel analyses. After repairing the non-identical routings, mutual information analyses are executed to further validate the concrete security increase obtained from identical routing pairs in BCDL.
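An illustrative sketch of the kind of correlation analysis mentioned above (not the authors' code): the Pearson correlation between measured EM traces and a Hamming-weight leakage hypothesis for one guessed key byte. `traces` (n, samples), `plaintexts` (n, uint8) and `sbox` (e.g., the AES S-box as a uint8 array) are hypothetical inputs.

```python
import numpy as np

def correlation_trace(traces, plaintexts, key_guess, sbox):
    inter = sbox[plaintexts ^ key_guess]                     # predicted intermediate
    hw = np.unpackbits(inter[:, None], axis=1).sum(axis=1)   # Hamming-weight model
    hw = hw - hw.mean()
    t = traces - traces.mean(axis=0)
    # Pearson correlation at every sample point of the trace.
    return (hw @ t) / (np.linalg.norm(hw) * np.linalg.norm(t, axis=0))
```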
Abstract:
We present a novel framework for the analysis and optimization of encoding latency for multiview video. First, we characterize the elements that influence encoding latency performance: (i) the multiview prediction structure and (ii) the hardware encoder model. Then, we provide algorithms to find the encoding latency of any arbitrary multiview prediction structure. The proposed framework relies on the directed acyclic graph encoder latency (DAGEL) model, which provides an abstraction of the processing capacity of the encoder by considering an unbounded number of processors. Using graph-theoretic algorithms, the DAGEL model allows us to compute the encoding latency of a given prediction structure and determine the contribution of the prediction dependencies to it. As an example of DAGEL application, we propose an algorithm to reduce the encoding latency of a given multiview prediction structure down to a target value. In our approach, a minimum number of frame dependencies are pruned until the latency target is achieved, thus minimizing the degradation of rate-distortion performance due to the removal of prediction dependencies. Finally, we analyze the latency performance of the DAGEL-derived prediction structures in multiview encoders with limited processing capacity.
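A minimal sketch of the critical-path computation under the unbounded-processor abstraction (an illustration of the idea, not the DAGEL implementation): `dependencies` is a hypothetical list of (reference, dependent) frame pairs and `cost` maps each frame to its encoding time.

```python
import networkx as nx

def encoding_latency(dependencies, cost):
    g = nx.DiGraph(dependencies)
    finish = {}
    for frame in nx.topological_sort(g):        # process frames in dependency order
        start = max((finish[p] for p in g.predecessors(frame)), default=0.0)
        finish[frame] = start + cost[frame]     # a frame starts when its refs finish
    return max(finish.values())                 # latency = critical-path length
```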
Abstract:
Understanding the radio signal transmission characteristics of the environment where a telerobotic application is to be deployed is key to achieving a reliable wireless communication link between a telerobot and a control station. In this paper, wireless communication requirements and a case study of a typical telerobotic application in an underground facility at CERN are presented. Then, the theoretical and experimental characteristics of radio propagation are investigated with respect to time, distance, location and surrounding objects. Based on an analysis of the experimental findings, we show how a commercial wireless system, such as Wi-Fi, can be made suitable for the case study application at CERN.
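A hedged illustration of the kind of distance model used in such propagation studies: the standard log-distance path-loss law. The exponent `n` and reference loss `pl0_db` below are placeholders, not values measured in the CERN facility.

```python
import numpy as np

def path_loss_db(d, d0=1.0, pl0_db=40.0, n=2.2):
    # Path loss (dB) at distance d, relative to reference distance d0.
    return pl0_db + 10.0 * n * np.log10(np.asarray(d, dtype=float) / d0)
```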
Abstract:
The analysis of interference modes has an increasing range of application, especially in the field of optical biosensors. In this type of sensor, a displacement Δν of the interference modes of the transduction signal is observed when a particular biological agent is placed over the biosensor. In order to measure this displacement, the position of a maximum (or a minimum) of the signal must be detected before and after placing the agent over the sensor. A parameter of great importance for this kind of sensor is the period Pν of the signal, which is inversely proportional to the optical thickness h0 of the sensor in the absence of the biological agent. Increasing this period improves the sensitivity of the sensor but worsens the detection of the maximum, so its effect on the uncertainty of the measurement result is twofold: improved sensitivity versus increasing difficulty in detecting the extremum. In this paper, the authors analyze the propagation of uncertainties in these sensors when least squares techniques are used for the detection of the maxima (or minima) of the signal, applying the techniques described in Supplement 2 of the ISO GUM Guide. The result of the analysis provides a metrologically justified answer to the question of what the optimal period Pν of the signal is.
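A minimal sketch of the uncertainty propagation described above, under assumptions: the extremum position is located by a least-squares parabola fit, and its uncertainty is propagated by Monte Carlo in the spirit of GUM Supplement 2. `nu` (mode positions), `s` (signal samples) and `sigma` (assumed noise standard deviation) are hypothetical inputs.

```python
import numpy as np

def peak_position_uncertainty(nu, s, sigma, n_mc=2000, seed=0):
    rng = np.random.default_rng(seed)
    positions = np.empty(n_mc)
    for k in range(n_mc):
        c = np.polyfit(nu, s + rng.normal(0.0, sigma, s.shape), 2)  # LS parabola
        positions[k] = -c[1] / (2.0 * c[0])                         # vertex position
    return positions.mean(), positions.std(ddof=1)                  # estimate, u(x)
```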
Abstract:
The calibration results of an anemometer equipped with several rotors of varying size were analyzed. In each case, the 30-pulses-per-turn output signal of the anemometer was studied using Fourier series decomposition and correlated with the anemometer factor (i.e., the anemometer transfer function). Also, a 3-cup analytical model was correlated with the data resulting from the wind tunnel measurements. Results indicate good correlation between the post-processed output signal and the working condition of the cup anemometer. This correlation was also reflected in the results from the proposed analytical model. The present work reveals the possibility of remotely checking cup anemometer status, indicating the presence of anomalies and, therefore, a decrease in the reliability of the wind sensor.
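A sketch of the harmonic-analysis step, with hypothetical inputs: `omega` holds rotation-speed samples derived from the 30-pulses-per-turn output, so each turn spans 30 samples. The average turn is decomposed into its Fourier harmonics, whose amplitudes can then be correlated with the anemometer factor.

```python
import numpy as np

def turn_harmonics(omega, pulses_per_turn=30, n_harmonics=3):
    n = (len(omega) // pulses_per_turn) * pulses_per_turn
    cycle = np.asarray(omega[:n]).reshape(-1, pulses_per_turn).mean(axis=0)
    coeffs = np.fft.rfft(cycle) / pulses_per_turn
    return 2.0 * np.abs(coeffs[1 : n_harmonics + 1])   # 1st..3rd harmonic amplitudes
```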
Abstract:
An analytical study of cepstral peak prominence (CPP) is presented, intended to provide insight into its meaning and its relation to voice perturbation parameters. To carry out this analysis, a parametric approach is adopted in which voice production is modelled using the traditional source-filter model and the first cepstral peak is assumed to have a Gaussian shape. It is concluded that the meaning of CPP is very similar to that of the first rahmonic, and some insights are provided on its dependence on fundamental frequency and vocal tract resonances. It is further shown that CPP integrates measures of voice waveform and periodicity perturbations, be they amplitude, frequency or noise perturbations.
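A minimal sketch of a standard CPP computation (implementations differ in details such as the trend-fitting range): the height of the first cepstral peak above a linear regression of the cepstrum. `x` is a hypothetical voiced frame, `fs` its sampling rate, and `f0_range` an assumed pitch range.

```python
import numpy as np

def cepstral_peak_prominence(x, fs, f0_range=(60.0, 400.0)):
    spec = np.log(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12)
    ceps = np.fft.irfft(spec)                            # real cepstrum
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
    peak = lo + np.argmax(ceps[lo:hi])                   # first rahmonic peak
    q = np.arange(len(ceps)) / fs                        # quefrency axis (s)
    trend = np.polyval(np.polyfit(q[lo:hi], ceps[lo:hi], 1), q[peak])
    return ceps[peak] - trend                            # CPP (log-magnitude units)
```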
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle such a drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Results are also compared with two classic uniform embedding techniques; the goal is thus to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, it is characterized with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was tested on three different physiological databases (speech pathologies, epilepsy and heart murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the applications under study.
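A hedged sketch contrasting the two reconstruction styles compared above: the same routine yields a uniform embedding when the lags form an arithmetic sequence, and a non-uniform embedding for any other lag set (the lag values below are illustrative only, not the paper's selected lags).

```python
import numpy as np

def embed(x, lags):
    """Delay-embed signal `x` using the given sample lags."""
    lags = np.asarray(lags)
    n = len(x) - lags.max()
    return np.stack([np.asarray(x)[l : l + n] for l in lags], axis=1)

# uniform embedding (dimension 3, lag 5):  embed(x, [0, 5, 10])
# non-uniform embedding (selected lags):   embed(x, [0, 3, 11])
```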
Abstract:
In recent years, Independent Component Analysis (ICA) has proven to be a powerful signal-processing technique for solving Blind Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA to processing NIR hyperspectral images to detect traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models using an increasing number of independent components, after randomly segmenting the matrix data into two blocks, and then calculating the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted, and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be used directly with the hyperspectral images to enhance the contrast between the peanut and the wheat flours in a real multi-use industrial environment. Furthermore, feature extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied in order to target the spatial location of peanut traces. A good visualization of the distribution of peanut traces was thus obtained.
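A sketch of the Random ICA by blocks selection idea, under assumptions (FastICA stands in for the paper's ICA algorithm; `X` is the unfolded pixels-by-wavelengths matrix): split the data into two random blocks, extract the same number of components from each, and score how well the extracted signals correlate across blocks. Repeating this over an increasing number of components, the score drops once spurious components start to appear.

```python
import numpy as np
from sklearn.decomposition import FastICA

def block_correlation_score(X, n_comp, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    s1 = FastICA(n_components=n_comp, random_state=0).fit(X[idx[:half]]).components_
    s2 = FastICA(n_components=n_comp, random_state=0).fit(X[idx[half:]]).components_
    c = np.abs(np.corrcoef(s1, s2)[:n_comp, n_comp:])   # cross-block correlations
    return c.max(axis=1).mean()                         # best-match average
```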
Abstract:
This doctoral thesis, which represents the culmination of my doctoral studies undertaken in the Department of Linguistics Applied to Science and Technology of the Universidad Politécnica de Madrid, focuses on the analysis of hedging in legal English following the principles of Critical Genre Analysis (Bhatia, 2004), and using the WordSmith Tools version 6 (Scott, 2014) corpus analysis tools.
As the title suggests, this study centers on the description and contrastive analysis of lexico-grammatical realizations of hedges and the discourse strategies which they can indicate, as well as the functions they can carry out, in a corpus of U.S. Supreme Court opinions and American law review articles. The study relates the realization, incidence and function of hedging to the predominant generic characteristics of the two genres from a socio-cognitive perspective. While there have been numerous studies on hedging in general English (Lakoff, 1973; Hübler, 1983; Clemen, 1997; Markkanen and Schröder, 1997; Mauranen, 1997; Fetzer, 2010; and Finnegan, 2010, among others), academic English (Crompton, 1997; Meyer, 1997; Skelton, 1997; Martín Butragueño, 2003), scientific English (Hyland, 1996a, 1996c, 1998c, 2007; Grabe and Kaplan, 1997; Salager-Meyer, 1997; Varttala, 2001) and medical English (Prince, 1982; Salager-Meyer, 1994; Skelton, 1997), and, to a lesser degree, legal English (Toska, 2012), this study is innovative in that it links the different realizations and functions of hedging to the generic characteristics of a particular professional communication. Within legal English, hedging has been found to depend not only on the macro-level expectations of the discourse community for a specific genre, but also on the micro-level intentions of the author of a communication, varying according to the educational, pedagogical, interpersonal or operative purposes the genre may have. The study highlights the predominance of epistemic modal verbs and lexical verbs as hedges, dividing the latter into four types (Hyland, 1998c; Palmer, 1986, 1990, 2001): speculative, quotative, deductive and sensorial. Lexico-grammatical realizations of hedges can signal one of four discourse strategies (Namsaraev, 1997; Salager-Meyer, 1994): indetermination, depersonalization, subjectivization and camouflage hedging, as well as fulfil a variety of functions. The identification and quantification of the different hedges and hedging strategies and functions in the two genres may have pedagogical implications for non-native law students who must demonstrate adequate competence in the production and interpretation of hedged discourse.