974 results for Multiple attenuation. Deconvolution. Seismic processing
Abstract:
Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the remaining sensors change position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. As a result, the global mode shapes are obtained automatically, and a single estimate of the natural frequency and damping ratio of each mode is computed. We also investigate the estimation of this model by maximum likelihood using the Expectation Maximization algorithm, and apply the technique to simulated and measured data corresponding to different structures.
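The estimation machinery referred to above can be sketched compactly. Below is a minimal, illustrative EM step for a generic linear-Gaussian state space model in NumPy: the E-step runs a Kalman filter plus a Rauch-Tung-Striebel smoother, and the M-step updates the output equation in closed form (the state-equation updates are analogous, using lag-one covariances). This is a generic textbook sketch under our own assumptions, not the authors' implementation; in their model the state matrices would be shared across all setups, while each setup contributes its own output equation.

```python
# Illustrative E-step (Kalman filter + RTS smoother) and partial M-step for
# a linear-Gaussian state space model: x[t+1] = A x[t] + w, y[t] = C x[t] + v.
# Textbook sketch, not the paper's implementation; x0, P0 are assumed given.
import numpy as np

def e_step(y, A, C, Q, R, x0, P0):
    T, n = y.shape[0], A.shape[0]
    xp = np.zeros((T, n)); Pp = np.zeros((T, n, n))   # predicted moments
    xf = np.zeros((T, n)); Pf = np.zeros((T, n, n))   # filtered moments
    x, P = x0, P0
    for t in range(T):
        xp[t], Pp[t] = x, P
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # Kalman gain
        x = x + K @ (y[t] - C @ x)
        P = P - K @ C @ P
        xf[t], Pf[t] = x, P
        x, P = A @ x, A @ P @ A.T + Q                 # one-step prediction
    xs, Ps = xf.copy(), Pf.copy()                     # backward (RTS) pass
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps

def m_step_output(y, xs, Ps):
    """Closed-form update of C and R from the smoothed moments."""
    Sxx = Ps.sum(axis=0) + xs.T @ xs                  # sum of E[x x']
    Syx = y.T @ xs                                    # sum of y E[x]'
    C = Syx @ np.linalg.inv(Sxx)
    resid = y - xs @ C.T
    R = (resid.T @ resid + C @ Ps.sum(axis=0) @ C.T) / len(y)
    return C, R
```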
Abstract:
The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenic zonation) and its magnitude dependence (without using Gutenberg–Richter's law). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised, and spatially or temporally interrelated events have been suppressed so that a Poisson process can be assumed. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg–Richter statistics and a zoned approach. Three attenuation laws have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows the construction of uniform hazard spectra.
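As a rough illustration of the zoning-free idea, the sketch below estimates a spatial activity rate by smearing each catalogue event over the map with a Gaussian kernel and weighting it by the inverse of its effective period. The input layout, the bandwidth value, and the weighting are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: spatial activity rate from an earthquake catalogue via
# Gaussian kernels, with no seismogenic zonation. Assumed inputs:
# epicentre coordinates in km and one effective period (years) per event.
import numpy as np

def activity_rate(epicentres_km, effective_years, grid_km, bandwidth_km=50.0):
    """Events per year per unit area at each grid point."""
    rate = np.zeros(len(grid_km))
    h2 = bandwidth_km ** 2
    for (ex, ey), T_eff in zip(epicentres_km, effective_years):
        d2 = (grid_km[:, 0] - ex) ** 2 + (grid_km[:, 1] - ey) ** 2
        rate += np.exp(-0.5 * d2 / h2) / (2.0 * np.pi * h2 * T_eff)
    return rate  # per-magnitude-bin rates would then enter the hazard integral
```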
Abstract:
In Operational Modal Analysis of structures we often have multiple time history records of vibrations measured at different time instants. This work presents a procedure for estimating the modal parameters of the structure by processing all the records together, that is, using all the available information to obtain a single estimate of the modal parameters. The method uses Maximum Likelihood Estimation and the Expectation Maximization algorithm. It has been applied to various problems involving both simulated and real structures, and the results show the advantage of the proposed joint analysis.
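The core of the joint analysis can be stated in one line: because the records are independent given the structure's parameters, the joint log-likelihood is the sum of the per-record log-likelihoods, all sharing one parameter vector. A minimal sketch follows; `record_loglik` is a hypothetical placeholder (e.g. an innovations-form Kalman filter likelihood), not a function defined in the paper.

```python
# Minimal sketch of joint estimation over multiple records: one shared
# parameter vector theta, summed log-likelihoods. `record_loglik` is a
# placeholder for, e.g., a Kalman-filter (innovations-form) likelihood.
def joint_loglik(records, theta, record_loglik):
    return sum(record_loglik(y, theta) for y in records)

# EM maximizes this sum iteratively; a generic alternative would be
# scipy.optimize.minimize(lambda th: -joint_loglik(records, th, record_loglik), th0)
```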
Abstract:
Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes observed in background areas to correct foreground areas. The authors assume a multiple light source model in which all light sources can have different colours and can change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, they apply a double linear correction model based on the learned relations; this comprises a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images based on the earth mover's distance, and compare the results with a representative auto-exposure algorithm from the recent literature and with a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
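A minimal sketch of the per-location linear correction idea, under assumptions of ours: grey-level intensities and a least-squares fit. The paper's double linear model chains a dynamic local illumination mapping with an inter-camera mapping, which this toy version does not reproduce.

```python
# Toy sketch: learn a gain/offset from background intensities observed in
# two conditions, then apply it to foreground pixels. The names and the
# least-squares fit are illustrative assumptions, not the authors' model.
import numpy as np

def fit_gain_offset(bg_before, bg_after):
    """Least-squares a, b such that a * bg_before + b approximates bg_after."""
    A = np.stack([bg_before, np.ones_like(bg_before)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, bg_after, rcond=None)
    return a, b

def correct_foreground(fg, a, b):
    return a * fg + b   # apply the background-derived correction to foreground
```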
Abstract:
This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase the robustness of their coverage. One problem in the automatic unification of different sentiment lexicon scores is that there are multiple lexical entries whose classification as positive, negative, or neutral {P, Z, N} depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task. The USL approach therefore assigns a subset of lexical entries to each of the 1,344 GPU cores and uses parallel processing to unify 155,802 lexical entries. The analysis shows that the resulting USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. The runtime was 10 minutes for the 95,430 lexical entries, a threefold reduction in the computing time of UnifiedMetrics.
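For reference, a minimal NumPy version of the Pearson correlation used to compare two lexicons on their shared entries (+1 perfectly correlated, 0 no correlation, -1 perfectly inversely correlated). The GPU partitioning across cores described above is not reproduced here.

```python
# Pearson correlation between the scores two lexicons assign to the same
# shared entries; a small CPU-side sketch, not the GPU UnifiedMetrics code.
import numpy as np

def pearson(scores_a, scores_b):
    a = np.array(scores_a, dtype=float)   # copies, so inputs are not mutated
    b = np.array(scores_b, dtype=float)
    a -= a.mean(); b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```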
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These devices offer a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that, by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use owing to their large computational and memory demands. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective updating of the background model, updating of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture-filtering units normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method for obtaining a quasi-optimal partition of the function's domain that minimizes the approximation error.
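The last contribution lends itself to a compact illustration: sampling a function at knots and evaluating it by linear interpolation is exactly the operation a GPU texture-filtering unit performs in hardware. The sketch below uses uniform knots for simplicity; the thesis instead derives a quasi-optimal partition that minimizes the approximation error.

```python
# Illustrative continuous piecewise-linear approximation of a function.
# np.interp stands in for the GPU's hardware linear interpolation
# (texture filtering); uniform knot placement is a simplifying assumption.
import numpy as np

def pwl_approx(f, lo, hi, n_knots):
    xs = np.linspace(lo, hi, n_knots)        # knot positions
    ys = f(xs)                               # one stored sample per knot
    return lambda x: np.interp(x, xs, ys)    # hardware 'lerp' stand-in

approx = pwl_approx(np.exp, 0.0, 1.0, 64)
grid = np.linspace(0.0, 1.0, 10_000)
max_err = np.abs(approx(grid) - np.exp(grid)).max()  # shrinks as O(1/n^2)
```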
Abstract:
Several basic olfactory tasks must be solved by highly olfactory animals, including background suppression, multiple object separation, mixture separation, and source identification. The large number N of classes of olfactory receptor cells—hundreds or thousands—permits the use of computational strategies and algorithms that would not be effective in a stimulus space of low dimension. A model of the patterns of olfactory receptor responses, based on the broad distribution of olfactory thresholds, is constructed. Representing one odor from the viewpoint of another then allows a common description of the most important basic problems and shows how to solve them when N is large. One possible biological implementation of these algorithms uses action potential timing and adaptation as the “hardware” features that are responsible for effective neural computation.
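A toy rendering of the large-N intuition, under assumptions of ours rather than the paper's: give each odour a broadly distributed activation threshold per receptor class, represent it by the binary pattern of classes it activates at a given concentration, and compare patterns by overlap.

```python
# Toy sketch (our assumptions, not the paper's model): each odour has a
# broad, log-uniform spread of activation thresholds across N receptor
# classes; the response to a concentration is a binary activity pattern.
import numpy as np

N = 1000  # large number of receptor classes

def odour_thresholds(seed):
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.uniform(-3, 1, N)     # broad threshold distribution

def pattern(thresholds, concentration):
    return thresholds < concentration         # which classes respond

def overlap(p, q):
    """Jaccard overlap of two binary response patterns, a crude similarity."""
    return np.sum(p & q) / np.sum(p | q)
```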
Abstract:
We have investigated mRNA 3′-end-processing signals in each of six eukaryotic species (yeast, rice, Arabidopsis, fruit fly, mouse, and human) through the analysis of more than 20,000 3′-expressed sequence tags. The use and conservation of the canonical AAUAAA element vary widely among the six species and are especially weak in plants and yeast. Even in the animal species, the AAUAAA signal does not appear to be as universal as indicated by previous studies. The abundance of single-base variants of AAUAAA correlates with their measured processing efficiencies. As found previously, the plant polyadenylation signals are more similar to those of yeast than to those of animals, with both common content and arrangement of the signal elements. In all species examined, the complete polyadenylation signal appears to consist of an aggregate of multiple elements. In light of these and previous results, we present a broadened concept of 3′-end-processing signals in which no single exact sequence element is universally required for processing. Rather, the total efficiency is a function of all elements and, importantly, an inefficient word in one element can be compensated for by strong words in other elements. These complex patterns indicate that effective tools to identify 3′-end-processing signals will require more than consensus sequence identification.
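The compensation principle can be caricatured with an additive score, purely for illustration: total signal strength is a sum over elements, so a weak word in one element can be offset by strong words in the others. The element names and numeric scores below are invented for this sketch, not taken from the study (only AAUAAA and its single-base-variant idea appear in the abstract).

```python
# Invented toy example of the compensation principle described above.
# Element names and scores are illustrative placeholders, not data.
ELEMENT_SCORES = {
    "upstream_element":   {"strong_word": 2.0, "weak_word": 0.5},
    "cleavage_element":   {"AAUAAA": 3.0, "AUUAAA": 1.8, "AAUACA": 0.9},
    "downstream_element": {"strong_word": 1.5, "weak_word": 0.4},
}

def signal_strength(observed):
    """observed: {element_name: word}; unknown words contribute 0."""
    return sum(ELEMENT_SCORES[e].get(w, 0.0) for e, w in observed.items())
```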
Abstract:
Delta functions as a cell-nonautonomous, membrane-bound ligand that binds to Notch, a cell-autonomous receptor, during cell fate specification. Interaction between Delta and Notch leads to signal transduction and elicitation of cellular responses. During our investigations to further understand the biochemical mechanism by which Delta signaling is regulated, we have identified four Delta isoforms in Drosophila embryonic and larval extracts. We have demonstrated that at least one of the smaller isoforms, Delta S, results from proteolysis. Using antibodies to the Delta extracellular and intracellular domains in colocalization experiments, we have found that at least three Delta isoforms exist in vivo, providing the first evidence that multiple forms of Delta exist during development. Finally, we demonstrate that Delta is a transmembrane ligand that can be taken up by Notch-expressing Drosophila cultured cells; these cell culture experiments imply that it is full-length Delta that is taken up, and we present evidence suggesting that this uptake occurs by a nonphagocytic mechanism.
Abstract:
The Fas/APO-1 receptor-associated cysteine protease Mch5 (MACH/FLICE) is believed to be the enzyme responsible for activating a protease cascade after Fas-receptor ligation, leading to cell death. The Fas-apoptotic pathway is potently inhibited by the cowpox serpin CrmA, suggesting that Mch5 could be the target of this serpin. Bacterial expression of proMch5 generated a mature enzyme composed of two subunits, which are derived from the precursor proenzyme by processing at Asp-227, Asp-233, Asp-391, and Asp-401. We demonstrate that recombinant Mch5 is able to process/activate all known ICE/Ced-3-like cysteine proteases and is potently inhibited by CrmA. This contrasts with the observation that Mch4, the second FADD-related cysteine protease that is also able to process/activate all known ICE/Ced-3-like cysteine proteases, is poorly inhibited by CrmA. These data suggest that Mch5 is the most upstream protease that receives the activation signal from the Fas-receptor to initiate the apoptotic protease cascade that leads to activation of ICE-like proteases (TX, ICE, and ICE-relIII), Ced-3-like proteases (CPP32, Mch2, Mch3, Mch4, and Mch6), and the ICH-1 protease. On the other hand, Mch4 could be a second upstream protease that is responsible for activation of the same protease cascade in CrmA-insensitive apoptotic pathways.
Abstract:
Human rhinoviruses, the most important etiologic agents of the common cold, are messenger-active single-stranded monocistronic RNA viruses that have evolved a highly complex cascade of proteolytic processing events to control viral gene expression and replication. Most maturation cleavages within the precursor polyprotein are mediated by rhinovirus 3C protease (or its immediate precursor, 3CD), a cysteine protease with a trypsin-like polypeptide fold. High-resolution crystal structures of the enzyme from three viral serotypes have been used for the design and elaboration of 3C protease inhibitors representing different structural and chemical classes. Inhibitors having α,β-unsaturated carbonyl groups combined with peptidyl-binding elements specific for 3C protease undergo a Michael reaction mediated by nucleophilic addition of the enzyme’s catalytic Cys-147, resulting in covalent-bond formation and irreversible inactivation of the viral protease. Direct inhibition of 3C proteolytic activity in virally infected cells treated with these compounds can be inferred from dose-dependent accumulations of viral precursor polyproteins as determined by SDS/PAGE analysis of radiolabeled proteins. Cocrystal-structure-assisted optimization of 3C-protease-directed Michael acceptors has yielded molecules having extremely rapid in vitro inactivation of the viral protease, potent antiviral activity against multiple rhinovirus serotypes and low cellular toxicity. Recently, one compound in this series, AG7088, has entered clinical trials.
Abstract:
The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency modulated sweeps. Many neurons showed a preference for a limited number of species-specific vocalizations (“monkey calls”). These response selectivities can be explained by nonlinear spectral and temporal integration mechanisms. In a separate series of experiments, monkey calls were presented at different spatial locations, and the tuning of lateral belt neurons to monkey calls and spatial location was determined. Of the three belt areas the anterolateral area shows the highest degree of specificity for monkey calls, whereas neurons in the caudolateral area display the greatest spatial selectivity. We conclude that the cortical auditory system of primates is divided into at least two processing streams, a spatial stream that originates in the caudal part of the superior temporal gyrus and projects to the parietal cortex, and a pattern or object stream originating in the more anterior portions of the lateral belt. A similar division of labor can be seen in human auditory cortex by using functional neuroimaging.
Abstract:
Multiple members of the ADAR (adenosine deaminases acting on RNA) gene family are involved in A-to-I RNA editing. It has been speculated that they may form a large multicomponent protein complex. Possible candidates for such complexes are large nuclear ribonucleoprotein (lnRNP) particles. The lnRNP particles consist mainly of four spliceosomal subunits that assemble together with the pre-mRNA to form a large particle and thus are viewed as the naturally assembled pre-mRNA processing machinery. Here we investigated the presence of ADARs in lnRNP particles by Western blot analysis using anti-ADAR antibodies and by indirect immunoprecipitation. Both ADAR1 and ADAR2 were found associated with the spliceosomal components Sm and SR proteins within the lnRNP particles. The two ADARs, associated with lnRNP particles, were enzymatically active in site-selective A-to-I RNA editing. We demonstrate the association of ADAR RNA editing enzymes with physiological supramolecular complexes, the lnRNP particles.
Abstract:
Although proteases related to the interleukin 1β-converting enzyme (ICE) are known to be essential for apoptotic execution, the number of enzymes involved, their substrate specificities, and their specific roles in the characteristic biochemical and morphological changes of apoptosis are currently unknown. These questions were addressed using cloned recombinant ICE-related proteases (IRPs) and a cell-free model system for apoptosis (S/M extracts). First, we compared the substrate specificities of two recombinant human IRPs, CPP32 and Mch2α. Both enzymes cleaved poly(ADP-ribose) polymerase, albeit with different efficiencies. Mch2α also cleaved recombinant and nuclear lamin A at a conserved VEID↓NG sequence located in the middle of the coiled-coil rod domain, producing a fragment that was indistinguishable from the lamin A fragment observed in S/M extracts and in apoptotic cells. In contrast, CPP32 did not cleave lamin A. The cleavage of lamin A by Mch2α and by S/M extracts was inhibited by millimolar concentrations of Zn2+, which had a minimal effect on the cleavage of poly(ADP-ribose) polymerase by CPP32 and by S/M extracts. We also found that N-(acetyltyrosinylvalinyl-Nε-biotinyllysyl)aspartic acid [(2,6-dimethylbenzoyl)oxy]methyl ketone, which derivatizes the larger subunit of active ICE, can affinity-label up to five active IRPs in S/M extracts. Together, these observations indicate that the processing of nuclear proteins in apoptosis involves multiple IRPs having distinct preferences for their apoptosis-associated substrates.