747 results for Galois lattices


Relevance: 10.00%

Abstract:

Stable isotopes of sedimentary nitrogen and organic carbon are widely used as proxy variables for biogeochemical parameters and processes in the water column. In order to investigate alterations of the primary isotopic signal by sedimentary diagenetic processes, we determined concentrations and isotopic compositions of inorganic nitrogen (IN), organic nitrogen (ON), total nitrogen (TN), and total organic carbon (TOC) on one short core recovered from sediments of the eastern subtropical Atlantic, between the Canary Islands and the Moroccan coast. Changes with depth in the concentration and isotopic composition of the different fractions were related to early diagenetic conditions indicated by pore water concentrations of oxygen, nitrate, and ammonium. Additionally, the nature of the organic matter was investigated by Rock-Eval pyrolysis and microscopic analysis. A decrease in ON during aerobic organic matter degradation is accompanied by an increase of the 15N/14N ratio. Changes in the isotopic composition of ON can be described by Rayleigh fractionation kinetics, which are probably related to microbial metabolism. The influence of IN depleted in 15N on the bulk sedimentary (TN) isotope signal increases due to organic matter degradation, partly compensating for the isotopic changes in ON. In anoxic sediments, fixation of ammonium between clay lattices results in a decrease of the stable nitrogen isotope ratios of IN and TN. Changes in the carbon isotopic composition of TOC must be explained by Rayleigh fractionation in combination with different remineralization kinetics of organic compounds with different isotopic compositions. We have found no evidence for preferential preservation of terrestrial organic carbon. Instead, both TOC and refractory organic carbon are dominated by marine organic matter. Refractory organic carbon is depleted in 13C compared to TOC.
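
For reference, the closed-system Rayleigh behaviour invoked for ON can be written compactly in delta notation; this is the standard textbook form, stated here as an assumption rather than taken from the paper:

    \delta^{15}\mathrm{N}_{\mathrm{ON}} \;\approx\; \delta^{15}\mathrm{N}_{\mathrm{ON},0} + \varepsilon \,\ln f

where f is the fraction of ON remaining and \varepsilon is the isotopic enrichment factor of the degradation step; with \varepsilon < 0 (preferential loss of 14N), the residual ON becomes progressively enriched in 15N as f decreases, consistent with the observed increase of the 15N/14N ratio.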

Relevance: 10.00%

Abstract:

In basalts and volcanogenic sediments from the Indian Ocean, the successive stages of submarine alteration of volcanic rocks and glasses give rise to the incorporation or the relative increase of iron in smectite lattices. During the first stage, the Mg-smectites are the most abundant; they are occasionally associated with Al-smectites. Afterwards, they are gradually replaced by iron-rich smectites. The REE distribution follows the same trend as the mineralogical changes. During the first stage of alteration, the REE distribution in clay minerals is the same as in the fresh glasses but, as the iron-rich smectites increase, Ce shows a distinctive behaviour: a positive anomaly in iron-rich smectites formed early in palagonitized glasses, and a negative one in authigenic smectites formed later from solutions in equilibrium with seawater.
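
The Ce behaviour noted above is usually quantified through a cerium anomaly; one common convention (an illustrative choice, not taken from the abstract) interpolates between the normalized concentrations of the neighbouring REE La and Pr:

    \mathrm{Ce}/\mathrm{Ce}^{*} \;=\; \frac{\mathrm{Ce}_{N}}{\sqrt{\mathrm{La}_{N}\,\mathrm{Pr}_{N}}}

with Ce/Ce* > 1 corresponding to the positive anomaly of the iron-rich smectites formed early in palagonitized glasses, and Ce/Ce* < 1 to the negative anomaly of the authigenic smectites formed later in equilibrium with seawater.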

Relevance: 10.00%

Abstract:

This paper presents an alternative Forward Error Correction scheme, based on Reed-Solomon codes, aimed at protecting the transmission of RTP multimedia streams: the inter-packet symbol approach. The scheme relies on an alternative bit structure that allocates the symbols of each Reed-Solomon codeword across several RTP media packets. This characteristic makes it possible to better exploit the recovery capability of Reed-Solomon codes against bursty packet losses. The performance of our approach has been studied in terms of encoding/decoding time versus recovery capability, and compared with other schemes proposed in the literature. The theoretical analysis shows that our approach allows the use of smaller Galois fields than other solutions. This smaller field size reduces the required encoding/decoding time while keeping a comparable recovery capability. Finally, experiments have been carried out to assess the performance of our approach against other schemes in a simulated environment, where models for wireless and wireline channels have been considered.
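
A minimal sketch of one plausible reading of the inter-packet allocation (hypothetical code parameters, not the paper's exact scheme): each Reed-Solomon codeword contributes one symbol to each of several RTP packets, so a burst of consecutive packet losses shows up as a few scattered erasures per codeword rather than as the loss of whole codewords.

    # Sketch of inter-packet symbol allocation for RS-protected RTP streams.
    # Hypothetical parameters: RS(N, K) over GF(2^8), one byte per symbol.
    # Each codeword spreads its N symbols over N different packets, so a
    # burst of lost packets becomes a few erasures per codeword instead of
    # wiping out whole codewords.

    N, K = 16, 12          # hypothetical RS code length / dimension (symbols)
    NUM_CODEWORDS = 40     # codewords protected together in one block

    def interleave(codewords):
        """codewords: list of NUM_CODEWORDS byte strings of length N.
        Returns N packets; packet j carries symbol j of every codeword."""
        assert all(len(cw) == N for cw in codewords)
        return [bytes(cw[j] for cw in codewords) for j in range(N)]

    def erasures_per_codeword(lost_packets):
        """With this allocation, losing a set of packets erases exactly one
        symbol per lost packet in every codeword."""
        return len(lost_packets)

    # Example: a burst of 3 consecutive lost packets leaves 3 erasures per
    # codeword, which RS(16, 12) can recover (up to N - K = 4 erasures).
    codewords = [bytes((i + j) % 256 for j in range(N)) for i in range(NUM_CODEWORDS)]
    packets = interleave(codewords)
    burst = {5, 6, 7}
    print(erasures_per_codeword(burst) <= N - K)   # True -> recoverable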

Relevance: 10.00%

Abstract:

In this paper, label-free biosensing for antibody screening by periodic lattices of high-aspect-ratio SU-8 nano-pillars (BICELLs) is presented. As a demonstration, the determination of anti-gestrinone antibodies from whole rabbit serum is carried out and, for the first time, the dissociation constant (KD = 6 nM) of the antigen-antibody recognition process is calculated using this sensing system. After gestrinone antigen immobilization on the BICELLs, the immunorecognition was performed. The cells were interrogated vertically using micron-spot-size Fourier transform visible and IR spectrometry (FT-VIS-IR), and the dip wavenumber shift was monitored. The biosensing assay exhibited good reproducibility and sensitivity (LOD = 0.75 ng/mL).
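
For orientation, a dissociation constant of this kind is typically obtained by fitting the concentration response to a 1:1 (Langmuir-type) binding model; assuming the measured dip shift is proportional to the fraction of occupied antigen sites, a minimal form is:

    \Delta\nu([\mathrm{Ab}]) \;\approx\; \Delta\nu_{\max}\,\frac{[\mathrm{Ab}]}{K_{D} + [\mathrm{Ab}]}

so the antibody concentration producing half of the maximum wavenumber shift equals KD, here 6 nM.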

Relevance: 10.00%

Abstract:

This paper describes a novel approach to phonotactic LID where, instead of using soft-counts based on phoneme lattices, we use posteriorgrams to obtain n-gram counts. The high-dimensional vectors of counts are reduced to low-dimensional units, for which we adapted the commonly used term i-vectors. The reduction is based on multinomial subspace modeling and is designed to work in the total-variability space. The proposed technique was tested on the NIST 2009 LRE set with better results than a system based on soft-counts (Cavg on 30s: 3.15% vs 3.43%), and with very good results when fused with an acoustic i-vector LID system (Cavg on 30s: 2.4% for the acoustic system alone vs 1.25% for the fusion). The proposed technique is also compared with another low-dimensional projection system based on PCA. In comparison with the original soft-counts, the proposed technique provides better results, reduces the problems due to sparse counts, and avoids the need for pruning techniques when creating the lattices.
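
One natural way to accumulate bigram soft-counts directly from a posteriorgram (a sketch of the idea, not necessarily the paper's exact formulation): with frame-level posteriors p_t(.) over P phoneme units, the expected count of the bigram (a, b) is the sum over frames of p_t(a)*p_{t+1}(b).

    import numpy as np

    def expected_bigram_counts(posteriorgram):
        """posteriorgram: (T, P) array, row t = posterior over P phoneme units
        at frame t (rows sum to 1). Returns a (P, P) matrix of expected bigram
        counts, counts[a, b] = sum_t p_t(a) * p_{t+1}(b)."""
        post = np.asarray(posteriorgram, dtype=float)
        return post[:-1].T @ post[1:]

    # Toy example: 5 frames over 3 units; the flattened counts would then be
    # projected to a low-dimensional subspace (the i-vector-like step).
    rng = np.random.default_rng(0)
    post = rng.random((5, 3))
    post /= post.sum(axis=1, keepdims=True)
    counts = expected_bigram_counts(post)
    print(counts.shape, counts.sum())   # (3, 3), total approximately T - 1 = 4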

Relevance: 10.00%

Abstract:

This project analyses the characteristics and the design cycle associated with the IspLEVER CAD environment from Lattice Semiconductor, in order to evaluate its suitability for teaching related to the engineering of wired digital systems. Based on this study, a guide to the use of the different tools integrated in the environment was produced. In addition, a characterization of several device families from Lattice Semiconductor was carried out, intended to support the choice of a device from this manufacturer for a given design. The study of the environment and of the tools integrated in IspLEVER began with a familiarization with the working framework, initially through the documentation offered by the manufacturer on its web page, http://www.latticesemi.com. This reading provided a first overview of the characteristics of the tool. The installation package was then downloaded (the manufacturer offers an evaluation version that expires after 12 months), the software was installed and the licensing procedure was completed, leaving the environment ready for use. Several working sessions were then devoted to trying to create designs without consulting any documentation, in order to assess how intuitive the environment is for a user who already knows electronic CAD tools. After this first contact with the real environment, the different options it offers for carrying out designs, whether logical or physical, were studied. Besides covering all the possibilities offered by the environment, the work focused on detecting and comparing the different ways of performing the same task, such as pin assignment or the review of simulation results, among others. In parallel with the study of these options, the different working tools integrated in the environment were examined. Once the environment and its tools had been studied, the tutorial was written. All the screenshots considered appropriate were captured so that students can follow, comfortably and easily, all the indications the tutorial gives for completing a full logic design cycle. After the tutorial, the extensive documentation that the manufacturer provides for each of its device families was reviewed. The purpose of this review was to characterize the different families so as to support the choice of a device from this manufacturer for a given design. This study of the manufacturer's device families was also used to identify which family would be the most suitable to have one of its members included in a hypothetical prototyping board for laboratory practice sessions.

Relevance: 10.00%

Abstract:

The ultimate goal of encoding and decoding is for the reconstructed message to be identical to the original. In coding theory, binary messages are characterized by vectors, or equivalently by polynomials with coefficients in the Galois field GF(2), whose elements are {0, 1}. On the concepts of code, linear code, cyclic code, polynomial generation of codes, distance, syndrome, relations with the elements of a finite field, detection and correction, etc., the best reference author remains Peterson.
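
The notions listed above (linear code, syndrome, detection and correction over GF(2)) can be illustrated with a minimal sketch; the (7,4) Hamming code below is a standard textbook example chosen here for illustration, not something taken from the cited text.

    import numpy as np

    # Parity-check matrix H of the (7,4) Hamming code over GF(2); columns are
    # the binary representations of 1..7, so a single-bit error produces a
    # syndrome equal to the (1-based) position of the flipped bit.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=int)

    def syndrome(received):
        """Syndrome s = H r^T over GF(2); s == 0 means 'no detectable error'."""
        return H @ np.asarray(received) % 2

    codeword = np.array([1, 0, 1, 1, 0, 1, 0])      # a valid codeword of this code
    assert not syndrome(codeword).any()              # zero syndrome
    corrupted = codeword.copy()
    corrupted[4] ^= 1                                # flip bit at position 5
    s = syndrome(corrupted)
    error_pos = int("".join(map(str, s)), 2)         # binary syndrome -> position
    print(error_pos)                                 # 5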

Relevance: 10.00%

Abstract:

Previous work of the research group [1-4] demonstrated the viability of using periodic lattices of micro- and nanopillars, called Bio-Photonic Sensing Cells (BICELLs), as optical biosensors characterized vertically by visible spectrometry. We have also studied theoretically [5] the performance of the BICELLs by means of 2D and 3D simulations in order to optimize the biosensing response. In this work we present the fabrication and biosensing comparison of different geometrical parameters of periodic lattices of pillars in order to contrast the theoretical conclusions with these experimental results. In this way, we have explored the biosensing response of other patterns such as crosses, stars, cylinders and concentric cylinders (Figure 1). We also introduce a novel method to test the BICELLs in a cost-effective way by using an ultra-thin film of SU-8 spin-coated onto the patterns to reproduce the effect of a biofilm attached to the biosensor surface. Finally, we have tested the biosensing response of the different geometries with the well-known Bovine Serum Albumin (BSA) immunoassay and compared the results with the theoretical simulations.

Relevance: 10.00%

Abstract:

For the design and calculation of steel structures, mainly portal frames and roof trusses, the most commonly used tools are node-and-bar computer programs. In these programs the geometry and cross-section of the bars are defined, their mechanical characteristics are perfectly known, and concrete calculation results are obtained for their stress and deformation states. The other component of the model, the nodes (connections), is far more difficult to handle: establishing their mechanical properties, essentially their rotational stiffness, and obtaining stress and deformation results for them is much more complex. This "ignorance" about the real behaviour of the connections is usually overcome by idealizing the nodes of the model as either rigid or pinned. Although the calculation programs allow nodes of intermediate stiffness (semi-rigid nodes) to be introduced, the stiffness of each node depends on the actual geometry of the connection; given the great variety of connection geometries found in any project, introducing the corresponding coefficients for every node in a node-and-bar model is practically unfeasible.

Both the Eurocode and the CTE establish that every connection has an associated characteristic moment-rotation curve, which must be determined by the designers using calculation tools or experimental procedures. This is, however, a difficult approach to carry out for each project. The consequence is that, in practice, extensive checks and calculation reports are produced for the bars of the structures, while the solution and execution of the connections are left to common practice, so that their safety and real behaviour remain unjustified and unverified. A further consequence of this lack of characterization is that we do not know how the real behaviour of the connections affects the stress and deformation states of the bars framing into them, doubts that frequently arise not only at the design stage but also when solving the execution problems that inevitably appear during construction.

Calculation by the finite element method allows the real geometry of profiles and connections to be introduced, and therefore makes it possible to address the real behaviour of the connections, which is conditioned by their geometry. A typical example is the connection of a beam to a plate or to a column by welding only the web. It is usual to treat this connection as a hinge; the finite element model, however, shows that its real behaviour is intermediate between pinned and fixed, since a moment is transmitted and the rotation is smaller than that of a simple support. Nevertheless, applying the finite element model to the geometry of all the structural elements of a steel frame is generally not viable from a practical point of view either, since it requires a large investment of time compared with the gain in precision over node-and-bar programs, which are much faster in the structural modelling phase.

In this thesis, finite element modelling has been used to solve a series of representative cases of the connections most commonly executed in building works, namely beam-column connections, establishing their behaviour as a function of the variables that commonly arise:
•Execution of beam-column connections welding only the web (web connection), or welding the beam to the column along its whole perimeter (full connection).
•Presence or absence of stiffeners in the columns.
•Use of columns made of boxed 2UPN sections or of HEB sections, which are the column types used in almost 100% of building cases.
To establish the influence of these variables on the behaviour of the connections, and their repercussion on the beams, a comparative analysis has been carried out on the result variables of the cases studied:
•Stress states in beams and connections.
•Bending moments at beam ends.
•Total and relative rotations at the nodes.
•Beam deflections.
The cases considered also allow an assessment, from the point of view of execution costs, of full-perimeter connections versus web-only connections, and of providing or omitting stiffeners in full-perimeter connections. The results in this respect are strictly economic, without prejudice to safety considerations or designer preferences recommending a particular solution. Finally, a third aspect addressed by the study is the comparison between the results obtained with the finite element method, which are closer to reality because the relative rotations at the connections are taken into account, and those obtained with node-and-bar programs. In this way the node-and-bar model, more versatile and faster, can continue to be used, but knowing its limitations and in which aspects, and to what extent, its results must be weighted. The last section of the thesis points out a number of topics that would be worth examining in later studies using finite element models, in order to gain a better understanding of the behaviour of steel structural connections in aspects that cannot be addressed with node-and-bar programs.
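
For context, the characteristic moment-rotation curve that Eurocode 3 (Part 1-8) and the CTE require for each joint relates the transmitted moment to the relative rotation; in the initial elastic range it reduces to

    M \;=\; S_{j,ini}\,\phi

where S_{j,ini} is the initial rotational stiffness of the joint. In the Eurocode classification, quoted here as it is commonly applied rather than as part of the thesis, a beam-to-column joint may be treated as rigid when S_{j,ini} \geq k_b E I_b / L_b (with k_b = 8 for braced and 25 for unbraced frames) and as nominally pinned when S_{j,ini} \leq 0.5\,E I_b / L_b; everything in between is semi-rigid, which is the regime into which the web-only welded connections studied here fall.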

Relevance: 10.00%

Abstract:

Logic programming (LP) is a family of high-level programming languages which provides high expressive power. With LP, the programmer writes the properties of the result and/or executable specifications instead of detailed computation steps. Logic programming systems featuring tabled execution and constraint logic programming (CLP) have been shown to increase the declarativeness and efficiency of Prolog, while at the same time making it possible to write very expressive programs. Tabled execution avoids infinite failure in some cases, while improving efficiency in programs which repeat computations. CLP reduces the search tree and brings the power of solving (in)equations over arbitrary domains. As in the LP case, CLP systems can also benefit from the power of tabling. Previous implementations which take full advantage of the ideas behind tabling (e.g., forcing suspension, answer subsumption, etc., wherever necessary to avoid recomputation and to terminate whenever possible) did not offer a simple, well-documented, easy-to-understand interface, which would be necessary to make the integration of arbitrary CLP solvers into existing tabling systems possible. This clearly hinders a more widespread use of the combination of both facilities. In this thesis we examine the requirements that a constraint solver must fulfill in order to be interfaced with a tabling system. We propose and implement a framework, which we have called Mod TCLP, with a minimal set of operations (e.g., entailment checking and projection) which the constraint solver has to provide to the tabling engine. We validate the design of Mod TCLP with a series of use cases: we re-engineer a previously existing tabled constraint domain (difference constraints) which was connected in an ad-hoc manner with the tabling engine in Ciao Prolog; we integrate Holzbaur's CLP(Q) implementation with Ciao Prolog's tabling engine; and we implement a constraint solver over (finite) lattices. We evaluate its performance with several benchmarks that implement a simple abstract interpreter whose fixpoint is reached by means of tabled execution, and whose domain operations are handled by the constraint solver over (finite) lattices, where TCLP avoids recomputing subsumed abstractions.
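
As a rough illustration of the kind of benchmark described above (a sketch in Python rather than Prolog, and not Mod TCLP's actual interface), the abstract-interpretation scenario amounts to computing a fixpoint over a finite lattice, stopping as soon as the newly derived answer is subsumed (entailed) by what is already known:

    from functools import reduce

    # A tiny finite lattice: the powerset of {a, b, c} ordered by inclusion.
    # 'join' is set union, 'entails(x, y)' holds when x is subsumed by y.
    BOTTOM = frozenset()

    def join(x, y):
        return x | y

    def entails(x, y):          # x <= y in the lattice
        return x <= y

    def fixpoint(step, start=BOTTOM):
        """Iterate a monotone 'step' function until the new abstract value is
        subsumed by the previous one (guaranteed to terminate on a finite
        lattice). Mirrors how tabling stops once answers add nothing new."""
        current = start
        while True:
            nxt = join(current, step(current))
            if entails(nxt, current):      # no new information: stop
                return current
            current = nxt

    # Example 'abstract interpreter' step: each round may add new facts.
    RULES = {frozenset(): {"a"}, frozenset({"a"}): {"b"}, frozenset({"a", "b"}): {"c"}}

    def step(value):
        derived = [facts for premise, facts in RULES.items() if premise <= value]
        return frozenset(reduce(set.union, derived, set()))

    print(sorted(fixpoint(step)))   # ['a', 'b', 'c']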

Relevance: 10.00%

Abstract:

We present here an information reconciliation method and demonstrate for the first time that it can achieve efficiencies close to 0.98. This method is based on the belief propagation decoding of non-binary LDPC codes over finite (Galois) fields. In particular, for convenience and faster decoding we only consider power-of-two Galois fields.
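
The restriction to power-of-two fields pays off because arithmetic in GF(2^q) reduces to bitwise operations; a minimal sketch, assuming GF(2^4) with reduction polynomial x^4 + x + 1 (an illustrative choice, not necessarily the field or polynomial used in the paper):

    # Carry-less multiplication in GF(2^4) with reduction polynomial
    # x^4 + x + 1 (0b10011). Elements are 4-bit integers; addition is XOR.
    Q = 4
    REDUCTION_POLY = 0b10011

    def gf_mul(a, b):
        """Multiply two GF(2^Q) elements represented as integers."""
        result = 0
        while b:
            if b & 1:
                result ^= a          # add (XOR) the current shifted copy of a
            b >>= 1
            a <<= 1
            if a & (1 << Q):         # degree reached Q: reduce modulo the polynomial
                a ^= REDUCTION_POLY
        return result

    # x * x^3 = x^4 = x + 1 in this field (since x^4 + x + 1 = 0).
    assert gf_mul(0b0010, 0b1000) == 0b0011
    assert gf_mul(0b0001, 0b1011) == 0b1011   # 1 is the multiplicative identity
    print("GF(2^4) multiplication OK")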

Relevance: 10.00%

Abstract:

Voice pathologies have recently become a social problem of some concern. Pollution in cities, habits such as smoking, the use of air conditioning, and so on, all contribute to it. The problem is more relevant for professionals who use their voice frequently, such as broadcasters, singers, teachers, actors or telemarketers. For this reason, techniques that can draw clinical conclusions from a voice sample recorded with a microphone are of particular interest as an aid to diagnosis, as opposed to invasive techniques involving exploration with laryngoscopes, fibroscopes or videoendoscopes, which are far less comfortable for patients since they require partially introducing the instruments through the throat in procedures of a surgical nature. Non-invasive voice analysis has come a long way in a relatively short period of time. As far as the diagnosis of pathologies is concerned, in the last fifteen years we have gone from working mainly with parameters extracted from the voice signal, both in the time and in the frequency domain, and with scales built from subjective assessments by experts, to working also with parameters derived from estimates of the glottal source. The importance of using the glottal source lies, broadly speaking, in the fact that it is a signal directly linked to the state of the speaker's laryngeal structure and that, when conveniently reconstructed (for instance using adaptive lattices), it is generally less influenced by the vocal tract than the voice signal. It is well known that the vocal tract is more closely related to the spoken message, and its presence hinders the detection of vocal pathology, unlike the reconstructed glottal source, from which the vocal tract influence has been almost completely removed. These estimates of the glottal source have been obtained through inverse filtering techniques developed by our research group. We have also deepened our understanding of the nature of the glottal signal: we are able to decompose it and relate it to biomechanical parameters of the vocal folds themselves, obtaining estimates of quantities such as the mass, the energy losses or the elasticity of the body and the cover of the fold, among others. From the components of the glottal source also arise the so-called biometric parameters, related to the shape of the signal, which constitute in themselves a biometric signature of the individual. We also work with temporal parameters, related to the different stages observed within the glottal signal during a phonation cycle, and, finally, with classical perturbation and energy parameters of the signal. In short, we now have a considerable number of glottal parameters forming a multidimensional statistical basis, intended to discriminate people with pathological or dysphonic voices from those with healthy or normophonic voices.

This doctoral thesis addresses several issues. First, these new parameters must be analysed carefully, so a complete statistical description of them is provided. Questions such as the distribution of the parameters are also studied, considering criteria such as their statistical normality and paying special attention to the differences between the distributions of healthy subjects and those of subjects with vocal pathology. For all this, different statistical techniques are used: generation of descriptive statistics and diagrams, normality tests, and various hypothesis tests, both parametric and non-parametric, which consider the difference between the groups of healthy people and the groups of people with some voice-related pathology. In addition, we are interested in finding statistical relationships between the parameters, in order to eliminate possible redundancies present in the model, to reduce the dimensionality of the problem, and to establish a criterion of relative importance of the parameters in terms of their discriminative power for the pathological/healthy criterion. To this end, statistical techniques such as Bivariate Linear Correlation and Factor Analysis based on Principal Components are applied. Finally, the well-known classification technique of Discriminant Analysis is applied to different combinations of parameters and factors to determine which of them offer the most promising success rates.

The experiments were carried out on a balanced and robust database of two hundred subjects, one hundred female and one hundred male, with an equally balanced proportion between subjects with vocal pathology and those without. One of the computer applications designed to carry out the collection of samples is also presented in this thesis. The different statistical studies performed allow us to identify the parameters that contribute most to detecting the presence of vocal pathology; some of them also allow us to present a ranking of the parameters according to their importance for the detection. We also conclude that it is sometimes advisable to reduce the dimensionality of the parameter set in order to improve the detection rates. Finally, the detection rates themselves are perhaps the most important conclusion of this work. All the analyses in this work are carried out for each of the two genders, in agreement with several previous studies showing that the male and female genders must be treated independently because of the organic differences observed between them. Nevertheless, as regards the detection of vocal pathology, we also consider the possibility of working with the unified database, verifying that the success rates obtained are also high.
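
A minimal sketch of the statistical pipeline described above (dimensionality reduction followed by discriminant classification), using generic scikit-learn components and synthetic data in place of the glottal-parameter database, which is not reproduced here:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the glottal/biomechanical parameter matrix:
    # 200 speakers x 40 parameters, labels 0 = normophonic, 1 = pathological.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 40))
    y = rng.integers(0, 2, size=200)
    X[y == 1, :5] += 1.0          # make a few parameters weakly discriminant

    # Standardize, reduce dimensionality with PCA, then classify with LDA,
    # mirroring the principal-component and discriminant-analysis stages.
    model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, y, cv=5)
    print("mean detection accuracy:", scores.mean())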

Relevance: 10.00%

Abstract:

βarrestins mediate the desensitization of the β2-adrenergic receptor (β2AR) and many other G protein-coupled receptors (GPCRs). Additionally, βarrestins initiate the endocytosis of these receptors via clathrin-coated pits and interact directly with clathrin. Consequently, it has been proposed that βarrestins serve as clathrin adaptors for the GPCR family by linking these receptors to clathrin lattices. AP-2, the heterotetrameric clathrin adaptor protein, has been demonstrated to mediate the internalization of many types of plasma membrane proteins other than GPCRs. AP-2 interacts with the clathrin heavy chain and the cytoplasmic domains of receptors such as those for epidermal growth factor and transferrin. In the present study we demonstrate the formation of an agonist-induced multimeric complex containing a GPCR, βarrestin 2, and the β2-adaptin subunit of AP-2. β2-Adaptin binds βarrestin 2 in a yeast two-hybrid assay and coimmunoprecipitates with βarrestins and β2AR in an agonist-dependent manner in HEK-293 cells. Moreover, β2-adaptin translocates from the cytosol to the plasma membrane in response to the β2AR agonist isoproterenol and colocalizes with β2AR in clathrin-coated pits. Finally, expression of βarrestin 2 minigene constructs containing the β2-adaptin interacting region inhibits β2AR endocytosis. These findings point to a role for AP-2 in GPCR endocytosis, and they suggest that AP-2 functions as a clathrin adaptor for the endocytosis of diverse classes of membrane receptors.

Relevance: 10.00%

Abstract:

We report the crystal structures of the copper and nickel complexes of RNase A. The overall topology of these two complexes is similar to that of other RNase A structures. However, there are significant differences in the mode of binding of copper and nickel. There are two copper ions per molecule of the protein, but there is only one nickel ion per molecule of the protein. Significant changes occur in the interprotein interactions as a result of differences in the coordinating groups at the common binding site around His-105. Consequently, the copper- and nickel-ion-bound dimers of RNase A act as nucleation sites for generating different crystal lattices for the two complexes. A second copper ion is present at an active site residue His-119 for which all the ligands are from one molecule of the protein. At this second site, His-119 adopts an inactive conformation (B) induced by the copper. We have identified a novel copper binding motif involving the α-amino group and the N-terminal residues.

Relevance: 10.00%

Abstract:

Haptokinetic cell migration across surfaces is mediated by adhesion receptors including β1 integrins and CD44 providing adhesion to extracellular matrix (ECM) ligands such as collagen and hyaluronan (HA), respectively. Little is known, however, about how such different receptor systems synergize for cell migration through three-dimensionally (3-D) interconnected ECM ligands. In highly motile human MV3 melanoma cells, both β1 integrins and CD44 are abundantly expressed, support migration across collagen and HA, respectively, and are deposited upon migration, whereas only β1 integrins but not CD44 redistribute to focal adhesions. In 3-D collagen lattices in the presence or absence of HA and cross-linking chondroitin sulfate, MV3 cell migration and associated functions such as polarization and matrix reorganization were blocked by anti-β1 and anti-α2 integrin mAbs, whereas mAbs blocking CD44, α3, α5, α6, or αv integrins showed no effect. With use of highly sensitive time-lapse videomicroscopy and computer-assisted cell tracking techniques, promigratory functions of CD44 were excluded. 1) Addition of HA did not increase the migratory cell population or its migration velocity, 2) blocking of the HA-binding Hermes-1 epitope did not affect migration, and 3) impaired migration after blocking or activation of β1 integrins was not restored via CD44. Because α2β1-mediated migration was neither synergized nor replaced by CD44–HA interactions, we conclude that the biophysical properties of 3-D multicomponent ECM impose more restricted molecular functions of adhesion receptors, thereby differing from haptokinetic migration across surfaces.