873 results for Noise removal in images


Relevance: 100.00%

Publisher:

Abstract:

The human face undoubtedly provides much more information than we think. Without our consent, the face conveys nonverbal cues, arising from facial interactions, that reveal our emotional state, cognitive activity, personality and diseases. Recent studies [OFT14, TODMS15] show that many of our social and interpersonal decisions derive from a prior analysis of the face that lets us judge whether a person is trustworthy, hardworking, intelligent, etc. This error-prone interpretation stems from the innate human ability to find and interpret these signals, and that ability is itself an object of study, with special interest in developing methods that can compute these face-related signals or attributes automatically. Interest in facial attribute estimation has thus grown rapidly in recent years, driven by the many applications in which such methods can be used: targeted marketing, security systems, human-computer interaction, etc. These methods, however, are far from perfect and robust across problem domains. The main difficulty is the high intra-class variability caused by changes in imaging conditions: lighting changes, occlusions, facial expressions, age, gender, ethnicity, etc., frequently encountered in images acquired in uncontrolled environments. This research studies image analysis techniques for estimating facial attributes such as gender, age and pose, using linear methods and exploiting the statistical dependencies between these attributes. In addition, our proposal focuses on building estimators with a strong balance between performance and computational cost. On this last point, we study a set of strategies for gender classification and compare them with a proposal based on a Bayesian classifier and a suitable feature extraction based on Linear Discriminant Analysis. We analyze in depth why linear techniques have failed to deliver competitive results to date and show how to obtain performance similar to that of the best non-linear techniques. A second algorithm is proposed for age estimation, based on a K-NN regressor and a feature selection analogous to the one proposed for gender classification. Our experiments show that classifier performance drops significantly when classifiers are trained and tested on different databases. We found that one cause is the existence of dependencies between facial attributes that had not been considered when the classifiers were built. Our results demonstrate that intra-class variability can be reduced by taking into account the statistical dependencies between the facial attributes gender, age and pose, improving the performance of our facial attribute classifiers at a small computational cost.
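To make the described pipeline concrete, here is a minimal sketch of linear gender classification (LDA features feeding a Gaussian Bayesian classifier) and K-NN age regression. It assumes scikit-learn and uses synthetic stand-in data; none of the names, features or parameters come from the thesis itself.

```python
# Minimal sketch of the linear pipeline described above, assuming
# scikit-learn and pre-aligned face crops; everything here is illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))       # stand-in for vectorized face crops
gender = rng.integers(0, 2, size=200)  # synthetic binary gender labels
age = rng.uniform(18, 80, size=200)    # synthetic ages

# Gender: LDA projects faces onto a discriminative linear subspace,
# then a Gaussian (Bayesian) classifier labels the projection.
gender_clf = make_pipeline(LinearDiscriminantAnalysis(), GaussianNB())
gender_clf.fit(X[:150], gender[:150])
print("gender accuracy:", gender_clf.score(X[150:], gender[150:]))

# Age: a K-NN regressor on the same kind of linear features.
age_reg = KNeighborsRegressor(n_neighbors=5)
age_reg.fit(X[:150], age[:150])
print("mean abs. age error:",
      np.abs(age_reg.predict(X[150:]) - age[150:]).mean())
```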

Relevance: 100.00%

Publisher:

Abstract:

The film director and the architect both explore the possibilities offered by the image as a means of visualizing the relations that catalyze emotion. The cinematographic work and the architectural work are products of thought: they contain all the processes that conceived them, as well as the mechanisms needed to generate space and sequence. The general aim of this thesis is to understand the analogies in the way the architect and the filmmaker approach the creative process of their projects through graphic action. If we regard the graphic medium as a creative resource in the architectural and cinematographic project, the sketch, croquis or storyboard becomes a fundamental document, and an object of study, from which to extract the keys to how that process unfolds. Graphic thinking is intimately tied to its means of expression. Studying this mode of thought, based mainly on images, makes it possible to establish analogies in the way both authors use drawing to imagine events that evoke emotions and define the dramatic character of their ideas. The connection and juxtaposition of mental images, as montage operations that drive the construction of ideas, concepts and sensations, are key to architectural and cinematographic conception, and drawing is the tool that allows both authors to develop them. The way the architect and the filmmaker undertake the ideation of their projects through drawing is approached via the graphic strategies of four authors: Sergei Eisenstein, Le Corbusier, Akira Kurosawa and Rem Koolhaas. The work is structured in two blocks. The first, comprising the first four chapters, addresses from a general point of view the potential of the image in graphic thinking and the role of graphic action in the design process of the architect on one hand and the film director on the other, seeking to extract the analogies and points of contact between the two during that process. The second block, corresponding to the last four chapters, examines the use of the graphic medium more concretely through two of the most influential architects and two of the most influential filmmakers of the past and present centuries, seeking to understand the role of drawing in the conceptual development of their work.

Relevance: 100.00%

Publisher:

Abstract:

Active noise control, or active noise cancellation, attenuates the noise present in an acoustic environment by emitting a signal equal in amplitude but opposite in phase to the noise to be attenuated. The sum of both signals in the acoustic medium produces mutual cancellation, so the resulting noise level is much lower than the original. These systems rest on the principles of wave behavior discovered by Augustin-Jean Fresnel, Christiaan Huygens and Thomas Young, among others. Prototypes of active noise control systems have been developed since the 1930s, although those early ideas were impractical or required such frequent manual adjustment that they were unusable. In the 1970s, the American researcher Bernard Widrow developed the theory of adaptive signal processing and the least mean squares (LMS) algorithm, making it possible to implement digital filters whose response adapts dynamically to changing environmental conditions. With the appearance of digital signal processors in the 1980s and their subsequent evolution, the door opened to active noise cancellation systems based on adaptive digital signal processing. Today, active noise control systems are implemented in automobiles, airplanes, headphones and racks of professional equipment. Active noise control is based on the FxLMS algorithm, a modified version of the LMS adaptive filtering algorithm that compensates for the acoustic response of the environment. A noise reference signal can thus be filtered dynamically to emit the appropriate cancelling signal. Because the zone of acoustic cancellation is limited to dimensions of about one tenth of the wavelength, noise reduction is only viable at low frequencies; the limit is generally accepted to be around 500 Hz. At mid and high frequencies, passive conditioning and isolation methods, which give very good results, must be used instead. The objective of this project is the development of an active cancellation system for periodic noise, using consumer electronics and a DSP development kit based on a very low-cost processor. A set of C code modules was developed for the DSP that applies the appropriate signal processing to the noise reference; once emitted, this processed signal produces the acoustic cancellation. Using the implemented code, tests were run in which the noise signal to be removed is generated inside the DSP itself. This signal is emitted through a loudspeaker that simulates the noise source to be cancelled, while another loudspeaker emits a version of it filtered with the FxLMS algorithm. Tests with different versions of the algorithm achieved attenuations of 20 to 35 dB measured in narrow frequency bands around the generator frequency, and of 8 to 15 dB measured in broadband.
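As an illustration of the FxLMS loop the project builds on, here is a minimal Python sketch. It assumes a synthetic 120 Hz reference tone, a toy secondary path s with a known estimate s_hat, and models the error-microphone signal directly; the actual implementation was C modules on a DSP, and nothing below is taken from that code.

```python
# Minimal FxLMS sketch (toy signals, not the thesis DSP code).
import numpy as np

fs = 8000                              # sample rate, Hz
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 120 * t)        # periodic noise reference (120 Hz)
s = np.array([0.0, 0.6, 0.3, 0.1])     # "true" secondary path (toy)
s_hat = s.copy()                       # assume a good path estimate

L, mu = 32, 0.01                       # adaptive filter length, step size
w = np.zeros(L)                        # adaptive filter weights
xbuf = np.zeros(L)                     # reference sample buffer
ybuf = np.zeros(len(s))                # anti-noise buffer (through path)
fxbuf = np.zeros(L)                    # filtered-reference buffer
e_hist = []

for n in range(len(x)):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    y = w @ xbuf                       # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    d = x[n]                           # noise at error mic (toy primary path)
    e = d + s @ ybuf                   # residual after the secondary path
    fx = s_hat @ xbuf[:len(s_hat)]     # filtered-x: reference through s_hat
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * e * fxbuf                # LMS update on the filtered reference
    e_hist.append(e)

print("residual power, first vs last second:",
      np.mean(np.square(e_hist[:fs])), np.mean(np.square(e_hist[-fs:])))
```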

Relevance: 100.00%

Publisher:

Abstract:

This research deals with the influence of junctions on airborne sound insulation in buildings and with the analysis of flanking transmissions across double ceramic brick walls with elastic interlayers. Sound transmission between two rooms separated by a partition comprises two paths: the direct path, through the separating element itself, and the indirect path, through flanking elements such as floors, internal walls, facades, etc., which are connected to the separating element, vibrate in the presence of the sound field and transmit their vibration to the receiving room. When flanking transmissions dominate, the in situ sound insulation can be lower than expected. The parameter that expresses the acoustic attenuation at a junction is the vibration reduction index, Kij. It is an input parameter in the calculation models used to estimate airborne sound insulation between adjoining rooms, which in turn serve to demonstrate compliance with the current Spanish regulation, Basic Document DB HR Protection against Noise of the CTE (Spanish Building Code). The Kij indices of junctions must be determined experimentally. Several empirical formulae for Kij, developed in different European laboratories, are available, but they have not been validated with tests on solutions common in Spanish construction, such as those studied in this work. The aim of this work is the measurement, analysis and quantification of the indirect transmissions produced at the junctions of double ceramic brick walls. A test campaign reproducing the conditions of a real building was carried out, measuring the airborne sound insulation and the Kij indices of different junction configurations. The analysis of the results shows that airborne sound insulation depends strongly on the junctions, and that significant improvements can be obtained by changing the way the construction elements are joined. The improvements in sound insulation correspond to good junction design and to high Kij values. This work provides experimental Kij values for brick masonry solutions and calls into question the theoretical values that currently appear in the applicable standards.
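For reference, the vibration reduction index discussed above is commonly defined (as in EN 12354-1, with the measurement procedure of EN ISO 10848) from the direction-averaged velocity level difference across the junction, normalized by the junction length and the equivalent absorption lengths of the two elements:

\[ K_{ij} = \frac{D_{v,ij} + D_{v,ji}}{2} + 10\,\lg\frac{l_{ij}}{\sqrt{a_i\,a_j}} \quad [\mathrm{dB}], \]

where \(D_{v,ij}\) is the velocity level difference between elements i and j when i is excited, \(l_{ij}\) is the junction length, and \(a_i\), \(a_j\) are the equivalent absorption lengths of the elements.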

Relevance: 100.00%

Publisher:

Abstract:

Fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem, to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their slower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have yielded accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources of each group separately, and then combines the results. In this way the number of active noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times, presenting two new techniques that attack the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and therefore considerably fewer samples per simulation, can be used in the early stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes implementations of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
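To make the interplay between a greedy word-length search and Monte Carlo error estimation concrete, here is an illustrative Python sketch; the datapath, cost model and error bound are placeholders and do not reflect HOPLITE's actual API.

```python
# Greedy word-length trimming driven by Monte Carlo error estimates
# (toy 4-signal datapath; all names and bounds are illustrative).
import numpy as np

rng = np.random.default_rng(1)
N_SIGNALS, MAX_ERR = 4, 1e-3

def quantize(v, frac_bits):
    """Round v to a fixed-point grid with the given fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(v / step) * step

def mc_error(wl, trials=2000):
    """Monte Carlo RMS output error of the quantized toy datapath."""
    x = rng.uniform(-1, 1, size=(trials, N_SIGNALS))
    exact = x[:, 0] * x[:, 1] + x[:, 2] * x[:, 3]
    q = np.column_stack([quantize(x[:, i], wl[i]) for i in range(N_SIGNALS)])
    approx = q[:, 0] * q[:, 1] + q[:, 2] * q[:, 3]
    return np.sqrt(np.mean((exact - approx) ** 2))

def cost(wl):
    return sum(wl)  # toy hardware cost: total bits

# Start from a wide, safe assignment and greedily trim any signal whose
# reduction keeps the Monte Carlo error estimate within the bound.
wl = [16] * N_SIGNALS
improved = True
while improved:
    improved = False
    for i in range(N_SIGNALS):
        trial = wl.copy(); trial[i] -= 1
        if trial[i] > 0 and mc_error(trial) <= MAX_ERR:
            wl = trial; improved = True
print("word-lengths:", wl, "cost:", cost(wl), "err:", mc_error(wl))
```

The incremental method described above would, in addition, start mc_error with far fewer trials (a relaxed confidence level) and tighten the sample count only as the search approaches its final solution.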

Relevance: 100.00%

Publisher:

Abstract:

Efficient and safe heparin anticoagulation has remained a problem for continuous renal replacement therapies and intermittent hemodialysis in patients with acute renal failure. To make heparin therapy safer for the patient with acute renal failure at high risk of bleeding, we have proposed regional heparinization of the circuit via an immobilized heparinase I filter. This study tested the efficacy and safety of heparin removal by a device based on Taylor-Couette flow with simultaneous separation/reaction, in a sheep model. Heparinase I was immobilized onto agarose beads via cyanogen bromide activation. The device, referred to as a vortex flow plasmapheretic reactor, consisted of two concentric cylinders, a priming volume of 45 ml, a microporous membrane for plasma separation, and an outer compartment where the immobilized heparinase I was fluidized separately from the blood cells. Manual white cell and platelet counts, hematocrit, total protein, and fibrinogen assays were performed. Heparin levels were measured indirectly via whole-blood recalcification times (WBRTs). The vortex flow plasmapheretic reactor maintained significantly higher heparin levels in the extracorporeal circuit than in the sheep (device inlet WBRTs were 1.5 times the device outlet WBRTs) with no hemolysis. The reactor treatment produced no physiologically significant changes in complete blood cell counts, platelets, or protein levels for up to 2 hr of operation. Furthermore, gross necropsy and histopathology showed no significant abnormalities in the kidney, liver, heart, brain, or spleen.

Relevance: 100.00%

Publisher:

Abstract:

The inwardly rectifying K+ channel ROMK1 has been implicated as significant in K+ secretion in the distal nephron, and it has been shown by immunocytochemistry to be expressed in the relevant nephron segments. The development of the atomic force microscope has made it possible to produce high-resolution images of small particles, including a variety of biological macromolecules. Recently, a fusion protein of glutathione S-transferase (GST) and ROMK1 (ROMK1-GST) has been used to produce a polyclonal antibody for immunolocalization of ROMK1. We have used atomic force microscopy to examine ROMK1-GST and the native ROMK1 polypeptide cleaved from GST. Imaging was conducted with the proteins in physiological solutions attached to mica. ROMK1-GST appears in images as a particle composed of two units of similar size. Analyses of the images indicate that the two units have volumes of approximately 118 nm3, which is close to the theoretical volume of a globular protein of approximately 65 kDa (the molecular mass of ROMK1-GST). Native GST exists as a dimer, and the images obtained here are consistent with the ROMK1-GST fusion protein's existence as a heterodimer. In experiments on ROMK1 in aqueous solution, single molecules appeared to aggregate, but contact with the mica was maintained. Addition of ATP to the solution produced a reversible change in the height of the aggregates, which suggests that ATP induces a structural change in the ROMK1 protein. The data show that atomic force microscopy is a useful tool for examining purified protein molecules under near-physiological conditions and, furthermore, that structural alterations in the proteins may be investigated continuously.
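As a back-of-the-envelope check on the quoted figures (our addition, not a calculation from the paper), the unhydrated volume of a globular protein of molar mass M can be estimated with a typical partial specific volume \(\bar{v} \approx 0.73\ \mathrm{cm^3/g}\):

\[ V \approx \frac{\bar{v}\,M}{N_A} = \frac{0.73 \times 65\,000}{6.022\times10^{23}}\ \mathrm{cm^3} \approx 7.9\times10^{-20}\ \mathrm{cm^3} \approx 79\ \mathrm{nm^3}, \]

the same order as the measured ~118 nm3 per unit; hydration and AFM tip convolution generally make apparent volumes larger than this bare estimate.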

Relevance: 100.00%

Publisher:

Abstract:

Oscillating electric fields can be rectified by proteins in cell membranes to give rise to a dc transport of a substance across the membrane or a net conversion of a substrate to a product. This provides a basis for signal averaging and may be important for understanding the effects of weak extremely low frequency (ELF) electric fields on cellular systems. We consider the limits imposed by thermal and "excess" biological noise on the magnitude and exposure duration of such electric field-induced membrane activity. Under certain circumstances, the excess noise leads to an increase in the signal-to-noise ratio in a manner similar to processes labeled "stochastic resonance." Numerical results indicate that it is difficult to reconcile biological effects with low field strengths.
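For orientation, the thermal limit invoked above is usually framed with the Johnson-Nyquist relation (a standard result, not a formula quoted from the paper): a membrane of resistance R at temperature T presents an rms noise voltage

\[ V_{\mathrm{rms}} = \sqrt{4 k_B T R\,\Delta f} \]

over a bandwidth \(\Delta f\), so a weak ELF-induced signal becomes detectable only if rectification and averaging shrink the effective bandwidth enough to raise the signal-to-noise ratio.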

Relevance: 100.00%

Publisher:

Abstract:

Urban noise pollution, especially that generated by motorcycles with modified exhausts, affects the health of the entire population in many ways and tends to increase, unlike gas emissions, which have been falling over the years. To contain the noise generated by urban traffic, several countries have developed procedures, laws and mitigation measures such as acoustic barriers and sound-absorbing asphalt; yet a large number of motorcycles, a vehicle type that is inherently noisy and has a high annoyance potential, circulate with tampered exhaust systems and emit even more excess noise. Vehicle inspection is an important tool for controlling pollutant gas emissions from vehicles in use, but it fails to restrain those that exceed the legal noise limits; in addition, there is the aggravating fact that riders subject themselves to the noise pollution they produce. Street enforcement emerges as an alternative means of environmental control, but it is sometimes contested as subjective or for lacking a simple, reliable and effective methodology. We therefore sought to understand the relation between the increase in the sound level of a motorcycle with a modified exhaust circulating in traffic and the sound emission measured under inspection conditions, the so-called stationary noise test, to support the development of more effective methods of enforcement and control. To this end, motorcycles were evaluated for noise emission both in circulation and stationary, and the results show that modified exhausts have much higher sound levels than original ones, with a strong correspondence between the two measurement methods. This noise pollution hits particularly hard the professional riders, the motoboys, who modify their motorcycles, since they are exposed to all the factors that favor noise-induced hearing loss. Other questions arose alongside the main theme and were briefly assessed to complete the overall picture: the sound level of non-original exhausts evaluated according to type-approval procedures, the contribution of motorcycles to urban noise, and what these modifications yield in terms of motorcycle torque and power. These studies indicate that the modified motorcycle contributes strongly to urban noise pollution, affecting mainly the rider, without bringing any effective gains in power or rideability.

Relevance: 100.00%

Publisher:

Abstract:

The simultaneous nitrification and denitrification (SND) process achieves combined removal of carbonaceous and nitrogenous matter in a single unit. The structured-bed reactor, with immobilized biomass and internal recirculation, has characteristics favorable to these processes, such as promoting biofilm formation and avoiding bed clogging. This configuration has been studied successfully in bench-scale reactors for sewage treatment. In this research, a pilot-scale structured-bed reactor was used to evaluate its deployment, efficiency and stability in treating domestic sewage under real conditions, with a view to future application as a decentralized system in small communities, residential condominiums and similar settings. The reactor was built in fiberglass, cylindrical in shape, with an internal diameter of approximately 0.80 m and a height of 2.0 m; the total volume was approximately 0.905 m3 and the working volume 0.642 m3. Operation was carried out under continuous and intermittent aeration, and hydraulic retention times (HRT) of 48, 36 and 24 hours were tested. COD removal remained above 90% at HRTs of 48 and 36 hours. The best total nitrogen removal efficiency was 72.4 ± 6.4%, at an HRT of 48 hours with intermittent aeration (2 hours aerated, 1 hour non-aerated), a mean dissolved oxygen (DO) concentration of 2.8 ± 0.5 mg.L-1 during the aerated phase, and a mean temperature of 24.7 ± 1.0 °C; over the same period, the mean COD removal efficiency was 94 ± 4%. Despite the difficulties encountered in controlling the aeration, the removal efficiencies obtained indicate that the structured-bed reactor with intermittent aeration (LEAI) is a promising full-scale alternative, requiring adjustments in construction and improvements in the stability of SND.

Relevance: 100.00%

Publisher:

Abstract:

The aim of experiment I was to evaluate the effect of reducing the insertion period of the P4 device from 9 to 7 days on the reproductive parameters of Nelore cows. A total of 674 lactating cows, 40-60 days postpartum, received BE + CIDR at the start of the protocol (d0). At CIDR removal, PGF2α, ECP and eCG were administered. Fixed-time AI (IATF) took place 55 and 48 hours after device removal in the 7d-CIDR and 9d-CIDR treatments, respectively. Ten days after AI, blood was collected for serum P4 assay and confirmation of ovulation. Cows treated with 7d-CIDR had a smaller (p < 0.01) ovulatory follicle than those treated with 9d-CIDR. However, post-AI P4 concentration, ovulation rate, estrus detection and pregnancy rate were not influenced by the insertion period; thus, the 7-day CIDR protocol yielded reproductive performance in Nelore cows similar to the 9-day protocol. Experiment II evaluated the effects of reusing the CIDR for up to 35 days of use in cows and 42 days in heifers. A total of 749 lactating cows 40-60 days postpartum and 92 pubertal heifers were used. On d0 the animals received BE plus a new CIDR (CIDR1) or one previously used for 7 (CIDR2), 14 (CIDR3), 21 (CIDR4), 28 (CIDR5) or 35 (CIDR6) days. At CIDR removal (d7), PGF2α, ECP and eCG were administered, ultrasound was performed to measure the largest follicle (DF), and blood was collected for P4 assay. IATF took place 55 hours after device removal. DF diameter in cows increased (p < 0.01) with the number of CIDR uses; P4 concentration decreased with reused CIDRs but remained above 1.5 ng/ml, and the pregnancy rate was not affected by reuse of the device up to 5 times in cows or a sixth use in heifers. The 7-day protocol therefore allows the CIDR to be reused up to 6 times while maintaining the same reproductive efficiency. Experiment III evaluated whether administering eCG two days before device removal increases the size of the ovulatory follicle (OF) and CL and the pregnancy rate. A total of 681 lactating cows 40-60 days postpartum and 182 pubertal heifers were assigned to two treatments, with eCG given on the fifth (5d-eCG) or seventh day (7d-eCG). On d0 the animals received BE + CIDR; on day 7 the CIDR was removed and PGF2α and ECP were administered. Ten days after AI, ultrasound was performed to measure the CL and blood was collected for P4 assay. IATF took place 55 hours after device removal. The 5d-eCG treatment increased (p < 0.01) the OF in cows relative to 7d-eCG, and the same occurred in heifers. In cows, post-AI P4 concentration was higher (p = 0.04) with 5d-eCG; in heifers, post-AI CL diameter was larger (p < 0.01) with 5d-eCG. Thus, earlier eCG administration effectively increased the ovulatory follicle at IATF but did not increase the pregnancy rate.

Relevance: 100.00%

Publisher:

Abstract:

Context. The rotational evolution of isolated neutron stars is dominated by the magnetic field anchored to the solid crust of the star. Assuming that the core field evolves on much longer timescales, the crustal field evolves mainly through Ohmic dissipation and the Hall drift, and it may be subject to relatively rapid changes with remarkable effects on the observed timing properties. Aims. We investigate whether changes of the magnetic field structure and strength during the star's evolution may have observable consequences in the braking index n, the quantity most sensitive to small variations of the timing properties caused by magnetic field rearrangements. Methods. We performed axisymmetric, long-term simulations of the magneto-thermal evolution of neutron stars with state-of-the-art microphysical inputs to calculate the evolution of the braking index. Relatively rapid magnetic field modifications can be expected only in the crust of neutron stars, where we focus our study. Results. We find that the effect of the magnetic field evolution on the braking index can be divided into three qualitatively different stages depending on the age and the internal temperature: a first stage that may differ between standard pulsars (with n ~ 3) and low-field neutron stars that accreted fallback matter during the supernova explosion (systematically n < 3); a second stage in which the evolution is governed by almost pure Ohmic field decay and a braking index n > 3 is expected; and a third stage, at late times, when the interior temperature has dropped to very low values and Hall oscillatory modes in the neutron star crust result in braking indices of high absolute value and of both positive and negative signs. Conclusions. Current magneto-thermal evolution models predict a large contribution to the timing noise, and in particular to the braking index, from temporal variations of the magnetic field. Models with strong (≳ 10^14 G) multipolar or toroidal components, even with a weak (~10^12 G) dipolar field, are consistent with the observed trend of the timing properties.
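For context, the braking index used throughout is the standard pulsar-timing quantity built from the spin frequency ν and its derivatives,

\[ n = \frac{\nu\,\ddot{\nu}}{\dot{\nu}^{2}}, \]

which equals exactly 3 for pure magnetic dipole spin-down with a constant field; any decay or rearrangement of the field drives the measured n away from 3, which is the sensitivity the study exploits.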

Relevance: 100.00%

Publisher:

Abstract:

Boron-doped diamond electrodes have emerged as an anodic material owing to their high physical, chemical and electrochemical stability. These characteristics make them particularly interesting for electrochemical wastewater treatment, especially because of their high overpotential for the oxygen evolution reaction. Diamond electrodes offer maximum efficiency in pollutant removal from water, limited only by diffusion-controlled electrochemical kinetics. Results are presented for the elimination of benzoic acid and for the electrochemical treatment of synthetic tannery wastewater. They indicate that diamond electrodes exhibit the best performance for the removal of total phenols, COD, TOC and colour.
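For context, diffusion-controlled oxidation at such anodes is commonly described by a mass-transport-limited current density (a textbook relation added here for clarity, not taken from the abstract):

\[ j_{\mathrm{lim}}(t) = 4 F\, k_m\, \mathrm{COD}(t), \]

with F the Faraday constant, \(k_m\) the mass-transport coefficient and COD expressed in mol O2 per unit volume; once the applied current exceeds \(j_{\mathrm{lim}}\), the process is transport-controlled and the COD decays roughly exponentially in time.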

Relevance: 100.00%

Publisher:

Abstract:

The use of sustainable materials is becoming common practice for noise abatement in the building and civil engineering industries. In this context, many applications have been found for porous concrete made from lightweight aggregates. This work investigates the acoustic properties of porous concrete made from arlite and vermiculite lightweight aggregates. These natural resources can still be regarded as sustainable, since they can be recycled and do not generate environmentally hazardous waste. The experimental basis consists of specimens of different types whose acoustic performance is assessed in an impedance tube. Additionally, a simple theoretical model for granular porous media, based on parameters measurable with basic experimental procedures, is adopted to predict the acoustic properties of the prepared mixes. The theoretical predictions compare well with the absorption measurements. Preliminary results show the good absorption capability of these materials, making them a promising alternative to traditional porous concrete solutions.
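For reference, the impedance-tube assessment mentioned above reduces, at normal incidence, to two standard relations (not the specific granular-media model adopted in the work): with \(z_s\) the measured surface impedance normalized by the characteristic impedance of air \(\rho_0 c_0\),

\[ R = \frac{z_s - 1}{z_s + 1}, \qquad \alpha = 1 - |R|^{2}, \]

give the pressure reflection factor R and the sound absorption coefficient α against which the model predictions are compared.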

Relevance: 100.00%

Publisher:

Abstract:

5% copper catalysts with Ce0.8M0.2Oδ supports (M = Zr, La, Ce, Pr or Nd) have been studied by rapid-scan operando DRIFTS for NOx storage and reduction (NSR) with high-frequency (30 s) CO, H2 and 50% CO + 50% H2 micropulses. In the absence of reductant pulses, below 200-250 °C NOx was stored on the catalysts as nitrite and nitro groups; above this temperature, nitrates were the main species identified. The thermal stability of the stored NOx species depended on the acid/base character of the dopant (the more acidic M, the less stable the stored NOx: Zr4+ < none < Nd3+ < Pr3+ < La3+; the more basic M, the more stable the stored NOx). Catalyst regeneration was more efficient with H2 than with CO, and the CO + H2 mixture showed intermediate behavior, but with smaller differences across the series of catalysts than observed with CO alone. N2 is the main NOx reduction product upon H2 regeneration. The highest NOx removal in NSR experiments performed at 400 °C with CO + H2 pulses was achieved with the catalyst bearing the most basic dopant (CuO/Ce0.8La0.2Oδ), while the poorest-performing catalyst was that with the most acidic dopant (CuO/Ce0.8Zr0.2Oδ). The poor performance of CuO/Ce0.8Zr0.2Oδ in NSR experiments with CO pulses was attributed to its lower oxidation capacity compared with the other catalysts.