911 results for High dynamic range
Abstract:
The analysis of structural response has traditionally started from the assumption that joints are either fully rigid or pinned. This criterion greatly simplifies the calculations, but it is an idealization of the real behaviour of the connections. Naturally, between fully rigid and pinned joints there is an infinite range of stiffness values that could be adopted, and joints whose stiffness differs from these canonical values are generally referred to as semi-rigid joints. Taking this intermediate stiffness into account considerably complicates the structural calculations; however, it changes the distribution of internal forces within the structure, as well as the configuration of the joints themselves, which under certain circumstances can represent an economic advantage. From this approach arise the two questions that are the seed of this thesis: What happens when the concept of semi-rigid joints is applied to industrial buildings? Are there particular joint stiffness values for which the structure is optimized? Hence the main objective of the thesis: to determine the influence of joint stiffness on the cost of the gabled steel portal frames typically used in agro-industrial buildings. To achieve this goal, a methodology is proposed that essentially consists of studying a representative sample of frames under three load states: low, medium and high. Their spans range from 8 to 20 m and their column heights from 3.5 to 10 m, and their joints are allowed to adopt intermediate stiffness values. Combining the different possible configurations yields 46,656 cases to be studied. Given the economic aim of the work, particular attention has been paid to obtaining the execution costs of the different budget items that make up the structure, including those corresponding to the joints. Carrying out the structural calculations required software support, both existing and purpose-built, that allowed them to be automated in an optimization context. The results of the study consist essentially of a large amount of data that must be processed before it can be interpreted; this processing is based on systematic ordering, the application of statistical techniques, and graphical representation. It yields a catalogue of plots of the total cost of the structure as a function of the stiffness values of its joints, summary matrices of results, and mathematical models of the total cost as a function of the joint stiffnesses. The main conclusions are, first, that the total costs of the frames studied are minimal when the stiffness values of their joints are low, specifically from 5·10³ to 10·10³ kN·m/rad; and second, that using semi-rigid joints with a suitable combination of stiffnesses in these structures provides an average economic advantage of 18% over the two typologies normally used in this kind of building, namely fully fixed and two-pinned portal frames.
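The exhaustive sweep described above lends itself to a simple enumeration. Below is a minimal Python sketch of such a parameter sweep; every name, level count and the cost function are illustrative assumptions, since the abstract only states the ranges (spans 8 to 20 m, column heights 3.5 to 10 m, three load states, intermediate joint stiffnesses) and the total of 46,656 combinations (which happens to equal 6^6).

```python
from itertools import product

# Minimal sketch of the exhaustive parameter sweep described above.
# The discretizations and the cost function are illustrative assumptions:
# the abstract only gives the parameter ranges and the case total.
SPANS_M = (8, 12, 16, 20)             # assumed span levels (m)
HEIGHTS_M = (3.5, 6.0, 8.0, 10.0)     # assumed column-height levels (m)
LOADS = ("low", "medium", "high")     # the three load states from the abstract
STIFF_KNM_RAD = (5e3, 1e4, 1e5, 1e6)  # assumed joint-stiffness levels, kN*m/rad

def total_cost(span, height, load, k_eaves, k_ridge):
    """Stand-in for the automated structural analysis + cost take-off used
    in the thesis; a dummy model so that the sweep below is executable."""
    load_factor = {"low": 1.0, "medium": 1.5, "high": 2.0}[load]
    member_cost = load_factor * span * height        # dummy member cost
    joint_cost = 1e-3 * (k_eaves + k_ridge) ** 0.8   # dummy joint cost
    return member_cost + joint_cost

# Exhaustive sweep: record the cheapest joint-stiffness combination per frame.
for span, height, load in product(SPANS_M, HEIGHTS_M, LOADS):
    best = min(product(STIFF_KNM_RAD, repeat=2),
               key=lambda ks: total_cost(span, height, load, *ks))
    print(f"span={span} m, h={height} m, load={load}: "
          f"best (k_eaves, k_ridge) = {best}")
```

In the actual study, the dummy total_cost would be replaced by the automated structural analysis and execution-cost evaluation described in the abstract.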
Abstract:
Nowadays, many applications use digital images: for example, face recognition to detect and tag people in photographs, security control, and many smart-city applications such as speed control on roads and highways, or cameras at traffic lights to detect drivers running a red light. Digital images are also used in medicine, for instance X-rays, scanners, etc. These applications depend on the quality of the image obtained. A good camera is expensive, and the image obtained also depends on external factors such as light. For these applications to work properly, image enhancement is as important as, for example, a good face detection algorithm. Image enhancement can also be used on ordinary photographs, for pictures taken in bad light conditions, or simply to improve the contrast of an image. There are smartphone applications that let users apply filters or change the brightness, colour or contrast of their pictures. This project compares four different image enhancement techniques. After one of these techniques is applied to an image, the image makes better use of the whole available dynamic range. Some of the algorithms are designed for greyscale images and others for colour images. Matlab is used to develop the algorithms and present the final results. The algorithms are the Successive Mean Quantization Transform (SMQT), histogram equalization (using both the built-in Matlab function and a function implemented in this project), and the V transform. In conclusion, the histogram equalization algorithm is the simplest of all; it produces a wide spread of grey levels but is not suitable for colour images. The V transform algorithm is a good option for colour images; it is linear and requires little computational power. The SMQT algorithm is non-linear, insensitive to gain and bias, and can extract the structure of the data.
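For reference, here is a minimal NumPy sketch of grey-level histogram equalization, the simplest of the compared techniques (the project itself works in Matlab; this Python version is only illustrative):

```python
import numpy as np

def histogram_equalization(img):
    """Grey-level histogram equalization: remap intensities so that the
    cumulative distribution of the output is approximately uniform,
    spreading the image over the full available dynamic range."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic normalization: map the lowest occupied level to 0 and the
    # highest to 255 (unoccupied levels are clipped, never looked up).
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Example: equalize a synthetic low-contrast 8-bit image.
low_contrast = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = histogram_equalization(low_contrast)
```

The remapping stretches the occupied grey levels over the full 0 to 255 range, which widens the dynamic range of a greyscale image but, as the abstract concludes, does not carry over well to colour images.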
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem, to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way the number of noise sources present at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed confidence levels, which imply considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes implementations of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
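As an illustration of the incremental idea, consider the following minimal sketch of a greedy word-length search whose Monte-Carlo sample count grows as the confidence level is tightened. The noise model, the sample-count schedule and all names are assumptions made for the sake of the example; they do not reproduce the HOPLITE implementation.

```python
import random

# Sketch of an incremental Monte-Carlo word-length search: run the greedy
# descent with relaxed confidence (few samples) first, tighten it later.

def mc_noise_power(wordlengths, n_samples):
    """Stand-in Monte-Carlo estimate of output quantization noise power for
    a system whose signals are rounded to the given word-lengths."""
    total = 0.0
    for _ in range(n_samples):
        err = sum(random.uniform(-0.5, 0.5) * 2.0 ** (-b) for b in wordlengths)
        total += err * err
    return total / n_samples

def samples_for(confidence):
    # Assumed schedule: tighter confidence levels require more samples.
    return int(100 / (1.0 - confidence))

def greedy_wordlength_search(n_signals, noise_budget):
    wl = [24] * n_signals                      # start wide, trim greedily
    for confidence in (0.80, 0.95, 0.99):      # relaxed early, strict late
        improved = True
        while improved:
            improved = False
            for i in range(n_signals):
                if wl[i] <= 1:
                    continue
                wl[i] -= 1                     # try shaving one bit
                if mc_noise_power(wl, samples_for(confidence)) > noise_budget:
                    wl[i] += 1                 # budget violated: undo
                else:
                    improved = True
    return wl

print(greedy_wordlength_search(n_signals=4, noise_budget=1e-10))
```

The early passes, run at low confidence, do most of the trimming cheaply; only the final passes near the optimized solution pay the full sampling cost, which is where the reported speed-ups come from.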
Abstract:
In the visual cortex, as elsewhere, N-methyl-d-aspartate receptors (NMDARs) play a critical role in triggering long-term, experience-dependent synaptic plasticity. Modifications of NMDAR subunit composition alter receptor function, and could have a large impact on the properties of synaptic plasticity. We have used immunoblot analysis to investigate the effects of age and visual experience on the expression of different NMDAR subunits in synaptoneurosomes prepared from rat visual cortices. NMDARs at birth are comprised of NR2B and NR1 subunits, and, over the first 5 postnatal weeks, there is a progressive inclusion of the NR2A subunit. Dark rearing from birth attenuates the developmental increase in NR2A. Levels of NR2A increase rapidly (in <2 hr) when dark-reared animals are exposed to light, and decrease gradually over the course of 3 to 4 days when animals are deprived of light. These data reveal that NMDAR subunit composition in the visual cortex is remarkably dynamic and bidirectionally regulated by sensory experience. We propose that NMDAR subunit regulation is a mechanism for experience-dependent modulation of synaptic plasticity in the visual cortex, and serves to maintain synaptic strength within an optimal dynamic range.
Abstract:
A hierarchical order of gene expression has been proposed to control developmental events in hematopoiesis, but direct demonstration of the temporal relationships between regulatory gene expression and differentiation has been difficult to achieve. We modified a single-cell PCR method to detect 2-fold changes in mRNA copies per cell (dynamic range, 250–250,000 copies/cell) and used it to sequentially quantitate gene expression levels as single primitive (CD34+,CD38−) progenitor cells underwent differentiation to become erythrocytes, granulocytes, or monocyte/macrophages. Markers of differentiation such as CD34 or cytokine receptor mRNAs and transcription factors associated with their regulation were assessed. All transcription factors tested were expressed in multipotent progenitors. During lineage-specific differentiation, however, distinct patterns of expression emerged. SCL, GATA-2, and GATA-1 expression sequentially extinguished during erythroid differentiation. PU.1, AML1B, and C/EBPα expression profiles and their relationship to cytokine receptor expression in maturing granulocytes could be distinguished from similar profiles in monocytic cells. These data characterize the dynamics of gene expression accompanying blood cell development and define a signature gene expression pattern for specific stages of hematopoietic differentiation.
Abstract:
A single mossy fiber input contains several release sites and is located on the proximal portion of the apical dendrite of CA3 neurons. It is, therefore, well suited to exert a strong influence on pyramidal cell excitability. Accordingly, the mossy fiber synapse has been referred to as a detonator or teacher synapse in autoassociative network models of the hippocampus. The very low firing rates of granule cells [Jung, M. W. & McNaughton, B. L. (1993) Hippocampus 3, 165–182], which give rise to the mossy fibers, raise the question of how the mossy fiber synapse temporally integrates synaptic activity. We have therefore addressed the frequency dependence of mossy fiber transmission and compared it to associational/commissural synapses in the CA3 region of the hippocampus. Paired pulse facilitation had a similar time course, but was 2-fold greater for mossy fiber synapses. Frequency facilitation, during which repetitive stimulation causes a reversible growth in synaptic transmission, was markedly different at the two synapses. At associational/commissural synapses facilitation occurred only at frequencies greater than once every 10 s and reached a magnitude of about 125% of control. At mossy fiber synapses, facilitation occurred at frequencies as low as once every 40 s and reached a magnitude of 6-fold. Frequency facilitation was dependent on a rise in intraterminal Ca2+ and activation of Ca2+/calmodulin-dependent kinase II, and was greatly reduced at synapses expressing mossy fiber long-term potentiation. These results indicate that the mossy fiber synapse is able to integrate granule cell spiking activity over a broad range of frequencies, and this dynamic range is substantially reduced by long-term potentiation.
Reciprocal electromechanical properties of rat prestin: The motor molecule from rat outer hair cells
Abstract:
Cochlear outer hair cells (OHCs) are responsible for the exquisite sensitivity, dynamic range, and frequency-resolving capacity of the mammalian hearing organ. These unique cells respond to an electrical stimulus with a cycle-by-cycle change in cell length that is mediated by molecular motors in the cells' basolateral membrane. Recent work identified prestin, a protein with similarity to pendrin-related anion transporters, as the OHC motor molecule. Here we show that heterologously expressed prestin from rat OHCs (rprestin) exhibits reciprocal electromechanical properties as known for the OHC motor protein. Upon electrical stimulation in the microchamber configuration, rprestin generates mechanical force with constant amplitude and phase up to a stimulus frequency of at least 20 kHz. Mechanical stimulation of rprestin in excised outside-out patches shifts the voltage dependence of the nonlinear capacitance characterizing the electrical properties of the molecule. The results indicate that rprestin is a molecular motor that displays reciprocal electromechanical properties over the entire frequency range relevant for mammalian hearing.
Abstract:
In this study, we implement chronic optical imaging of intrinsic signals in rat barrel cortex and repeatedly quantify the functional representation of a single whisker over time. The success of chronic imaging for more than 1 month enabled an evaluation of the normal dynamic range of this sensory representation. In individual animals for a period of several weeks, we found that: (i) the average spatial extent of the quantified functional representation of whisker C2 is surprisingly large, 1.71 mm² (area at half-height); (ii) the location of the functional representation is consistent; and (iii) there are ongoing but nonsystematic changes in spatiotemporal characteristics such as the size, shape, and response amplitude of the functional representation. These results support a modified description of the functional organization of barrel cortex, where although a precisely located module corresponds to a specific whisker, this module is dynamic, large, and overlaps considerably with the modules of many other whiskers.
Abstract:
Phototransduction systems in vertebrates and invertebrates share a great deal of similarity in overall strategy but differ significantly in the underlying molecular machinery. Both are rhodopsin-based G protein-coupled signaling cascades displaying exquisite sensitivity and broad dynamic range. However, light activation of vertebrate photoreceptors leads to activation of a cGMP-phosphodiesterase effector and the generation of a hyperpolarizing response. In contrast, activation of invertebrate photoreceptors, like Drosophila, leads to stimulation of phospholipase C and the generation of a depolarizing receptor potential. The comparative study of these two systems of phototransduction offers the opportunity to understand how similar biological problems may be solved by different molecular mechanisms of signal transduction. The study of this process in Drosophila, a system ideally suited to genetic and molecular manipulation, allows us to dissect the function and regulation of such a complex signaling cascade in its normal cellular environment. In this manuscript I review some of our recent findings and the strategies used to dissect this process.
Abstract:
Recent evidence suggests that slow anion channels in guard cells need to be activated to trigger stomatal closing and efficiently inactivated during stomatal opening. The patch-clamp technique was employed here to determine mechanisms that produce strong regulation of slow anion channels in guard cells. MgATP in guard cells, serving as a donor for phosphorylation, leads to strong activation of slow anion channels. Slow anion-channel activity was almost completely abolished by removal of cytosolic ATP or by the kinase inhibitors K-252a and H7. Nonhydrolyzable ATP, GTP, and guanosine 5'-[gamma-thio]triphosphate did not replace the ATP requirement for anion-channel activation. In addition, down-regulation of slow anion channels by ATP removal was inhibited by the phosphatase inhibitor okadaic acid. Stomatal closures in leaves induced by the plant hormone abscisic acid (ABA) and malate were abolished by kinase inhibitors and/or enhanced by okadaic acid. These data suggest that ABA signal transduction may proceed by activation of protein kinases and inhibition of an okadaic acid-sensitive phosphatase. This modulation of ABA-induced stomatal closing correlated to the large dynamic range for up- and down-regulation of slow anion channels by opposing phosphorylation and dephosphorylation events in guard cells. The presented opposing regulation by kinase and phosphatase modulators could provide important mechanisms for signal transduction by ABA and other stimuli during stomatal movements.
Abstract:
VASP (vasodilator-stimulated phosphoprotein), an established substrate of cAMP- and cGMP-dependent protein kinases in vitro and in living cells, is associated with focal adhesions, microfilaments, and membrane regions of high dynamic activity. Here, the identification of an 83-kDa protein (p83) that specifically binds VASP in blot overlays of different cell homogenates is reported. With VASP overlays as a detection tool, p83 was purified from porcine platelets and used to generate monospecific polyclonal antibodies. VASP binding to purified p83 in solid-phase binding assays and the closely matching subcellular localization in double-label immunofluorescence analyses demonstrated that both proteins also directly interact as native proteins in vitro and possibly in living cells. The subcellular distribution, the biochemical properties, as well as microsequencing data revealed that porcine platelet p83 is related to chicken gizzard zyxin and most likely represents the mammalian equivalent of the chicken protein. The VASP-p83 interaction may contribute to the targeting of VASP to focal adhesions, microfilaments, and dynamic membrane regions. Together with our recent identification of VASP as a natural ligand of the profilin poly-(L-proline) binding site, our present results suggest that, by linking profilin to zyxin/p83, VASP may participate in spatially confined profilin-regulated F-actin formation.
Abstract:
This work presents the colorimetric characteristics of an OLED display, evaluating its luminance, dynamic range, primary constancy, additivity and channel interdependence, and checking whether a physical characterization method can be applied. The colour gamut reproducible by this device has also been evaluated against its associated theoretical colour solid. The OLED display shows good chromaticity constancy of its primaries but a low level of additivity, which means that the GOG characterization method cannot be applied directly and must be modified to ensure a good characterization. It has also been found that the real colour gamut is smaller than the theoretical gamut derived from the white point of the display. Nevertheless, this is a preliminary study that should be completed with the study of other OLED-based devices in order to properly establish their colorimetric properties.
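For context, here is a minimal sketch of the GOG (gain-offset-gamma) characterization model mentioned above. The per-channel parameters and the primary matrix are placeholders that a real characterization would fit from measurements; the final matrix step assumes additivity, which is precisely the property found to be weak in the tested display.

```python
import numpy as np

# GOG model: each channel's normalized digital count d is mapped through
# (gain*d + offset)**gamma, and the linear channel outputs are combined
# assuming additivity of the primaries.

def gog_channel(d, gain, offset, gamma):
    """Normalized digital count d in [0, 1] -> normalized linear output."""
    x = np.clip(gain * d + offset, 0.0, None)  # negative values clip to 0
    return x ** gamma

# Placeholder per-channel parameters (gain, offset, gamma) and a placeholder
# matrix of primary tristimulus values (columns: R, G, B at full drive).
PARAMS = {"R": (1.02, -0.02, 2.2), "G": (1.01, -0.01, 2.2), "B": (1.03, -0.03, 2.2)}
PRIMARIES_XYZ = np.array([[41.2, 35.8, 18.0],
                          [21.3, 71.5,  7.2],
                          [ 1.9, 11.9, 95.0]])

def digital_to_XYZ(rgb):
    """Assumes additivity: XYZ is the weighted sum of the primaries. For a
    display with low additivity, this direct form needs a correction."""
    scalars = [gog_channel(d, *PARAMS[c]) for d, c in zip(rgb, "RGB")]
    return PRIMARIES_XYZ @ np.array(scalars)

print(digital_to_XYZ([0.5, 0.5, 0.5]))
```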
Abstract:
Object tracking with subpixel accuracy is of fundamental importance in many fields, since it provides optimal performance at relatively low cost. Although there are many theoretical proposals that lead to resolution increments of several orders of magnitude, in practice this resolution is limited by the imaging system. In this paper we propose and demonstrate, through simple numerical models, a realistic limit for subpixel accuracy. The final result is that the maximum achievable resolution enhancement is connected with the dynamic range of the image, i.e., the detection limit is 1/2^N, where N is the number of bits of the detector. The results presented here may aid in the proper design of superresolution experiments in microscopy, surveillance, defense, and other fields.
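As a quick illustration of the stated limit, assuming the number of bits refers to the detector's quantization depth:

```python
# Detection limit 1/2**N as a function of detector bit depth N, as stated
# above. For an 8-bit camera the best achievable subpixel accuracy would be
# about 0.004 pixel, and each extra bit of dynamic range halves the limit.
for n_bits in (8, 10, 12, 14, 16):
    print(f"{n_bits:2d}-bit detector: detection limit = 1/{2**n_bits} "
          f"= {1 / 2**n_bits:.2e} pixel")
```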
Abstract:
Seasonal collections were made from 3 stations in a brackish lagoon near Kiel, Germany, from December 1964 to June 1967. In addition, 120 samples were taken in June 1966 to investigate the general pattern of distribution. Two species of the offshore fauna were found to dominate the lagoon at high population densities: Cribrononion articulatum and Miliammina fusca. The vegetation zone of the lagoon contains an assemblage of seven euryhaline arenaceous species, all of which had previously been recorded from different regions of the world. C. articulatum seems to prefer shallow water with a high daily range of water temperature (up to 30 °C). Population density and distribution show considerable differences between years. Size distribution curves of C. articulatum indicate that reproduction occurs mainly in spring, followed by growth in uniform populations. Growth ends after six months; most specimens either die in winter or reproduce the following spring, and only a smaller proportion reproduces in summer or autumn. Year-to-year differences of the observed magnitude make it difficult to calculate foraminiferal productivity in a lagoonal environment and call for seasonal observations over a period of at least 3 or 4 years.